Device Trust with step-ca, Google Cloud CAS, and SCEP: A Practical, Cloud-Ready Build-Out

I’m often asked how to stand up a device trust layer that scales from homelab to enterprise without the additional complexity of hardware security modules (HSMs). In this post, I’ll show you how to wire up step-ca (Smallstep’s open-source CA), Google Cloud Certificate Authority Service (CAS) as a managed signing backend, and SCEP for mass device enrollment. The result is a modern, automatable PKI that issues device certificates for clientAuth, mTLS, Wi-Fi, VPN, access gateways, and beyond.


Why this stack?

  • step-ca: A lightweight CA with batteries included—ACME, OIDC, SCEP, SSH CA, templates, audit logs, and more. It’s ideal as your “front-door” RA/CA service and policy brain.
  • Google Cloud CAS: A managed, audited CA that signs certificates on your behalf. Offload availability and compliance to Google while keeping your issuance logic under your control.
  • SCEP: A well-supported protocol for bulk device enrollment—especially useful for legacy or embedded systems, printers, network gear, and certain MDM agents.

This combo gives you:

  • Cloud scalability + low ops: Managed CA with step-ca’s simple deployment.
  • Security: Keep the signing CAs inside Google CAS; expose only step-ca to your fleet.
  • Automation: Scriptable bootstraps, ephemeral certs, and policy controls.

Architecture at a glance

Flow summary

  1. Devices talk SCEP to step-ca.
  2. step-ca acts as a Registration Authority, relaying/signing requests via CAS.
  3. SCEP returns a signed device certificate.
  4. Devices use certs for clientAuth challenges from the IdP/SSO provider or an access gateway before reaching protected apps.

Prerequisites

  • A Google Cloud project with CAS enabled and an active CA in an appropriate CA Pool.
  • A Linux host/VM/container to run step-ca (front-door RA/CA and SCEP endpoint).
  • DNS for your step-ca endpoint (public or private, as needed).
  • Firewall rules allowing inbound 443 to step-ca.
  • gcloud CLI and step CLI.

Installation docs for step-ca: https://smallstep.com/docs/step-ca/installation/


Step 1 — Create a service account for step-ca → CAS access

Create a service account that step-ca will use to interact with CAS:

gcloud iam service-accounts create step-cas-sa \
    --description "Step-CA Service Account" \
    --display-name "Step-CA Service Account"

Grant this SA appropriate CAS roles (for certificate issuance). In many deployments that’s at least roles/privateca.certificateRequester on the CA Pool. (Use the principle of least privilege.)
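A minimal role-binding sketch, using the same pool and location placeholders as the rest of this post:

gcloud privateca pools add-iam-policy-binding <Ca Pool> \
    --location <Location> \
    --member "serviceAccount:step-cas-sa@<project>.iam.gserviceaccount.com" \
    --role "roles/privateca.certificateRequester"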

Tip: You can utilize a certificate template within GCP CAS to further restrict the usage and options available to the service account when a certificate is requested.


Step 2 — Install step-ca and initialize with Cloud CAS as the RA

Create a working directory and initialize step-ca’s config to use Cloud CAS as the Registration Authority:

mkdir /etc/step-ca
export STEPPATH=/etc/step-ca
step ca init --name="CasPoC" \
    --deployment-type standalone \
    --remote-management \
    --provisioner="admin@example.com" \
    --ra=CloudCAS \
    --issuer="projects/<project>/locations/<location>/caPools/<Ca Pool>/certificateAuthorities/<ca ID>" \
    --dns="<FQDN>" \
    --address=":443"

What this does:

  • Creates /etc/step-ca/config/ca.json and related directories.
  • Sets CloudCAS as the RA, pointing at your CA (--issuer=projects/…/certificateAuthorities/<ca ID>).
  • Binds HTTPS on :443 (we’ll allow a non-root user to bind shortly).
  • Registers a default provisioner (here, admin@example.com) for CA management.

Tip: Provide the service account’s credentials to step-ca (for example via Application Default Credentials), or run step-ca on a Compute Engine instance with that service account attached, so Cloud CAS requests succeed.
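One way to do this is a key file exposed via Application Default Credentials; a sketch only (the key path is an assumption, and attached service accounts or workload identity are preferable to long-lived keys):

# Export a key for the service account and expose it to step-ca.
gcloud iam service-accounts keys create /etc/step-ca/.step-cas-sa.json \
    --iam-account step-cas-sa@<project>.iam.gserviceaccount.com

export GOOGLE_APPLICATION_CREDENTIALS=/etc/step-ca/.step-cas-sa.json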


Step 3 — Create a restricted system user and grant low-port bind

Run step-ca as a non-root user:

useradd step
passwd -l step
chown -R step:step /etc/step-ca

Allow binding to port 443 without root:

setcap CAP_NET_BIND_SERVICE=+eip /usr/bin/step-ca

This capability approach is safer than running as root.
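To confirm the capability took effect before starting the service (a quick check, assuming the binary lives at /usr/bin/step-ca):

# Should list cap_net_bind_service for the binary.
getcap /usr/bin/step-ca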


Step 4 — (Optional) Prepare an intermediate CA (CSR → CAS-signed)

We’ll generate an intermediate key and CSR locally, then have CAS sign it to establish our operational intermediate for issuance:

cat <<EOF >  /etc/step-ca/templates/rsa_intermediate_ca.tpl
{
  "subject": {{ toJson .Subject }},
  "issuer": {{ toJson .Subject }},
  "keyUsage": ["certSign", "crlSign"],
  "basicConstraints": {
    "isCA": true,
    "maxPathLen": 0
  }
  {{- if typeIs "*rsa.PublicKey" .Insecure.CR.PublicKey }}
    , "signatureAlgorithm": "SHA256-RSAPSS"
  {{- end }}
}
EOF

Generate the CSR and key:

step certificate create "SCEP Intermediate CA" \
    /etc/step-ca/certs/intermediate_ca.csr \
    /etc/step-ca/secrets/intermediate_ca_key \
    --template /etc/step-ca/templates/rsa_intermediate_ca.tpl \
    --kty RSA \
    --size 3072 --csr

Ask Cloud CAS to sign the CSR:

gcloud privateca certificates create CERT_ID \
    --issuer-pool <Pool> \
    --issuer-location <Location> \
    --csr /etc/step-ca/certs/intermediate_ca.csr \
    --cert-output-file /etc/step-ca/certs/intermediate_ca.crt \
    --validity "P1Y"

Update your ca.json so step-ca uses the newly minted intermediate:

        "root": "/etc/step-ca/certs/root_ca.crt",
        "federatedRoots": null,
        "crt": "/etc/step-ca/certs/intermediate_ca.crt",
        "key": "/etc/step-ca/secrets/intermediate_ca_key",

Notes
• Store /etc/step-ca/secrets on encrypted disk or a secrets-managed volume.
• Rotate the intermediate on a schedule (e.g., annually) and keep a CRL/OCSP strategy in place.
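Before pointing ca.json at the new files, it’s worth sanity-checking that the CAS-signed intermediate actually chains to the root you distribute to devices; a quick check, using the paths from above:

# Verify the intermediate against the root exported from CAS.
step certificate verify /etc/step-ca/certs/intermediate_ca.crt \
    --roots /etc/step-ca/certs/root_ca.crt

# Inspect validity dates, key usage, and path length constraints.
step certificate inspect /etc/step-ca/certs/intermediate_ca.crt --short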


Step 5 — Dry run the CA

Before opening the floodgates, do a dry run:

sudo -u step step-ca /etc/step-ca/config/ca.json

If it boots cleanly, you should see the HTTP listener and provisioners registered in logs.
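From another terminal, you can also ask the CA to report its own health; a quick check, assuming the FQDN and root path used above:

# Expects {"status":"ok"} from the /health endpoint.
step ca health --ca-url https://<FQDN> --root /etc/step-ca/certs/root_ca.crt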


Step 6 — Enable SCEP for device enrollment

Add a SCEP provisioner to your existing CA service. We’ll set challenge credentials and certificate lifetimes:

step ca provisioner add poc_devicetrust \
    --type SCEP --challenge "<redacted>" \
    --x509-min-dur=24h \
    --x509-max-dur=8760h \
    --x509-default-dur=1080h \
    --encryption-algorithm-identifier 2 --admin-name step

A few practical notes:

  • Challenge: Keep it secret (vault, KMS, or MDM payloads). Consider migrating to SCEP with client-side RA or EST for stronger auth if your device ecosystem supports it.
  • Durations: Default shown is 45 days (1080h). Short-lived certs reduce revocation surface.
  • Algorithm Identifier 2: This sets the SCEP Encryption Algorithm Identifier (commonly RSA/3DES/AES variations). Keep this consistent with your device agents.
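To confirm the provisioner registered, you can query the CA’s provisioner listing; a quick check (the endpoint is served by step-ca, and grep is just for readability):

# The new SCEP provisioner should appear in the CA's provisioner list.
curl -sk https://<FQDN>/provisioners | grep -i poc_devicetrust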

Step 7 — (Workaround) Clean up ca.json authorities section if needed

Some versions/paths require removing the cloudcas entry from the authorities section. If you see startup errors or odd RA behavior, adjust and restart:

sudo vi /etc/step-ca/config/ca.json

Then relaunch:

sudo -u step step-ca /etc/step-ca/config/ca.json

Note: SCEP can operate with just the local Intermediate CA being used to sign device cert requests. We don’t need direct access to the Root CA or subordinate CAs, so they can be isolated within CAS.


Step 8 — Bootstrap devices and request a cert via SCEP

Once your root CA is trusted on the device (via MDM, config management, or manual import), request a certificate with a SCEP client. Example:

scepclient -private-key client.key \
    -server-url=https://<Domain/IP>/scep/poc_devicetrust \
    -challenge=<Redacted> \
    -dnsname "Lab-PC.local" -cn "Lab-PC" \
    -country "US" -organization "Lab" -ou "Device Trust"

This will:

  • Pull the CA chain from GetCACert
  • Submit a PKCSReq containing your CSR and challenge
  • Receive a signed certificate in CertRep
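To double-check what was issued, inspect the returned certificate; a sketch assuming the client wrote the signed cert to client.pem (output paths vary by SCEP client):

# Confirm subject, SANs, EKUs, and the expected lifetime.
step certificate inspect client.pem --short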

SCEP Enrollment Sequence

Hardening, Operations, and Best Practices

1) TLS and network posture

  • Put step-ca behind a reverse proxy or L7 load balancer for WAF/DoS controls.
  • Consider utilizing a URL map and/or HTTP targets to restrict access to only your provisioners’ endpoints.
  • Use mTLS for internal admin APIs and restrict management endpoints by IP/VPN.

2) Identity for step-ca

  • Run as the dedicated step user (as above) and consider systemd hardening (ProtectSystem, NoNewPrivileges, PrivateTmp, AmbientCapabilities=CAP_NET_BIND_SERVICE); a sample unit is sketched after this list.
  • Keep /etc/step-ca on a read-only or append-only partition where possible; separate secrets onto encrypted volumes.
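A minimal unit along those lines might look like the following; this is a sketch, not a drop-in file, and it assumes the binary and config paths used earlier in this post:

# /etc/systemd/system/step-ca.service (sketch)
[Unit]
Description=step-ca device trust CA
After=network-online.target
Wants=network-online.target

[Service]
User=step
Group=step
Environment=STEPPATH=/etc/step-ca
ExecStart=/usr/bin/step-ca /etc/step-ca/config/ca.json
Restart=on-failure
# Hardening
NoNewPrivileges=true
ProtectSystem=strict
ReadWritePaths=/etc/step-ca
PrivateTmp=true
AmbientCapabilities=CAP_NET_BIND_SERVICE
CapabilityBoundingSet=CAP_NET_BIND_SERVICE

[Install]
WantedBy=multi-user.target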

3) Secrets management

  • Store the SCEP challenge in a secret manager and template it into device configs at enrollment time (see the sketch after this list).
  • If using multiple SCEP realms (e.g., printers vs. laptops), separate provisioners with distinct challenges and policies.
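As one example of the first point, if the challenge lives in Google Secret Manager (the secret name scep-challenge is an assumption):

# Fetch the current SCEP challenge at enrollment/templating time instead
# of hard-coding it into device configs or images.
CHALLENGE=$(gcloud secrets versions access latest --secret=scep-challenge)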

4) Short-lived certs + automation

  • Favor short lifetimes (7–45 days) and auto-renew via SCEP or ACME where supported.
  • For services (ingress gateways, sidecars), consider ACME provisioners in step-ca instead of SCEP.

5) Revocation and status

  • Enable OCSP and/or regularly published CRLs. Some gear only understands CRLs; others can do OCSP.
  • Document how to revoke by CN/serial and how MDM/CM tooling redistributes CRL/OCSP endpoints; a revocation sketch follows this list.
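For reference, revocation can be driven from step-ca or directly in CAS; a rough sketch only (behavior in RA mode depends on your step-ca version and revocation configuration):

# Revoke by serial via step-ca (passive revocation in the CA database).
step ca revoke <serial-number>

# Or revoke the corresponding CAS certificate record.
gcloud privateca certificates revoke \
    --certificate <CERT_ID> \
    --issuer-pool <Pool> \
    --issuer-location <Location>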

6) Names and OIDs

  • Standardize Subject and SANs. For devices, prefer DNS SANs and URNs (e.g., urn:device:asset:1234) over stuffing identifiers in CN.
  • Use policy OIDs or Extended Key Usages (EKUs) that match your relying parties (ClientAuth, ServerAuth, Wi-Fi EAP-TLS, IPsec, etc.); a leaf template sketch follows.
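For instance, a device leaf template along these lines could be attached to a provisioner; a sketch only, using the same step template format shown for the intermediate earlier:

{
  "subject": {{ toJson .Subject }},
  "sans": {{ toJson .SANs }},
  "keyUsage": ["digitalSignature", "keyEncipherment"],
  "extKeyUsage": ["clientAuth"]
}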

7) Auditing

  • Step-ca logs each issuance; forward logs to a SIEM with context (device inventory ID, enrollment workflow ID).
  • Cloud CAS has control plane logs—monitor for volume spikes or unusual issuers.

Validating the build (quick checks)

  • Health: curl -ik https://<FQDN>/health (if you expose a health endpoint or use LB health checks).
  • SCEP reachability: curl -I https://<FQDN>/scep/poc_devicetrust should not 404.
  • CAS connectivity: Attempt a test cert request; if it fails, check service account auth and CAS IAM.
  • Chain trust: On a device, ensure the root (and intermediate, if needed) are in the appropriate trust store. Many SCEP clients install the chain automatically after GetCACert.

Troubleshooting tips

  • Challenge mismatch: 401/failed enrollment—verify SCEP challenge and transport (hidden in MDM payloads).
  • CAS quota or IAM: CAS may rate-limit or block by IAM; review Google Cloud audit logs.
  • Template oddities: If you need custom subject or SAN logic per device group, use step templates and multiple SCEP provisioners.

Where to go from here

  • Add ACME for servers and gateways while keeping SCEP for legacy endpoints.
  • Introduce device attestation checks before issuance (e.g., callbacks/webhooks from step-ca to your SCEP/ACME services).
  • Build up resilience by adding device posture checks with existing VPN, EDR/XDR, and other security agents.
  • Automate intermediate rotation with change windows and controlled CRL/OCSP updates.

Kubernetes Harvester to Gather Credentials with Limited Access

Project URL: https://github.com/sleventyeleven/Kubernetes-Harvester

Kubernetes Harvester Example Run

What is Kubernetes Harvester?

Harvester is a new Python-based project that attempts to leverage existing access in order to gather potentially sensitive information. It’s designed to leverage either a user’s credentials or the default access granted to a pod via automountServiceAccountToken, which I wrote about recently. The harvester.py script currently targets pod container environment variables, container manifest environment variables, and config map entries used as environment variables, looking for potential credentials.

Why Create A Credential Harvester

The default admission controls in many Kubernetes implementations apply a read/view policy to newly created users. However, custom policies, admission controllers, and operators have become more commonplace. What’s more troublesome is the read permission given to the automountServiceAccountToken by default. Without adjusting or disabling service account tokens, compromised containers could effectively read all pod specs in all namespaces. With access to all pod specs, an attacker could potentially gather credentials or other sensitive information. Harvester is a tool that attempts to help automate the review process.
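Before reaching for the tool, you can spot-check how much the default service account is allowed to see; a sketch, assuming the usual default namespace and account names:

# "yes" means a compromised pod using this service account could
# enumerate pod specs across the cluster.
kubectl auth can-i list pods --all-namespaces \
    --as=system:serviceaccount:default:default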

How Kubernetes Harvester Works

The harvester.py script utilizes either the automountServiceAccountToken mounted within a given container or the standard user credentials within the kube config file (~/.kube/config). The Kube API server is then queried for sensitive information within each pod’s spec using the following steps (a manual sketch of the first few appears after the list).

  1. Use access to request pod specs for all namespaces within the cluster.
  2. Parse all pod specs to map and dedupe container information.
  3. Review each container's environment variables for sensitive values.
  4. Review each config map entry mapped to container environment variables for sensitive values.
  5. Attempt to pull each container image and review the manifest environment variables for sensitive values.
  6. Attempt to request authentication tokens from the internal metadata API for each of the major cloud providers.
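A manual sketch of the first few steps, from inside a pod with a mounted token (assumes curl is present in the container and the API server is reachable at the default service address):

# Read the auto-mounted service account token and CA bundle.
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
CACERT=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt

# Pull pod specs for all namespaces and grep for likely credential material.
curl -s --cacert "$CACERT" -H "Authorization: Bearer $TOKEN" \
    https://kubernetes.default.svc/api/v1/pods \
    | grep -iE 'pass(word)?|secret|token|api[_-]?key'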

Other Resources:

  • Introduction to Kubernetes – A free introductory course diving into Kubernetes as a tool for containerized infrastructure. It’s a great place to begin if you’re just getting started with Kubernetes.
  • The Linux Foundation’s Official Course – This is the most robust general-knowledge course I’ve seen. If you want to learn Kubernetes and how to do almost anything with it, get the CKA + CKAD combo package.

Sometimes You Just Have to Proxy Your Socks Off

Problem

Sometimes during assessments, sensitive systems are significantly segmented from other networks. Therefore, it’s very important for penetration testers to know how to proxy their socks off in order to move across networks.

Solution

To gain access to other networks, whether it’s the internet or a protected subnet, we can use PuTTY on Windows and the native ssh client on Linux to perform port forwarding and create SOCKS proxies to bypass access controls.

Proxy Your Socks Off - Web server Post Exploitation with SSH Tunnels and Socks Proxy

Proxy Caveats

SOCKS proxies only work for TCP traffic and with applications that support using a transparent proxy. Applications that use their own proxy settings, require forward secrecy, or check session integrity likely won’t function correctly.

All ports from 1-1023 require administrative rights to allocate on both Windows and Linux systems.

Port 0 is used to represent a randomly assigned port number on both Windows and Linux systems.

How to Proxy Your Socks Off

Proxy Traffic in Windows

In Windows, simply open PuTTY and enter the IP address you want to connect to as the Host Name/IP address.

Proxy your socks off - configure Putty SSH connections

Next, we have to tell PuTTY that we want it to open a port on the localhost that will be used to forward all traffic to our remote host. To do that, go to the Connection -> SSH -> Tunnels section, add a source port, choose the Dynamic option, and click the Add button.

Proxy your socks off - configure Putty SSH for dynamic port forwarding options

At this point you can click the open button and authenticate as if it were a normal SSH connection. Just be sure to leave the terminal open once authenticated, to ensure traffic is being passed from the local port to the remote host.

To tell Windows to use the SOCKS proxy, open Internet Options from the Control Panel or the Start Menu search. Then go to the Connections tab and open LAN Settings.

Proxy your socks off - configure Control Panel Internet Properties for Socks Proxy

Once LAN Settings opens, select the “Use a proxy server for your LAN” check box and click the Advanced button.

Proxy your socks off - configure Windows Lan Settings for Socks Proxy

In the SOCKS box, add localhost or 127.0.0.1 and the port you set as dynamic in PuTTY. Then click OK three times to save all the settings.

Proxy your socks off - configure Windows Advanced Proxy Settings for Socks Proxy

Proxy Traffic in Linux

If you need to proxy your Kali system, the process is fairly similar. Start by using the ssh client to dynamically forward traffic from a local port. This can be done with a command similar to the following, where 9050 is our dynamic port.

ssh -NfD 9050 root@159.246.29.206

Next we need to tell proxychains where to send traffic from our programs. This can be set globally by using a command like the following.

echo "socks4\t127.0.0.1\t9050" >> /etc/proxychains.conf

To run an application through the SOCKS proxy, simply prepend it with the proxychains command, like the following.

proxychains iceweasel

There is no built-in means to set up a system-wide SOCKS proxy. However, the BadVPN package includes a tool, tun2socks, that can tunnel all traffic over a local SOCKS proxy.
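If you do need something closer to system-wide tunneling, badvpn-tun2socks can push a TUN interface’s traffic into the SOCKS proxy; a rough sketch (the interface addressing and routed network are assumptions you’d adapt):

# Create a TUN device and hand it to tun2socks, pointing at the proxy.
ip tuntap add dev tun0 mode tun
ip addr add 10.0.0.1/24 dev tun0
ip link set tun0 up
badvpn-tun2socks --tundev tun0 \
    --netif-ipaddr 10.0.0.2 --netif-netmask 255.255.255.0 \
    --socks-server-addr 127.0.0.1:9050

# Route the target network via the tun2socks gateway.
ip route add 10.10.0.0/16 via 10.0.0.2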

Proxy Your Socks Off with Metasploit

Sometimes, while doing an assessment, you may even want to run tools such as nmap or SQL Server Management Studio (ssms.exe) over an established shell. Metasploit has an auxiliary module (auxiliary/server/socks4a) that can be used to create a SOCKS4 proxy on an existing session.

However, before running the SOCKS proxy, we need to tell Metasploit how to route traffic to each of our shell’s networks. This can be done manually with the route command or, if your session is on a Windows host, with the autoroute module (post/windows/manage/autoroute).

To add a route manually, you can use the built-in route command with options similar to the following.

route add 10.0.0.0 255.255.255.0 1

To add routes with autoroute, either use the post module or run autoroute from a meterpreter shell. For the autoroute module (post/windows/manage/autoroute), just set the session ID and run it. For autoroute from meterpreter, use a command similar to the following.

run autoroute -s 10.0.0.0

Once routes are established within metasploit to your target networks, you can run the socks proxy module (auxiliary/server/socks4a) and note the SRVPORT.

Using Proxychains to Proxy Traffic through Metasploit Meterpreter

Next we need to tell proxychains which port to send traffic to within the global configuration file (/etc/proxychains.conf), just like in the Linux example above. There should be a line like “socks4 127.0.0.1 1080” at the bottom of the file; change the port 1080 to whatever your SRVPORT was in Metasploit.

Once the configuration file is updated, proxychains can be used to issue commands through Metasploit shell(s), like the following nmap example.

proxychains nmap -v -sT -Pn 10.0.0.0/24

Note that a full-connect scan (-sT) is required here; raw SYN scans won’t traverse a SOCKS proxy.

If we want to make this SOCKS proxy available to a Windows host for programs like SQL Server Management Studio, perform a local port forward to the SOCKS port on the Linux system. To do this we can use PuTTY and follow steps similar to those presented above.

Create a local port forward from a port on our Windows system to the local SOCKS port on the Linux system with PuTTY. Start by allocating a source port for the connection on the local system and forward it to a destination of 127.0.0.1:1080, where 1080 is your Metasploit SRVPORT.

Proxy your socks off - configure Putty SSH to allocate local port to connect to remote Socks Proxy

We can then configure a system-wide proxy by adding our forwarded port as the SOCKS port, instead of using a local SOCKS proxy.

Proxy your socks off - configure Windows Advanced Proxy Settings for forwarded Socks Proxy

Once those settings are changed, we should be able to use the majority of our tools within Windows without issue.

Using SSH to Provide Remote System Internet Access via local Socks Proxy

An SSH tunnel can be used to forward traffic from your local system to a port on a remote system. This can be done in Linux by replacing the -L option with -R, or in PuTTY by choosing the Remote option under Tunnels instead of Local. For example, if you wanted to share your local SOCKS proxy with a remote system to provide internet access, PuTTY can be used with a remote forward like the following.

Proxy your socks off - configure Putty SSH to allow remote host internet access via a remote port forward to a local socks proxy
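On Linux, the equivalent remote forward looks like this (a sketch; 1080 on the remote side is an arbitrary choice, and 9050 is the local dynamic port from earlier):

# Expose our local SOCKS proxy on port 1080 of the remote system.
ssh -N -R 1080:127.0.0.1:9050 user@remote.example.com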

Using Compromised Linux Webserver to Access Internal Network and Database

It’s also worth noting that SSH port forwarding is performed at the network socket level and does not require an interactive session to be established; only valid authentication is required. For instance, say you wanted to log into a restricted database behind a webserver, but you only have access to the webserver account. The webserver user is not allowed to log into the server interactively by default, but that doesn’t mean it can’t authenticate. In many cases SSH can be used as described in my post on SSH for post exploitation to get around limited user shells. A local-forward sketch follows.
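A sketch of that local forward, assuming MySQL on its default port and a database host named dbserver that is reachable from the webserver (both names are illustrative):

# No shell is requested (-N); only authentication and the forward.
ssh -N -L 3306:dbserver:3306 www-data@webserver.example.com
# Then point your database client at 127.0.0.1:3306.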

Using Linux Native Tools to Proxy Your Socks Off

Tools natively built into Windows and Linux can also be used to perform port forwarding. Just note that this method simply makes a port-to-port translation and does not manipulate the traffic in any way. Netcat (nc) is found in almost every Linux distribution and can be used to easily perform port forwarding with commands similar to the following.

First we have to make a named pipe so that responses from the server aren’t dumped to standard out.

mkfifo backpipe

Then we can use a command similar to the following to send traffic from port 8080 on the localhost to a remote host on a different port, utilizing the named pipe. This could help get around a firewall or help send traffic to another system to be caught by another port translation or process.

nc -l 8080 0<backpipe | nc example.com 80 1>backpipe

Similarly, the netsh command (Windows’ built-in command-line network configuration tool) can be used to create a local port forward as well. In this case we can follow the same example and create a port translation from localhost port 8080 to example.com on port 80.

netsh interface portproxy add v4tov4 listenport=8080 listenaddress=127.0.0.1 connectport=80 connectaddress=example.com

Windows 7 and above will likely require administrative privileges to make these netsh changes. But you can likely still utilize the Windows version of nc or netcat to redirect traffic all the same.