Harden Device Trust with Token Permissions: Preventing Subversion with GitHub Personal Access Tokens

Device Trust is rapidly becoming a cornerstone of modern security strategies, particularly within software development lifecycles. By ensuring that code changes are initiated from trusted devices, organizations can significantly reduce the risk of supply chain attacks and unauthorized modifications. However, a critical vulnerability often overlooked lies in the potential for users to bypass these controls using Personal Access Tokens (PATs). This blog post will delve into how attackers can leverage PATs to subvert Device Trust mechanisms, and more importantly, how you can harden Device Trust with token permissions through robust management practices.

Why PATs Are a Threat to Device Trust

| Aspect | Traditional Device Trust (Web UI) | PAT-Based Access |
| --- | --- | --- |
| Authentication point | Browser session tied to SSO and device compliance checks | Direct API call with a static secret |
| Visibility | UI logs, conditional access policies | API audit logs only; easily overlooked |
| Revocation latency | Immediate when a device is non-compliant | Requires token rotation or explicit revocation |
| Scope granularity | Often coarse (read/write) per repository | Fine-grained scopes (e.g., pull_request:write, repo:status) |

A PAT can be generated with any combination of scopes that the user’s role permits. When a developer creates a token for automation, they may inadvertently grant more privileges than needed, especially if the organization does not enforce fine‑grained tokens and approvals. The result is a secret that can be used from any machine, managed or unmanaged, effectively sidestepping Device Trust enforcement.
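When triaging tokens at scale, it helps to know which kind of credential you are looking at. A small helper can classify tokens by prefix (the prefix list follows GitHub's published token formats, but may age):

```shell
# Classify a GitHub token by its prefix -- useful when reviewing secrets
# found in configs or credential stores. Prefixes per GitHub's token-format
# announcements; treat the list as a snapshot, not a guarantee.
token_kind() {
  case "$1" in
    ghp_*)        echo "classic PAT" ;;
    github_pat_*) echo "fine-grained PAT" ;;
    gho_*)        echo "OAuth app token" ;;
    ghs_*)        echo "app installation token" ;;
    *)            echo "unknown" ;;
  esac
}

# Example: token_kind "ghp_XXXX..."  prints "classic PAT"
```

Classic tokens (`ghp_`) are the ones most likely to carry broad scopes and deserve the closest scrutiny.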

Real‑World Consequence

Imagine an attacker who gains access to a developer’s laptop after it is stolen. They locate the file ~/.git-credentials (or a credential helper store) and extract a PAT that includes pull_request:write. Using this token they can:

  1. Pull the latest code from any repository.
  2. Approve a malicious pull request without ever opening the controlled web UI.
  3. Merge the PR, causing malicious code to flow into production pipelines.

Because the action occurs via the API, the organization’s monitoring solution sees no violation: no unmanaged device ever attempted to open the GitHub website. The only evidence is an audit‑log entry showing that a token performed the operation, which is easy to miss if logging and alerting are not tuned for PAT usage.
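If you export the organization audit log as JSON lines, even a crude filter can surface pull-request activity performed with programmatic access. This is a sketch only: the `action` and `programmatic_access_type` field names below are illustrative and should be checked against your own export before building alerts on them.

```shell
# Filter an exported audit log (one JSON object per line) for pull-request
# events that record a programmatic access type (i.e., token-driven activity).
# Field names are assumptions based on common audit-log exports -- verify
# against your organization's actual export format.
pat_activity() {
  grep -E '"action":"pull_request' "$1" \
  | grep -E '"programmatic_access_type"'
}

# Usage: pat_activity audit-log-export.jsonl
```

Feeding the matches into your SIEM as a dedicated signal makes token-driven approvals stand out instead of drowning in general API noise.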

Attack Flow: Bypassing Device Trust with PATs

Let’s illustrate how an attacker might exploit this vulnerability using a GitHub example. This flow can be adapted to other platforms like GitLab, Azure DevOps, etc., but the core principles remain consistent.

Explanation:

  1. Attacker Obtains Compromised PAT: This could happen through phishing, malware, credential stuffing, or insecure storage practices by the user.
  2. GitHub API Access: The attacker uses the stolen PAT to authenticate with the GitHub API.
  3. Forge Pull Request: The attacker creates a pull request containing malicious code changes.
  4. Approve Pull Request (Bypass Device Trust): Using the API, the attacker approves the pull request without going through the standard Device Trust verification process. This is the critical bypass step.
  5. Merge Changes to Main Branch: The approved pull request is merged into the main branch, potentially introducing malicious code into production.

The “Device Trust Workflow” subgraph shows the intended secure path. Notice how the attacker completely circumvents this path by leveraging the PAT directly against the API.

Leveraging the gh CLI and the GitHub API with PATs

Attackers or savvy users don’t need sophisticated tools to exploit PATs. The readily available gh CLI (GitHub’s official command-line interface) or simple scripting with curl can be used effectively.

Approving a Pull Request with the gh CLI:

Assuming the stolen PAT is stored in the environment variable GH_TOKEN, which the gh CLI reads automatically:

# Export the stolen token into an environment variable (gh also stores tokens in ~/.config/gh/hosts.yml)
export GH_TOKEN=ghp_XXXXXXXXXXXXXXXXXXXXXXXXXXXX

# Authenticate gh with the token (no interactive login required)
gh auth status  # verifies that the token is valid

# List open pull requests for a target repository
gh pr list --repo AcmeCorp/webapp --state open

# Approve and merge a specific PR (ID = 42)
gh pr review 42 --repo AcmeCorp/webapp --approve --body "Looks good to me!"
gh pr merge 42 --repo AcmeCorp/webapp --merge 

All of these actions are performed via the GitHub API behind the scenes. These simple commands bypass any Device Trust checks that would normally be required when approving a pull request through the web interface.

Approving a Pull Request with curl:

# Variables
TOKEN="ghp_XXXXXXXXXXXXXXXXXXXXXXXXXXXX"
OWNER="AcmeCorp"
REPO="webapp"
PR_NUMBER=42

# Submit an approval review
curl -X POST \
  -H "Authorization: token $TOKEN" \
  -H "Accept: application/vnd.github+json" \
  https://api.github.com/repos/$OWNER/$REPO/pulls/$PR_NUMBER/reviews \
  -d '{"event":"APPROVE"}'

# Merge the pull request
curl -X PUT \
  -H "Authorization: token $TOKEN" \
  -H "Accept: application/vnd.github+json" \
  https://api.github.com/repos/$OWNER/$REPO/pulls/$PR_NUMBER/merge \
  -d '{"merge_method":"squash"}'

If the token carries the pull_request:write permission (or the broad repo scope on a classic PAT), both calls succeed, and the attacker has merged malicious code without ever interacting with the controlled web flow.
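Defenders can turn the same API against the problem. For classic PATs, GitHub echoes the token's granted scopes in the x-oauth-scopes response header, so a quick header check reveals over-scoped tokens (fine-grained tokens do not report scopes this way):

```shell
# Print the scopes granted to a classic PAT by reading the x-oauth-scopes
# response header from an authenticated API call.
parse_scopes() {
  tr -d '\r' | awk -F': ' 'tolower($1) == "x-oauth-scopes" { print $2 }'
}

check_scopes() {
  curl -s -I \
    -H "Authorization: token $1" \
    https://api.github.com/user \
  | parse_scopes
}

# Usage: check_scopes "$GH_TOKEN"
# A token that prints "repo" (or a long scope list) is a candidate for
# replacement with a narrower fine-grained token.
```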

Hardening Device Trust: Token Management Strategies

The key to mitigating this risk lies in proactive token management and granular permission control. Here’s a breakdown of strategies you can implement:

Disable PATs Where Possible:

This is the most secure approach, but often impractical for organizations heavily reliant on automation or legacy integrations. However, actively identify and eliminate unnecessary PAT usage. Encourage users to migrate to more secure authentication methods like GitHub Apps where feasible.

GitHub now offers Fine-Grained Personal Access Tokens (FG-PATs) which allow you to scope permissions down to specific repositories and even individual resources within those repositories. This is a significant improvement over classic PATs, but still requires careful management.

Implement Organization-Level Policies:

GitHub provides features for managing PAT usage at the organization level:

  • Require FG-PATs: Enforce the use of Fine-Grained Personal Access Tokens instead of classic PATs.
  • Restrict Token Creation: Limit who can create PATs within the organization. Consider restricting creation to specific teams or administrators.
  • Require Administrator Approval: Require an administrator to approve each token and its requested scopes before it becomes usable.
  • Token Expiration Policies: Set a maximum expiration time for all PATs. Shorter lifespans reduce the window of opportunity for attackers if a token is compromised.
  • IP Allowlisting (GitHub Enterprise): Restrict PAT usage to specific IP address ranges, limiting access from known and trusted networks.

GitHub’s fine-grained personal access tokens (FG-PATs) let administrators define which repositories a token can access and what actions it may perform. To require FG-PATs, restrict access via classic personal access tokens under your organization’s Personal Access Tokens settings.

Focus on Repository-Level Scopes and Require Approval:

In addition to restricting the use of classic Personal Access Tokens, prefer GitHub Apps and/or OAuth apps for programmatic access, as they offer a far more robust set of configuration options and controls for autonomous workloads. If you still need fine-grained personal access tokens, limit them to a targeted set of repositories, require administrator approval, and set a maximum expiration date to limit exposure.

This provides more granular control over permissions and allows for active review/approval:

  • Restrict pull_request:write Permission: The pull_request:write permission is particularly dangerous as it allows users to approve pull requests without Device Trust verification. Consider removing this permission from PATs unless absolutely necessary.
  • Least Privilege Principle: Grant only the minimum permissions required for each PAT. Avoid broad “repo” scope access whenever possible. FG-PATs make this much easier.
  • Code Owners Review: Enforce code owner reviews on all pull requests, even those approved via API. This adds an extra layer of security and helps detect malicious changes.
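To make the Code Owners bullet concrete, here is a minimal .github/CODEOWNERS sketch (the org and team names are placeholders):

```
# Require the security team on workflow and deployment changes
/.github/workflows/   @AcmeCorp/security-team
/deploy/              @AcmeCorp/platform-team

# Default owners for everything else
*                     @AcmeCorp/webapp-maintainers
```

Pair this with a branch protection rule that requires review from code owners; without that rule, the file is advisory and an API-driven approval still satisfies the merge requirements.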

Token Auditing and Monitoring:

  • Regularly Review PAT Usage: Identify unused or overly permissive tokens.
  • Monitor API Activity: Look for suspicious activity, such as unexpected pull request approvals or changes made outside of normal working hours. GitHub provides audit logs that can be integrated with SIEM systems.
  • Automated Scanning: Use tools to scan code repositories and identify hardcoded PATs.
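As a first pass before deploying a dedicated scanner, a grep over the working tree catches classic PAT literals. The regex below covers GitHub's published classic-token prefixes; purpose-built tools such as gitleaks or trufflehog cover far more patterns and also scan git history:

```shell
# Quick-and-dirty scan for classic GitHub token literals in a directory tree.
# Matches the ghp_/gho_/ghu_/ghs_/ghr_ prefixes followed by 36 token chars;
# fine-grained tokens (github_pat_...) use a different, longer format.
scan_for_pats() {
  grep -rnE 'gh[pousr]_[A-Za-z0-9]{36}' "$1" --exclude-dir=.git || true
}

# Usage: scan_for_pats ./my-repo
```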

User Education:

Educate developers about the risks associated with PATs and best practices for secure token management, including:

  • Never commit PATs to source control.
  • Use strong passwords and multi-factor authentication.
  • Rotate tokens regularly.
  • Report any suspected compromise immediately.

Conclusion

Device Trust is a vital security component, but it’s not a silver bullet. Attackers will always seek the path of least resistance, and PATs represent a significant vulnerability if left unmanaged. By implementing robust token management strategies – including disabling unnecessary PATs, enforcing granular permissions, and actively monitoring API activity – you can harden Device Trust with token permissions and significantly reduce your risk of supply chain attacks. Remember that security is a layered approach; combining Device Trust with strong token controls provides the most comprehensive protection for your software development lifecycle.

Creating a Simple Device Trust Gateway Using Device Certificates

In the evolving world of cybersecurity, identity-based access alone is no longer sufficient. The modern Zero Trust model mandates that access decisions consider not just the user but also the device. A user might be who they claim to be, but what if they’re logging in from a compromised machine or a jailbroken phone?

That’s where a device trust gateway comes in—a simple, scalable method to enforce access controls based on both user identity and device posture. Surprisingly, this doesn’t require complex architecture. In fact, with just a few lines of configuration in common web proxies like NGINX, you can create a robust checkpoint to validate device certificates before allowing application access.

In this post, we’ll explore how to build a simple yet effective device trust gateway using web proxy configurations, why it matters, and how it enhances your Zero Trust posture.

What Is a Device Trust Gateway?

A device trust gateway is a proxy layer that sits in front of applications and checks whether the connecting device presents a valid, cryptographically signed certificate. This certificate—typically issued by a corporate Certificate Authority (CA)—acts as a machine identity, verifying that the device is registered, managed, and secure.

By validating the certificate before allowing a user session to proceed, organizations can enforce stronger controls such as:

  • Allowing access only from corporate-managed endpoints
  • Blocking jailbroken or unmanaged devices
  • Issuing short-lived access tokens only after successful posture checks

This approach complements MFA and SSO. Even if credentials are phished or stolen, an attacker can’t authenticate without access to a trusted device.

How It Works

  1. Device Enrollment: Devices are provisioned with client certificates from an internal CA.
  2. Proxy Enforcement: A reverse proxy (like NGINX or Apache) is configured to validate client certificates.
  3. Access Control: Only clients presenting valid certificates can reach upstream applications or IdPs (Identity Providers).
  4. Logging and Auditing: All device certificate checks are logged for forensics and compliance.
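The enrollment step above can be sketched with plain openssl. The names and lifetimes are illustrative; a production CA's key should live in an HSM or a purpose-built tool like step-ca rather than in loose files:

```shell
# Minimal device-certificate enrollment sketch with an internal CA.
# Requires OpenSSL 1.1.1+; all filenames are examples.

# 1. One-time: create the internal device CA
openssl genrsa -out ca.key 2048
openssl req -x509 -new -key ca.key -days 365 \
    -subj "/CN=Acme Device CA" -out ca.crt

# 2. Per device: generate a key and a certificate signing request
openssl genrsa -out device-01.key 2048
openssl req -new -key device-01.key \
    -subj "/CN=device-01.acme.com/O=Acme Devices" -out device-01.csr

# 3. CA signs a short-lived device certificate (24 hours)
openssl x509 -req -in device-01.csr -CA ca.crt -CAkey ca.key \
    -CAcreateserial -days 1 -out device-01.crt

# 4. Sanity check: the proxy performs this same chain validation
openssl verify -CAfile ca.crt device-01.crt
```

The device receives device-01.key and device-01.crt; the proxy only ever needs ca.crt.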

Why This Matters

In many organizations, devices are a weak link. Remote work, BYOD, and cloud-native services increase the risk of unmanaged or misconfigured endpoints.

By enabling device trust enforcement at the proxy level, you:

  • Avoid re-architecting your identity system
  • Add a powerful security control with minimal code changes
  • Stop attackers who steal credentials but don’t have trusted hardware

The best part? You likely already have the infrastructure to make it happen.

NGINX: Enforcing Client Certificate Validation

NGINX makes it straightforward to enable client-certificate authentication (clientAuth) and validation.

server {
    listen 443 ssl;
    server_name secure.mycompany.com;

    ssl_certificate /etc/nginx/certs/server.crt;
    ssl_certificate_key /etc/nginx/certs/server.key;
    ssl_client_certificate /etc/nginx/certs/ca.crt; # Your CA Chain
    ssl_verify_client on;                           # <- the key line

    location / {
        proxy_pass http://internal-app;
        proxy_set_header X-Client-Cert $ssl_client_cert;
        proxy_set_header X-Client-DN  $ssl_client_s_dn;
    }
}

In this snippet:

  • ssl_client_certificate points to the CA that signed your device certificates
  • ssl_verify_client on enforces certificate presentation
  • The subject DN is passed upstream for audit or additional policy checks

If a device doesn’t present a valid certificate, NGINX terminates the connection.

Note: The client certificate can be passed through the proxy to backend services using the NGINX variable $ssl_client_escaped_cert, which contains the URL-encoded client certificate in PEM format ($ssl_client_cert holds the raw PEM and is deprecated for header use).

Optional: Enforce Device Policies

If you want to go beyond “certificate is valid” and enforce per‑device rules, leverage OpenSSL extensions or X.509 Subject Alternative Names (SAN). For example:

# Request a certificate whose SAN encodes the device's DNS identity
# (the -addext flag requires OpenSSL 1.1.1+)
openssl req -new -key device-01.key.pem \
    -subj "/CN=device-01.acme.com/O=Acme Devices/C=US" \
    -addext "subjectAltName=DNS:device-01.acme.com" \
    -out device-01.csr.pem

Then in NGINX you can inspect $ssl_client_s_dn (or the forwarded certificate) and use map directives to block or allow based on subject attributes.
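As a sketch (the DN regex and upstream name are assumptions about your naming scheme), a map block can gate access on the certificate subject:

```
# Map the client-certificate subject DN to an allow/deny flag
map $ssl_client_s_dn $device_allowed {
    default                           0;
    "~CN=device-[0-9]+\.acme\.com"    1;
}

server {
    listen 443 ssl;
    # ... ssl_* and ssl_verify_client directives as above ...

    location / {
        if ($device_allowed = 0) { return 403; }
        proxy_pass http://internal-app;
    }
}
```

Because map is evaluated per request, the same pattern extends naturally to per-role or per-fleet policies as your certificate naming scheme grows.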

Apache HTTPD: A Similar ClientAuth Approach

Apache’s mod_ssl module can perform the same function.

<VirtualHost *:443>
    ServerName secure.mycompany.com

    SSLEngine on
    SSLCertificateFile /etc/httpd/certs/server.crt
    SSLCertificateKeyFile /etc/httpd/certs/server.key
    SSLCACertificateFile /etc/httpd/certs/ca.crt
    SSLVerifyClient require

    <Location />
        ProxyPass http://internal-app/
        ProxyPassReverse http://internal-app/
    </Location>
</VirtualHost>

Apache enforces client cert verification with SSLVerifyClient require, ensuring only trusted devices make it through.

Monitoring & Logging

NGINX logs each TLS handshake, including whether client-certificate verification succeeded. Add a custom log format:

log_format devicelog '$remote_addr - $remote_user [$time_local] '
                     '"$request" $status $body_bytes_sent '
                     'client_cert="$ssl_client_verify" '
                     'cn="$ssl_client_s_dn"';
access_log /var/log/nginx/device_access.log devicelog;

Now you can audit which devices accessed the gateway, detect expired certs, or spot anomalies.
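A small helper can then summarize failing devices from that log. This assumes the devicelog format above, where $ssl_client_verify renders as SUCCESS, FAILED:<reason>, or NONE (no certificate presented):

```shell
# Count failed or missing client-certificate handshakes per source address,
# most frequent first. Expects the devicelog format defined above.
failed_handshakes() {
  grep -E 'client_cert="(FAILED|NONE)' "$1" \
  | awk '{ print $1 }' \
  | sort | uniq -c | sort -rn
}

# Usage: failed_handshakes /var/log/nginx/device_access.log
```

A sudden spike from one address usually means an expired certificate on a single device; a spread across many addresses can indicate a CA or provisioning problem.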

Testing the Gateway

  • Valid Device – On a client machine, install device-01.cert.pem and device-01.key.pem, or use curl:

curl -k --cert device-01.cert.pem \
     --key  device-01.key.pem \
     https://proxy.acme.com/

You should get the backend response.

  • Invalid Device – Remove or rename the cert/key and try again; the handshake is rejected and NGINX returns an error (HTTP 400 by default).
  • Expired Certificate – An expired certificate is rejected automatically; use openssl x509 -in device-01.cert.pem -noout -dates to check its validity window.

Device Trust Gateway Flow

Device Trust Gateway authentication workflow

Steps:

  1. Device connects to proxy and presents client certificate
  2. Proxy checks cert against trusted CA
  3. If valid, forwards request to application
  4. If invalid, terminates connection

Implementation Tips

  • Use short-lived device certificates (e.g., 24 hours)
  • Automate provisioning with MDM scripts and/or SCEP
  • Use headers like X-Client-Cert to enrich identity at the application layer
  • Monitor failed certificate handshakes as potential threats

Conclusion

  • Fast Implementation – Adding just two lines (ssl_verify_client on + ssl_client_certificate) turns any TLS‑enabled proxy into a device trust gateway.
  • Zero‑Trust Foundation – Every device must prove its identity before accessing sensitive resources.
  • Scalable – The same CAs can issue thousands of certificates; you can automate provisioning via scripts or PKI tools like step-ca.

Final Thoughts

You don’t need to overhaul your infrastructure to implement device trust. Adding a few lines of proxy configuration can provide a powerful gateway that ensures only secure, trusted devices can access your applications.

In a Zero Trust world, identity is not enough. Trust must be earned—and verified—by the devices themselves.

Sometimes You Just Have to Proxy Your Socks Off

Problem

Sometimes during assessments, sensitive systems are heavily segmented from other networks. It’s therefore important for penetration testers to know how to proxy their socks off in order to move across networks.

Solution

To gain access to other networks, whether the internet or a protected subnet, we can use PuTTY on Windows and the native ssh client on Linux to perform port forwarding and create SOCKS proxies that bypass access controls.

Proxy Your Socks Off - Web server Post Exploitation with SSH Tunnels and Socks Proxy

Proxy Caveats

SOCKS proxies only work for TCP traffic, and only with applications that support being sent through a proxy (natively or via a wrapper such as proxychains). Applications that use their own proxy settings, require forward secrecy, or check session integrity likely won’t function correctly.

Ports 1 through 1023 require administrative rights to allocate on both Windows and Linux systems.

Port 0 is used to request a randomly assigned port number on both Windows and Linux systems.

How to Proxy Your Socks Off

Proxy Traffic in Windows

In Windows, simply open PuTTY and enter the IP address you want to connect to as the Host Name/IP address.

Proxy your socks off - configure Putty SSH connections

Next we tell PuTTY to open a port on the localhost that forwards all traffic to our remote host. Go to Connection → SSH → Tunnels, add a source port, choose the Dynamic option, and click the Add button.

Proxy your socks off - configure Putty SSH for dynamic port forwarding options

At this point you can click the Open button and authenticate as if it were a normal SSH connection. Just be sure to leave the terminal open once authenticated, so traffic keeps flowing from the local port to the remote host.

To tell Windows to use the SOCKS proxy, open Internet Options from the Control Panel or the Start menu search. Then go to the Connections tab and open LAN Settings.

Proxy your socks off - configure Control Panel Internet Properties for Socks Proxy

Once LAN Settings opens, select the “Use a proxy server for your LAN” check box and click the Advanced button.

Proxy your socks off - configure Windows Lan Settings for Socks Proxy

In the Socks box, add localhost or 127.0.0.1 and the port you set as dynamic in PuTTY. Then click OK three times to save all the settings.

Proxy your socks off - configure Windows Advanced Proxy Settings for Socks Proxy

Proxy Traffic in Linux

If you need to proxy your Kali system, the process is fairly similar. Start by using the ssh client to dynamically forward traffic from a local port, with a command similar to the following, where 9050 is our dynamic port.

ssh -NfD 9050 root@159.246.29.206

Next we need to tell proxychains where to send traffic from our programs. This can be set globally by using a command like the following.

printf 'socks4\t127.0.0.1\t9050\n' >> /etc/proxychains.conf

To run an application through the socks proxy, simply prepend it with the proxychains command, like the following.

proxychains iceweasel

There is no built-in means to set up a system-wide SOCKS proxy. However, the BadVPN project includes a tool, tun2socks, that can tunnel all traffic over a local SOCKS proxy.

Proxy Your Socks Off with Metasploit

Sometimes, while doing an assessment, you may want to run tools such as nmap or even SQL Server Management Studio (ssms.exe) over an established shell. Metasploit has an auxiliary module (auxiliary/server/socks4a) that can be used to create a SOCKS4 proxy on an existing session.

However, before starting the SOCKS proxy we need to tell Metasploit how to route traffic to each of our shell’s networks. This can be done manually with the route command, or, if your session is on a Windows host, with the autoroute module (post/windows/manage/autoroute).

To add a route manually you can use the built in route command with options similar to the following.

route add 10.0.0.0 255.255.255.0 1

To add routes with autoroute, either use the post module or run autoroute from a meterpreter shell. For the autoroute module (post/windows/manage/autoroute) just set the session ID and run. For autoroute from meterpreter use a command similar to the following.

run autoroute -s 10.0.0.0

Once routes are established within metasploit to your target networks, you can run the socks proxy module (auxiliary/server/socks4a) and note the SRVPORT.

Using Proxychains to Proxy Traffic through Metasploit Meterpreter

Next we need to tell proxychains which port to send traffic to, in the global configuration file (/etc/proxychains.conf), just like in the Linux example above. There should be a line like “socks4 127.0.0.1 1080” at the bottom of the file; change the port 1080 to whatever your SRVPORT was in Metasploit.
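That edit can be scripted as a small helper (the port value below is an example; pass your actual SRVPORT, and note that the in-place sed flag assumes GNU sed):

```shell
# Rewrite the socks4 line in a proxychains config to point at a new port.
# Usage: set_proxychains_port <port> [conf-file]   (defaults to the global file)
set_proxychains_port() {
  local port="$1" conf="${2:-/etc/proxychains.conf}"
  sed -i -E "s/^socks4[[:space:]]+.*/socks4 127.0.0.1 ${port}/" "$conf"
}

# Example: set_proxychains_port 1081
```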

Once the configuration file is updated, proxychains can be used to issue commands through Metasploit shells, as in the following nmap example.

proxychains nmap -v -sT -Pn 10.0.0.0/24   # use a full-connect scan (-sT); raw SYN packets (-sS) bypass the proxy

If we want to make this SOCKS proxy available to a Windows host for programs like SQL Server Management Studio, we can perform a local port forward to the SOCKS port on the Linux system using PuTTY, following steps similar to those presented above.

In PuTTY, allocate a source port for the connection on the local Windows system and forward it to a destination of 127.0.0.1:1080, where 1080 is your Metasploit SRVPORT.

Proxy your socks off - configure Putty SSH to allocate local port to connect to remote Socks Proxy

We can then configure a system-wide proxy by adding our forwarded port as the SOCKS port, instead of using a local SOCKS proxy.

Proxy your socks off - configure Windows Advanced Proxy Settings for forwarded Socks Proxy

Once those settings are changed, we should be able to use the majority of our tools within Windows without issue.

Using SSH to Provide Remote System Internet Access via local Socks Proxy

An SSH tunnel can also be used to forward traffic from a port on a remote system back to your local system. This is done in Linux by replacing the -L option with -R, or in PuTTY by choosing the Remote option under Tunnels instead of Local. For example, if you wanted to share your local SOCKS proxy with a remote system to provide it internet access, PuTTY can be used with a remote forward like the following.

Proxy your socks off - configure Putty SSH to allow remote host internet access via a remote port forward to a local socks proxy

Using Compromised Linux Webserver to Access Internal Network and Database

It’s also worth noting that SSH port forwarding is performed at the network-socket level and does not require that an interactive session be established; only valid authentication is required. For instance, say you want to log into a restricted database from a webserver, but you only have access to the webserver account. The webserver user may not be allowed to log into the server interactively by default, but that doesn’t mean it can’t authenticate. In many cases SSH can be used as described in my post on SSH for post exploitation to get around limited user shells.

Using Linux Native Tools to Proxy Your Socks Off

Tools natively built into Windows and Linux can also be used to perform port forwarding. Just note that this methodology simply makes a port-to-port translation and does not manipulate the traffic in any way. Netcat (nc) is found in almost every Linux distribution and can easily perform port forwarding with commands similar to the following.

First we make a named pipe so that responses from the server aren’t dumped to standard out.

mkfifo backpipe

Then we can use a command similar to the following to send traffic from port 8080 on the localhost to a remote host on a different port, utilizing the named pipe. This can help get around a firewall, or relay traffic to another system to be caught by another port translation or process.

nc -l 8080 0<backpipe | nc example.com 80 1>backpipe

Similarly, the netsh command (the Windows command-line network configuration tool) can be used to create a local port forward. In this case we follow the same example and create a port translation from localhost port 8080 to example.com on port 80.

netsh interface portproxy add v4tov4 listenport=8080 listenaddress=127.0.0.1 connectport=80 connectaddress=example.com

Windows 7 and above will likely require administrative privileges to create a portproxy rule. But you can likely still use a Windows build of netcat to redirect traffic all the same.