Harden Access Gateways with Geofencing – A Practical Guide for Enhanced Perimeter Security

In today’s distributed world the perimeter of an organization is no longer a single firewall at the edge of a data center. Users, services and devices connect from any location, over public clouds, through orchestrated container networks and via remote VPNs. This shift has driven security teams to adopt Zero Trust, a model that validates not only who you are but also where your device is located and what network it uses before granting access to critical resources.

One of the most effective ways to add an additional layer of verification is geofencing – the practice of allowing or denying traffic based on its geographic or network attributes such as country, city, or Autonomous System Number (ASN). When combined with strong device authentication (for example Device Trust certificates) geofencing can dramatically reduce the attack surface of your Device Trust access gateways.

This post explains how to harden access gateways with geofencing using Nginx and the ngx_http_geoip2_module. We will walk through obtaining free GeoIP data from MaxMind, configuring Nginx as a reverse proxy that blocks traffic by ASN, integrating geofence policies into modern identity providers like Authentik, and visualizing an example secure access flow. The examples are designed for Linux environments but can be adapted to any container or cloud platform.

Why Device Trust Matters in Modern Cloud Environments

  • Devices now connect from home offices, coffee shops, mobile networks and public clouds.
  • Attackers often use compromised devices or rented cloud instances that appear legitimate.
  • Traditional username/password/MFA checks do not verify the legitimacy of the device itself.
  • Adding location checks and monitoring makes it much harder for an adversary to reuse stolen credentials from an unexpected region.

When you combine device certificates, modern identity federation, and geofencing, you create a zero trust style gateway that only accepts traffic that meets all three criteria:

  1. Valid client certificate issued by your private Device Trust CA.
  2. Successful authentication with your Identity Provider (IdP).
  3. Source IP (or X-Forwarded-For) belongs to an expected country, city or ASN.

If any of these checks fail, the request is dropped before it reaches downstream services.

The Role of Geofencing in Hardening Access Gateways

Geofencing works by mapping an incoming IP address to a set of attributes – usually:

  • Country code (ISO‑3166 two‑letter format).
  • City name, coordinates and accuracy radius.
  • Autonomous System Number (ASN) which identifies the ISP or network owner.

These mappings are provided by public databases such as MaxMind’s GeoLite2. Because the data is freely available, you can implement geofencing without paying for a commercial service. The key steps are:

  1. Download and regularly update the GeoIP database.
  2. Load the database into your reverse proxy (Nginx in this example).
  3. Define rules that allow or deny traffic based on the mapped attributes.
  4. (Optional) Combine those rules with device certificate validation and IdP user attributes.

Getting Started with GeoIP Data Sources

MaxMind offers three primary free databases:

  • GeoLite2‑Country – maps IP to country code.
  • GeoLite2‑ASN – maps IP to ASN number and organization name.
  • GeoLite2-City – maps IP to city name as well as latitude, longitude, and accuracy radius.

Note: Other free and paid providers also publish MaxMind-format (.mmdb) geolocation databases, which should integrate into the same tooling without issue. Some great options are ipinfo lite, iplocate free, and ip2location lite.

You can obtain them by creating a free MaxMind account, accepting the license, and downloading the .mmdb files. To keep the data fresh you should schedule regular updates (MaxMind releases new versions weekly). The open source tool geoipupdate automates this process:

# Install geoipupdate on Debian/Ubuntu
apt-get update
apt-get install -y geoipupdate

# Create /etc/GeoIP.conf with your account details
cat <<EOF | sudo tee /etc/GeoIP.conf
AccountID YOUR_ACCOUNT_ID
LicenseKey YOUR_LICENSE_KEY
EditionIDs GeoLite2-Country GeoLite2-City GeoLite2-ASN
EOF

# Run the update immediately
sudo geoipupdate
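To keep the databases current without manual intervention, you can drop a small cron entry on the host; a minimal sketch (the schedule is arbitrary, and the auto_reload directive shown later lets Nginx pick up the refreshed files without a reload):

# Refresh the GeoLite2 databases every night at 03:30
cat <<'EOF' | sudo tee /etc/cron.d/geoipupdate
30 3 * * * root /usr/bin/geoipupdate
EOF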

The resulting files are typically stored in /var/lib/GeoIP/ as GeoLite2-Country.mmdb, GeoLite2-City.mmdb and GeoLite2-ASN.mmdb. Adjust the paths in your Nginx configuration accordingly.
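Before wiring the files into Nginx, you can sanity-check them with mmdblookup, which ships with libmaxminddb (on Debian/Ubuntu the binary is in the mmdb-bin package; the package name is an assumption worth verifying for your distribution):

# Resolve the country code and ASN for a known public address
mmdblookup --file /var/lib/GeoIP/GeoLite2-Country.mmdb --ip 8.8.8.8 country iso_code
mmdblookup --file /var/lib/GeoIP/GeoLite2-ASN.mmdb --ip 8.8.8.8 autonomous_system_number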

Installing and Configuring ngx_http_geoip2_module

The ngx_http_geoip2_module is a third‑party module that provides fast lookups of GeoIP data inside Nginx. It works with both the open source and commercial versions of Nginx, but for most Linux distributions you will need to compile it as a dynamic module.

#Install Build Prerequisites
apt-get install -y build-essential libpcre3-dev zlib1g-dev libssl-dev libmaxminddb-dev nginx wget git vim

#Download Nginx Source and the GeoIP2 Module
NGINX_VERSION=$(nginx -v 2>&1 | cut -d'/' -f2 | cut -d' ' -f1)
wget http://nginx.org/download/nginx-${NGINX_VERSION}.tar.gz
tar xzf nginx-${NGINX_VERSION}.tar.gz
git clone https://github.com/leev/ngx_http_geoip2_module.git

#Compile the Module as a Dynamic Loadable Object
cd nginx-${NGINX_VERSION}
./configure --with-compat --add-dynamic-module=../ngx_http_geoip2_module
make modules
cp objs/ngx_http_geoip2_module.so /usr/share/nginx/modules/
echo "load_module modules/ngx_http_geoip2_module.so;" > /etc/nginx/modules-available/mod-http-geoip2.conf
ln -s /etc/nginx/modules-available/mod-http-geoip2.conf /etc/nginx/modules-enabled/60-mod-http-geoip2.conf


#Enable and configure the Module by adding the following to the http section
vim /etc/nginx/nginx.conf

geoip2 /var/lib/GeoIP/GeoLite2-Country.mmdb {
    auto_reload 60m;
    $geoip2_metadata_country_build metadata build_epoch;
    $geoip2_country_code country iso_code;
    $geoip2_country_name country names en;
}

geoip2 /var/lib/GeoIP/GeoLite2-City.mmdb {
    auto_reload 60m;
    $geoip2_metadata_city_build metadata build_epoch;
    $geoip2_city_name city names en;
}

fastcgi_param COUNTRY_CODE $geoip2_country_code;
fastcgi_param COUNTRY_NAME $geoip2_country_name;
fastcgi_param CITY_NAME    $geoip2_city_name;

Now you can use the GeoIP2 country and city variables in map blocks, access rules and logging inside your server blocks.
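A quick way to confirm the module and databases are wired up correctly is to expose the mapped variables on a temporary endpoint; a minimal sketch (the port and /geoip path are arbitrary, and requests from private addresses resolve to empty values, so test from a public IP and remove the block afterwards):

server {
    listen 8080;

    # Echo the values GeoIP2 resolved for the calling IP
    location /geoip {
        default_type text/plain;
        return 200 "country=$geoip2_country_code ($geoip2_country_name) city=$geoip2_city_name\n";
    }
}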

Blocking Traffic by Country with Nginx GeoIP2

A simple config that only allows traffic from the United States and Canada might look like this:

http {

    map $geoip2_country_code $allowed_country {
        default no;
        US      yes;
        CA      yes;
    }

    server {
        listen 443 ssl;
        server_name gateway.example.com;

        # TLS configuration omitted for brevity

        if ($allowed_country = no) {
            return 403;
        }

        location / {
            proxy_pass http://backend;
        }
    }
}

This configuration uses the $geoip2_country_code variable populated by the geoip2 block in the main Nginx config. The map block creates a boolean variable $allowed_country that is later used in an if statement to reject disallowed traffic with HTTP 403.

ASN Based Geofence on an Nginx Reverse Proxy

Blocking by ASN provides finer granularity than country alone, especially when you want to restrict access to corporate ISP ranges or known cloud providers. Below is a more advanced configuration that:

  • Allows only devices originating from your corporate ASN (e.g., AS12345) or a trusted cloud provider (AS67890).
  • Requires a valid client certificate signed by your internal CA.
  • Sends the authenticated request to an internal API gateway.
http {
    # Load the ASN database (the country and city databases are loaded as shown earlier)
    geoip2 /var/lib/GeoIP/GeoLite2-ASN.mmdb {
        auto_reload 5m;
        $geoip2_asn_number asn asn;
        $geoip2_asn_org asn organization;
    }

    # Define the list of permitted ASNs
    map $geoip2_asn_number $asn_allowed {
        default          no;
        12345            yes;   # Corporate ISP
        67890            yes;   # Trusted Cloud Provider
    }

    server {
        listen 443 ssl;
        server_name api-gateway.example.com;

        # TLS configuration (certificate, key) omitted for brevity

        # Enforce mutual TLS – reject if no cert or invalid cert
        ssl_verify_client on;
        ssl_client_certificate /etc/nginx/certs/ca.crt; # Your CA Chain

        # If client certificate verification fails, Nginx returns 400 automatically.
        # Add an explicit check for ASN after TLS handshake:
        if ($asn_allowed = no) {
            return 403;
        }

        location / {
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Client-Cert $ssl_client_cert;
            proxy_set_header X-Client-DN  $ssl_client_s_dn;
            proxy_pass http://internal-app;
        }
    }
}

Explanation of key parts

  • $geoip2_asn_number is populated by the GeoIP2 lookup; the map block translates the ASN into a simple yes/no flag.
  • The if ($asn_allowed = no) clause blocks any request that does not originate from an allowed ASN, even if the client certificate is valid.

You can extend this pattern to include city-level checks ($geoip2_city_name) or combine multiple criteria by chaining map blocks, as sketched below.
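Because Nginx's if directive has no logical AND/OR, the usual pattern is to chain map blocks and key the final map on the concatenated results; a small sketch assuming the $allowed_country and $asn_allowed variables defined in the examples above:

# In the http block: combine the country and ASN verdicts into a single flag
map "$allowed_country:$asn_allowed" $request_allowed {
    default   no;
    "yes:yes" yes;
}

# In the server block: one check covers both criteria
if ($request_allowed = no) {
    return 403;
}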

Integrating GeoIP Policy with Authentik IDP

Authentik is a modern open source identity provider that supports OIDC, SAML and LDAP. It can enforce additional policies during the authentication flow, such as requiring a specific claim that matches your geofence rules.

Enable the GeoIP Policy within Authentik

Since Authentik version 2022.12, GeoIP support is built in and only requires the .mmdb files to be provided at startup for the policies to be available.

  • Upload the same GeoLite2-City.mmdb & GeoLite2-ASN.mmdb files used for Nginx.
  • Provide the GeoIP (City) and ASN .mmdb file paths as environment variables during startup (see the sketch after this list).
  • Set up a schedule to update the files, or configure a geoipupdate container with your license key.
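As a rough illustration of those environment variables, a container start might look like the following; the setting names follow Authentik's AUTHENTIK_EVENTS__CONTEXT_PROCESSORS__* convention and the mount path is illustrative, so verify both against the documentation for your version (a real deployment also needs the database, Redis and worker pieces omitted here):

# Mount the databases read-only and point Authentik at them
docker run -d --name authentik-server \
  -v /var/lib/GeoIP:/geoip:ro \
  -e AUTHENTIK_EVENTS__CONTEXT_PROCESSORS__GEOIP=/geoip/GeoLite2-City.mmdb \
  -e AUTHENTIK_EVENTS__CONTEXT_PROCESSORS__ASN=/geoip/GeoLite2-ASN.mmdb \
  ghcr.io/goauthentik/server:latest server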

Now every authentication request will have GeoIP and ASN context attached, which can be referenced in policies.

Create a GeoIP Policy in Authentik

In the Authentik admin UI:

  1. Navigate to Customizations → Policies.
  2. Add a new GeoIP Policy named “GeoIP Default”.
  3. Configure the default Distance and Static settings based on your needs.

Distance Settings:

  • Maximum distance – The maximum distance allowed in kilometers between logins
  • Distance tolerance – The allowable difference to account for data accuracy
  • Historical Login Count – The number of past logins to account for when evaluating
  • Check impossible travel – Whether to flag logins/sessions whose locations would require impossible travel between them

Static Settings

  • Allowed ASNs – Comma separated list of the ASNs allowed for the given policy
  • Allowed Countries – List of countries the policy allows connections from

(Optional) Create a Custom Policy in Authentik

In the Authentik admin UI:

  1. Navigate to Customizations → Policies.
  2. Add a new Expression Policy named “GeoIP ASN Allowlist”.
  3. Use an expression along these lines; Authentik expression policies are written in Python (replace the ASNs with your allowed values):
allowed_asns = [12345, 67890]
return context["asn"]["asn"] in allowed_asns and context["geoip"]["continent"] == "NA"

The context["asn"] attribute is automatically populated by Authentik from the GeoIP ASN database, and context["geoip"] is populated from the GeoIP City database. Both are used in conjunction here to require connections from an approved ASN and from North America.

Attach the Policy to Your OIDC Application

  1. Open Applications → Your API Gateway.
  2. Under Policy Binding, add the “GeoIP Default” policy.
  3. (Optional) Under Policy Binding, add the “GeoIP ASN Allowlist” policy.
  4. Save changes.

When a user authenticates via Authentik, the flow will evaluate the policy. If the source IP belongs to an unauthorized ASN, authentication fails and no token is issued. This adds a second line of defense: even if an attacker obtains valid client certificates, they cannot get a JWT unless they connect from an allowed network.

Enforce Token Claims at Nginx

You can configure Nginx to validate the JWT issued by Authentik and also verify that it contains the expected ip_asn claim. The auth_jwt module (native to NGINX Plus; comparable third‑party JWT modules exist for open source Nginx) can be used:

http {
    # Load GeoIP2 as before

    # Map the JWT ASN claim to an allow/deny flag
    # (map blocks must live at the http level, not inside a server block)
    map $jwt_asn $jwt_asn_allowed {
        default no;
        12345   yes;
        67890   yes;
    }

    server {
        listen 443 ssl;
        server_name api-gateway.example.com;

        # TLS settings omitted

        auth_jwt "Protected API";
        auth_jwt_key_file /etc/nginx/jwt-public.key;   # Authentik public key
        auth_jwt_claim_set $jwt_asn ip_asn;

        # Reject if the JWT claim does not match the allowed ASN list
        if ($jwt_asn_allowed = no) {
            return 403;
        }

        location / {
            proxy_pass http://internal-api;
        }
    }
}

The flow now looks like this:

  1. TLS handshake verifies client certificate.
  2. Nginx extracts the source IP and performs GeoIP ASN lookup.
  3. The request is redirected to Authentik for OIDC authentication.
  4. Authentik checks the “GeoIP ASN Allowlist” policy; if it passes, a JWT containing ip_asn is returned.
  5. Nginx validates the JWT and ensures the claim matches the allowed list before proxying to the backend.

This combination of device trust certificates, geofence enforcement and IdP policies creates a robust zero‑trust perimeter around your sensitive services.

Best Practices for Maintaining Geofencing Rules

  • Regularly update GeoIP databases – use geoipupdate with cron or systemd timers.
  • Keep an audit log of denied requests – configure Nginx error logs to capture $remote_addr, $geoip2_country_code, $geoip2_asn_number and the reason for denial.
  • Use an allowlist rather than a blocklist – allow only known good ASNs/countries rather than enumerating bad ones; attackers can easily route through VPNs or hosting providers that a blocklist misses.
  • Combine with rate limiting – even legitimate IP ranges may be abused; use limit_req_zone and limit_conn_zone (see the sketch after this list).
  • Test changes in a staging environment – a mis‑configured ASN map could lock all users out of an application, including administrators.
  • Monitor for anomalies – sudden spikes of traffic from an unexpected ASN can indicate compromised credentials.
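As a starting point for the rate-limiting item above, a sketch using a shared zone keyed on the client address (the zone name, size and rates are illustrative and should be tuned to your traffic):

# In the http block: 10 MB of state, 10 requests/second per client IP
limit_req_zone $binary_remote_addr zone=gateway_rl:10m rate=10r/s;

# In the protected server or location block
location / {
    limit_req zone=gateway_rl burst=20 nodelay;
    proxy_pass http://backend;
}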

Access Flow Diagram

Below is a flowchart that visualizes the hardened access methodology. Only devices presenting a valid Device Trust client certificate and originating from an allowed location/ASN are permitted to obtain an authentication token and reach the protected service.

[Flowchart: harden access gateways with geofencing]

The diagram highlights three independent checks:

  • TLS client certificate – ensures the device holds a trusted private key.
  • GeoIP ASN validation at Nginx – blocks traffic from unknown networks before any authentication attempt.
  • Authentik policy enforcement and JWT claim verification – guarantees that the token itself reflects an allowed source network and travel distance is within tolerance.

Only when all three conditions succeed does the request reach the backend service.

Monitoring and Auditing Geofence Enforcement

A hardened gateway is only as good as its visibility. Implementing robust logging and alerting helps you detect misconfigurations or active attacks.

Nginx Log Format Extension

Add a custom log format that captures GeoIP variables:

http {
    log_format geo_combined '$remote_addr - $remote_user [$time_local] '
                            '"$request" $status $body_bytes_sent '
                            '"$http_referer" "$http_user_agent" '
                            'asn=$geoip2_asn_number country=$geoip2_country_code';

    access_log /var/log/nginx/access_geo.log geo_combined;
}

Centralized Log Collection

  • Ship logs to Elasticsearch, Splunk or Loki.
  • Create dashboards that filter on status=403 and group by $geoip2_asn_number.
  • Set alerts for spikes in denied traffic from a single ASN.

Scaling Geofence Enforcement Across Multiple Gateways

In large environments you may have dozens of ingress controllers. To keep policies consistent:

  • Store the allowed ASN list in a central source (e.g., Consul KV, etcd, or a ConfigMap).
  • Use a templating engine like envsubst or Helm to generate Nginx configs on each node (see the sketch after this list).
  • Automate database updates with a CI/CD pipeline that pulls the latest MaxMind files and pushes them to all pods.
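For example, a small render step driven by the central allowlist might look like this; the file names and the ALLOWED_ASNS variable are illustrative, and the template is assumed to contain the map block with a ${ALLOWED_ASN_MAP} placeholder:

# allowed_asns.env comes from your central store (Consul KV, etcd, ConfigMap, ...)
# e.g. ALLOWED_ASNS="12345 67890"
. ./allowed_asns.env

# Build the map entries, then substitute only that placeholder so Nginx's own
# $variables in the template are left untouched
ALLOWED_ASN_MAP=$(for asn in $ALLOWED_ASNS; do printf '    %s yes;\n' "$asn"; done)
export ALLOWED_ASN_MAP
envsubst '$ALLOWED_ASN_MAP' < asn-map.conf.template > /etc/nginx/conf.d/asn-map.conf
nginx -t && nginx -s reload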

By treating the geofence policy as code you can version it, review changes via pull requests, and roll back quickly if an error blocks legitimate traffic.

Conclusion

Geofencing is a powerful yet straightforward technique for hardening access gateways. By leveraging free GeoIP data from MaxMind, the ngx_http_geoip2_module in Nginx, and modern identity providers such as Authentik, you can enforce policies that require:

  • A trusted device certificate.
  • An allowed source network identified by ASN.
  • Successful authentication with a policy‑aware IdP.

The layered approach dramatically reduces the attack surface for privileged services, makes credential theft less useful, and gives security teams clear visibility into who is trying to connect from where. Combined with automated updates, logging, and containerized deployment, geofence enforcement can scale across hybrid cloud environments without adding significant operational overhead.

Start by downloading the GeoLite2 databases, compile the GeoIP2 module for your Nginx instances, define an allowlist of ASNs that correspond to your corporate trust network and approved cloud providers, and integrate the policy into your IdP. From there, monitor the logs, tune the allowlist as your network evolves, and you’ll have a robust zero‑trust perimeter protecting your most sensitive workloads.

Remember: Device trust plus geofencing equals stronger security – and with the tools described in this post you can implement it today on Linux, cloud, and container platforms.

Harden Device Trust with Token Permissions: Preventing Subversion with GitHub Personal Access Tokens

Device Trust is rapidly becoming a cornerstone of modern security strategies, particularly within software development lifecycles. By ensuring that code changes are initiated from trusted devices, organizations can significantly reduce the risk of supply chain attacks and unauthorized modifications. However, a critical vulnerability often overlooked lies in the potential for users to bypass these controls using Personal Access Tokens (PATs). This blog post will delve into how attackers can leverage PATs to subvert Device Trust mechanisms, and more importantly, how you can harden Device Trust with token permissions through robust management practices.

Why PATs Are a Threat to Device Trust

How traditional Device Trust (Web UI) compares with PAT‑based access:

  • Authentication point – Web UI: browser session tied to SSO and device compliance checks. PAT: direct API call with a static secret.
  • Visibility – Web UI: UI logs, conditional access policies. PAT: API audit logs only; may be ignored.
  • Revocation latency – Web UI: immediate when the device is non‑compliant. PAT: requires token rotation or explicit revocation.
  • Scope granularity – Web UI: often coarse (read/write) per repository. PAT: fine‑grained scopes (e.g., pull_request:write, repo:status).

A PAT can be generated with any combination of scopes that the user’s role permits. When a developer creates a token for automation, they may inadvertently grant more privileges than needed, especially if the organization does not enforce fine‑grained tokens and approvals. The result is a secret that can be used from any machine, managed or unmanaged, effectively sidestepping Device Trust enforcement.

Real‑World Consequence

Imagine an attacker who gains access to a developer’s laptop after it is stolen. They locate the file ~/.git-credentials (or a credential helper store) and extract a PAT that includes pull_request:write. Using this token they can:

  1. Pull the latest code from any repository.
  2. Approve a malicious pull request without ever opening the controlled web UI.
  3. Merge the PR, causing malicious code to flow into production pipelines.

Because the action occurs via the API, the organization’s monitoring solution sees no violation: no unmanaged device ever attempted to open the GitHub website. The only evidence is an audit‑log entry showing that a token performed the operation, which may be missed if logging and alerting are not tuned for PAT usage.

Attack Flow: Bypassing Device Trust with PATs

Let’s illustrate how an attacker might exploit this vulnerability using a GitHub example. This flow can be adapted to other platforms like GitLab, Azure DevOps, etc., but the core principles remain consistent.

Explanation:

  1. Attacker Obtains Compromised PAT: This could happen through phishing, malware, credential stuffing, or insecure storage practices by the user.
  2. GitHub API Access: The attacker uses the stolen PAT to authenticate with the GitHub API.
  3. Forge Pull Request: The attacker creates a pull request containing malicious code changes.
  4. Approve Pull Request (Bypass Device Trust): Using the API, the attacker approves the pull request without going through the standard Device Trust verification process. This is the critical bypass step.
  5. Merge Changes to Main Branch: The approved pull request is merged into the main branch, potentially introducing malicious code into production.

The “Device Trust Workflow” subgraph shows the intended secure path. Notice how the attacker completely circumvents this path by leveraging the PAT directly against the API.

Leveraging gh cli and the GitHub API with PATs

Attackers or savvy users don’t need sophisticated tools to exploit PATs. The readily available gh cli (GitHub Command Line Interface) or simple scripting using curl can be used effectively.

Approving a Pull Request with gh cli:

Assuming you have the PAT stored in an environment variable GH_TOKEN:

# Export the stolen token into an environment variable (or store it in ~/.config/gh/config.yml)
export GH_TOKEN=ghp_XXXXXXXXXXXXXXXXXXXXXXXXXXXX

# Authenticate gh with the token (no interactive login required)
gh auth status  # verifies that the token is valid

# List open pull requests for a target repository
gh pr list --repo AcmeCorp/webapp --state open

# Approve and merge a specific PR (ID = 42)
gh pr review 42 --repo AcmeCorp/webapp --approve --body "Looks good to me!"
gh pr merge 42 --repo AcmeCorp/webapp --merge 

All of these actions are performed via the GitHub API behind the scenes. These simple commands bypass any Device Trust checks that would normally be required when approving a pull request through the web interface.

Approving a Pull Request with curl:

# Variables
TOKEN="ghp_XXXXXXXXXXXXXXXXXXXXXXXXXXXX"
OWNER="AcmeCorp"
REPO="webapp"
PR_NUMBER=42

# Submit an approval review
curl -X POST \
  -H "Authorization: token $TOKEN" \
  -H "Accept: application/vnd.github+json" \
  https://api.github.com/repos/$OWNER/$REPO/pulls/$PR_NUMBER/reviews \
  -d '{"event":"APPROVE"}'

# Merge the pull request
curl -X PUT \
  -H "Authorization: token $TOKEN" \
  -H "Accept: application/vnd.github+json" \
  https://api.github.com/repos/$OWNER/$REPO/pulls/$PR_NUMBER/merge \
  -d '{"merge_method":"squash"}'

If the token includes pull_request:write permission scope, both calls succeed, and the attacker has merged malicious code without ever interacting with the controlled web flow.

Hardening Device Trust: Token Management Strategies

The key to mitigating this risk lies in proactive token management and granular permission control. Here’s a breakdown of strategies you can implement:

Disable PATs Where Possible:

This is the most secure approach, but often impractical for organizations heavily reliant on automation or legacy integrations. However, actively identify and eliminate unnecessary PAT usage. Encourage users to migrate to more secure authentication methods like GitHub Apps where feasible.

GitHub now offers Fine-Grained Personal Access Tokens (FG-PATs) which allow you to scope permissions down to specific repositories and even individual resources within those repositories. This is a significant improvement over classic PATs, but still requires careful management.

Implement Organization-Level Policies:

GitHub provides features for managing PAT usage at the organization level:

  • Require FG-PATs: Enforce the use of Fine-Grained Personal Access Tokens instead of classic PATs.
  • Restrict Token Creation: Limit who can create PATs within the organization. Consider restricting creation to specific teams or administrators.
  • Require Administrator Approval: Require an administrator to approve the token and its scopes before it becomes usable.
  • Token Expiration Policies: Set a maximum expiration time for all PATs. Shorter lifespans reduce the window of opportunity for attackers if a token is compromised.
  • IP Allowlisting (GitHub Enterprise): Restrict PAT usage to specific IP address ranges, limiting access from known and trusted networks.

GitHub’s fine‑grained personal access tokens (FG‑PATs) let administrators define which repositories a token can access and what actions it may perform. To require FG‑PATs, enable the “Restrict access via personal access tokens (classic)” option in Organization Settings → Personal Access Tokens → Settings → Tokens (classic).

Focus on Repository-Level Scopes and Require Approval:

In addition to restricting the use of classic Personal Access Tokens, try to utilize GitHub Apps and/or OAuth apps for access, as they offer a far more robust set of configuration options and controls for autonomous workloads. If you still need to leverage fine‑grained personal access tokens, limit them to a targeted set of repositories, require administrator approval, and set a maximum expiration date to limit exposure.

This provides more granular control over permissions and allows for active review/approval:

  • Restrict pull_request:write Permission: The pull_request:write permission is particularly dangerous as it allows users to approve pull requests without Device Trust verification. Consider removing this permission from PATs unless absolutely necessary.
  • Least Privilege Principle: Grant only the minimum permissions required for each PAT. Avoid broad “repo” scope access whenever possible. FG-PATs make this much easier.
  • Code Owners Review: Enforce code owner reviews on all pull requests, even those approved via API. This adds an extra layer of security and helps detect malicious changes.

Token Auditing and Monitoring:

  • Regularly Review PAT Usage: Identify unused or overly permissive tokens (see the sketch after this list).
  • Monitor API Activity: Look for suspicious activity, such as unexpected pull request approvals or changes made outside of normal working hours. GitHub provides audit logs that can be integrated with SIEM systems.
  • Automated Scanning: Use tools to scan code repositories and identify hardcoded PATs.
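To support the regular review called out above, GitHub exposes organization-level endpoints for fine-grained PATs; a sketch using gh (the endpoint paths assume GitHub's fine-grained PAT organization APIs, which require an org admin token and may depend on your plan):

# Fine-grained PATs that currently have access to organization resources
gh api /orgs/AcmeCorp/personal-access-tokens --paginate

# Pending requests from members asking to use a fine-grained PAT against the org
gh api /orgs/AcmeCorp/personal-access-token-requests --paginate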

User Education:

Educate developers about the risks associated with PATs and best practices for secure token management, including:

  • Never commit PATs to source control.
  • Use strong passwords and multi-factor authentication.
  • Rotate tokens regularly.
  • Report any suspected compromise immediately.

Conclusion

Device Trust is a vital security component, but it’s not a silver bullet. Attackers will always seek the path of least resistance, and PATs represent a significant vulnerability if left unmanaged. By implementing robust token management strategies – including disabling unnecessary PATs, enforcing granular permissions, and actively monitoring API activity – you can harden Device Trust with token permissions and significantly reduce your risk of supply chain attacks. Remember that security is a layered approach; combining Device Trust with strong token controls provides the most comprehensive protection for your software development lifecycle.

Detecting Device Trust Certificate Exports: How to Build Custom SIEM/XDR Rules

Introduction

In modern Zero‑Trust environments a Device Trust certificate is the linchpin that lets endpoints authenticate themselves and adds an additional layer of actionable controls. These certificates are typically combined into PKCS #12 (*.p12) bundles and imported into the Windows Certificate Store or macOS Keychain with the non‑exportable private key attribute set. The intention is clear: users should never be able to pull the private key out of the local store.

In practice, however, attackers (or careless insiders) can still attempt to export the private key using native utilities such as certutil.exe on Windows or the security CLI on macOS. Because the export operation leaves an audit trail in system logs, detecting Device Trust certificate exports becomes a realistic and valuable detection use‑case for any SIEM or XDR platform.

This blog walks you through:

  • Why non‑exportable flags are not always enough
  • The exact commands used to attempt to export on Windows and macOS
  • The log events generated by each of those commands
  • How to translate that knowledge into reusable Sigma detection rules

Device Trust Certificates – The Security Goal and the Reality

A Device Trust certificate forms an endpoint’s identity for an additional layer of authentication and authorization in your environment. When a service receives a TLS client‑certificate handshake it can verify the device’s identity, management state, and security posture.

Administrators usually elect to set the NoExport option when importing with certutil on Windows and/or rely on the -x (non-extractable) option when using security import on macOS. Setting these flags tells the OS to keep the private key inside a protected store and never write it to disk in clear text.
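For reference, the import commands with those flags look roughly like this (the bundle path and keychain location are illustrative):

# Windows: import the bundle and block later export of the private key
certutil -user -importPFX C:\Temp\device-trust.p12 NoExport

# macOS: import into the login keychain and mark the private key non-extractable
security import device-trust.p12 -k ~/Library/Keychains/login.keychain-db -x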

Why it still matters:

  • The protection is enforced at the API level, not at the file‑system level.
  • A user with administrative rights can invoke privileged utilities that request the private key from the CryptoAPI (Windows) or Security.framework (macOS).
  • Attackers who have already compromised an account often have the same ability to run those utilities or advanced forensics tools.

Note: If certificates are not marked non-exportable when loaded into the user’s certificate store, the user will be able to extract the private key and the full key bundle without any additional permissions.

Therefore, detection must focus on the act of attempting an export, not just the presence of a non‑exportable flag.

Threat Scenario – Exporting a Private Key for Lateral Movement

Consider an attacker who has gained local admin rights on a laptop that is enrolled in Device Trust. The attacker’s objectives may include:

  1. Steal the private key to impersonate the device when communicating with internal services.
  2. Reuse the certificate on another machine to bypass device compliance checks.
  3. Exfiltrate the key for later use in authentication attacks or user impersonation.

Even if the attacker cannot directly read the private key from the store, they can attempt to call certutil -exportpfx (Windows) or security export (macOS) to generate a new PKCS #12 bundle. The resulting file can be copied to a USB drive, uploaded to cloud storage, or transmitted over an encrypted tunnel, with each step leaving observable footprints.

Export Techniques on Windows

Using certutil.exe

The most common native tool is certutil.exe. A user can attempt an export from their local store like so:

certutil -exportpfx -p "" MY <Thumbprint> C:\Temp\device-trust.p12

  • -p "" supplies an empty password (or a user‑chosen one).
  • My is the personal store where Device Trust certificates live.
  • <Thumbprint> identifies the exact certificate to export.

If the key was not marked non‑exportable or it was specifically marked as exportable, certutil will still succeed when run under a privileged context because it uses the Cryptographic Service Provider (CSP) API that can request the private key material for an authorized user.

PowerShell Alternative

PowerShell’s Export-PfxCertificate cmdlet also works:

$cert = Get-ChildItem Cert:\CurrentUser\My\<Thumbprint>
Export-PfxCertificate -Cert $cert -FilePath C:\Temp\device-trust.p12 -Password (ConvertTo-SecureString -String "" -AsPlainText -Force)

Both commands generate a *.p12 file that can be moved off the host.

What Gets Logged?

  • Windows Security Auditing – Event ID 4688 (A new process has been created) logs the full command line when audit policy Process Creation is enabled.
  • Sysmon (if installed) – Event ID 1 captures the same data with additional hashes for the executable.
  • Application Logs – certutil may emit an informational entry in the System or Application log, but process creation events are the most reliable source.

Note: The Windows CertificateServicesClient-Lifecycle-System log also natively records Event ID 1007 when any certificate is exported from the local certificate store (including public-only and CA chain certificates). This means it can commonly be triggered by various applications and can’t be scoped to specific certificates, but there is already a community-approved Sigma rule for the event.
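For Event ID 4688 to capture the export attempt, both process-creation auditing and command-line logging need to be enabled; a sketch from an elevated PowerShell prompt (Group Policy equivalents live under Advanced Audit Policy Configuration):

# Enable the Process Creation audit sub-category
auditpol /set /subcategory:"Process Creation" /success:enable /failure:enable

# Include the full command line in 4688 events
reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\Audit" /v ProcessCreationIncludeCmdLine_Enabled /t REG_DWORD /d 1 /f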

Export Techniques on macOS

Using the security CLI

On macOS the security command can read from the login keychain and write a PKCS #12 bundle:

security export -k ~/Library/Keychains/login.keychain-db -t priv -p "" -o /tmp/device-trust.p12

  • -k points to the keychain file.
  • -t priv requests private keys only.
  • -p "" supplies an empty password for the output file (or a user‑chosen one).

If the certificate’s private key is stored in the Secure Enclave, the command may prompt for Touch ID or the user’s password. However, an attacker with a compromised admin session can bypass that prompt by using sudo.

Note: In newer versions of macOS the native tools do still respect the non-extractable attribute set when the certificate is imported, even with elevated privileges. But since the keychain is just an encrypted database, you can still utilize third-party tools like chainbreaker to export the keys with a password and hexdump.

What Gets Logged?

  • Unified Logging (os_log) – The subsystem com.apple.security.keychain logs messages such as “Exported private key” (see the query sketch after this list).
  • Auditd – If audit is enabled (audit -s), an execve record for /usr/bin/security with the export arguments appears.
  • Console.app – Shows the same entries, but for automated detection we rely on the log files under /var/log/system.log or the structured logging API.
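To query the unified log for these entries ad hoc, the log utility accepts predicates on the subsystem and process (the subsystem string is the one noted above; exact message text varies by macOS version):

# Keychain-related entries from the last hour
log show --last 1h --predicate 'subsystem == "com.apple.security.keychain"'

# Or watch invocations of the security binary itself
log show --last 1h --predicate 'process == "security"'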

Note: It may also be possible, even with the non-exportable option set, to utilize Mimikatz (e.g., mimikatz log "crypto::certificates /export /systemstore:my" exit) or Chainbreaker (python -m chainbreaker --export-x509-certificates) to extract certificates from user space during post-exploitation; but that feels like a whole different topic entirely.

Building Detection Rules in Sigma

Sigma is a vendor‑agnostic rule format that can be translated into SPL (Splunk), KQL (Sentinel), Lucene DSL (Elastic) and many others. Below are two examples—one for Windows, one for macOS—targeting the export commands discussed earlier.

Windows Sigma Rule – Detect CertUtil Export

title: Detection of Device Trust Certificate Export via certutil
id: e7c9b1a4-3d6f-4eaa-b5c8-0c2f6a9c1234
status: stable
description: |
  Detects execution of certutil.exe with the -exportpfx flag which is commonly used to extract a Device Trust private key even when it is marked non‑exportable.
author: Michael Contino
date: 2025-11-06
logsource:
  product: windows
  service: sysmon
detection:
  selection_process:
    Image|endswith: '\\certutil.exe'
  selection_export:
    CommandLine|contains: '-exportpfx'
  condition: all of selection_*
fields:
  - CommandLine
  - ParentImage
  - User
level: high
tags:
  - attack.t1552.006   # Unsecured Credentials: Private Keys
  - detection.DeviceTrustExport

If you prefer the native Windows Security log, change service to security and use Event ID 4688 in the detection block.

macOS Sigma Rule – Detect security Export

title: Detection of Device Trust Certificate Export via security CLI
id: a1f4c9e2-7b2a-44d5-a6f3-58c9f2d8b765
status: stable
description: |
  Flags execution of the macOS `security` command with arguments that request export of private keys, indicating an attempt to extract a Device Trust certificate.
author: Michael Contino
date: 2025-11-06
logsource:
  product: macos
  service: auditd
detection:
  selection_process:
    exe|endswith: '/usr/bin/security'
  selection_export:
    cmdline|contains: 'export'
  selection_privkey:
    cmdline|contains: '-t priv'
  condition: all of selection_*
fields:
  - exe
  - cmdline
  - auid
level: high
tags:
  - attack.t1552.006   # Unsecured Credentials: Private Keys
  - detection.DeviceTrustExport

For environments that rely on Unified Logging, replace service with osquery or a custom parser that extracts the com.apple.security.keychain messages.

Converting Sigma to Platform Queries

Most SIEMs provide an online converter (e.g., Sigma Converter at sigmahq.io). Below are quick examples for three popular platforms.

Splunk SPL

index=windows sourcetype="XmlWinEventLog:Microsoft-Windows-Sysmon/Operational"
(Image="*\\certutil.exe" AND CommandLine="* -exportpfx*")

Elastic Lucene DSL

{
  "query": {
    "bool": {
      "must": [
        { "wildcard": { "process.executable": "*certutil.exe" }},
        { "wildcard": { "process.command_line": "*-exportpfx*" }}
      ]
    }
  }
}

Azure Sentinel KQL

Sysmon
| where Image endswith @"\certutil.exe"
| where CommandLine contains "-exportpfx"

Apply the same conversion logic to the macOS rule, swapping process_name for exe and adjusting the field names accordingly.

Attack Flow Diagram – Exporting a Device Trust Certificate

Visualizing the steps helps analysts understand the context of an alert.

graph TD
    A[Compromised Endpoint] --> B[Locate Device Trust Cert in Store]
    B --> C{Export Attempt}
    C -->|Windows| D[certutil -exportpfx]
    C -->|macOS| E[security export -t priv]
    D --> F[PKCS12 file written to TEMP file]
    E --> F
    F --> G[Copy to Staging Location USB, Share, Cloud]
    G --> H[Exfiltration over Network or Physical Media]
    H --> I[Attacker Reuses Private Key on Remote Service]
    style A fill:#ffcccc,stroke:#c00
    style I fill:#ccffcc,stroke:#090

Explanation of the flow

  1. Compromised Endpoint – The attacker already has local admin or system privileges.
  2. Locate Device Trust Cert – Queries the certificate store (certutil -store My or security find‑identity).
  3. Export Attempt – Executes a native export tool (Windows or macOS).
  4. PKCS12 file written – The private key is now in clear text inside the .p12.
  5. Copy to Staging Location – Moves the file to a place where it can be exfiltrated.
  6. Exfiltration – Could be a cloud upload, SMB share copy, or USB drop.
  7. Attacker Reuses Private Key – The stolen key is used for impersonation, lateral movement, or credential stuffing against services that trust the Device Trust certificate.

Deploying and Tuning Your Rules

Enriching Alerts

When an export is detected, enrich the event with:

  • Certificate Thumbprint – Extracted from the command line to correlate with asset inventory.
  • Process Hash – Compare against known good binaries (e.g., Microsoft‑signed certutil.exe).
  • Endpoint Context – OS version, posture level, logged in user(s), and serial number/UUIDs.

Enrichment enables faster triage: if the exporter is a legitimate admin running from a hardened workstation, you may downgrade the alert. Otherwise, trigger an automated response.

Preventive Controls

Detection is only half of the story. Harden the environment to reduce the chance that an attacker can run the export commands:

  • Set non-exportable options – Windows: keep using export-blocking options when importing (certutil -importPFX [PFXfile] NoExport). macOS: likewise mark the key as non-extractable with -x when importing (security import <p12_path> -x).
  • AppLocker / SRP – Windows: block certutil.exe except for signed admin scripts. macOS: use /usr/sbin/launchd policies to restrict execution of the security binary.
  • Group Policy – Windows: set “Do not allow private key export” in PKI templates (doesn’t fully prevent certutil). macOS: enable Secure Enclave-only keys (-T -s), which refuse export without Touch ID.
  • Audit policies – Windows: enable the “Process Creation” and “Credential Access” sub‑categories. macOS: turn on auditd with execve monitoring for /usr/bin/security.

Why Detection Complements Non‑Exportable Keys

  • Non‑exportable flags protect against casual dumping but do not stop a privileged user from invoking OS APIs that return the private key material.
  • Native utilities (certutil.exe, security) are widely available, leaving an audit trail that can be detected by any modern SIEM/XDR platform.
  • By crafting Sigma rules focused on command‑line arguments and process creation events, you gain a vendor‑agnostic detection layer that works across Windows and macOS fleets.

Implementing the examples in this post will give your organization an early warning system for detecting Device Trust certificate exports, reducing the risk of credential theft, lateral movement, and unauthorized device impersonation.

Taking Action

  1. Deploy the SIEM/XDR rules above into your rule repository.
  2. Enable detailed process creation auditing on all Windows endpoints (Event ID 4688) and auditd logging on macOS.
  3. Test the detection by intentionally exporting a test certificate in a lab environment – verify that alerts fire with the expected context.
  4. Harden your endpoint policies to restrict certutil and security usage to approved admin accounts only.

Stay ahead of attackers and resistant users alike, but understand that the “non‑exportable” flag is not always enough. Detecting Device Trust certificate exports gives you the visibility you need to protect the cryptographic foundation of your Zero‑Trust architecture.