
On this page: Identify any internally or externally exposed services, as they represent attack surface. Pay special attention to services exposed outside of the cluster, as they are ripe for attack.

3. What Services are Publicly Exposed?

Any deployment or service in a cluster can be publicly exposed to the Internet. Some clusters use the built-in LoadBalancer or NodePort service types to expose workloads, while others use a third-party load balancer or integrate with the cloud provider’s services.
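A quick way to enumerate these is to filter Services by type. The snippet below uses a canned sample of `kubectl get svc -A` output so the filter itself can be shown; in practice you would pipe the real command into the awk filter instead (the namespace and service names are made up):

```shell
#!/bin/bash
# Filter Services whose type can expose them outside the cluster.
# Canned sample of `kubectl get svc -A` output -- replace with the real
# command's output when running against your cluster.
sample='NAMESPACE   NAME         TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)
default     web          LoadBalancer   10.0.0.10      203.0.113.5   80:30080/TCP
default     api          ClusterIP      10.0.0.11      <none>        8080/TCP
kube-system admin-ui     NodePort       10.0.0.12      <none>        443:31443/TCP'

# LoadBalancer and NodePort Services are reachable from outside the
# cluster network; ClusterIP Services are not (absent other routing).
exposed=$(printf '%s\n' "$sample" | awk '$3 == "LoadBalancer" || $3 == "NodePort" {print $2}')
echo "$exposed"
```

This only catches Service-level exposure; Ingress objects and cloud-native load balancers created outside Kubernetes need a separate review.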

For example, most tools that build Kubernetes clusters, and most managed Kubernetes providers, expose the Kubernetes API to the Internet by default. That means the HTTPS service used to manage and deploy objects into your cluster is sitting exposed to any attacker.
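You can check your own API server from outside the cluster: many API servers answer unauthenticated requests to `/version`. A minimal sketch, assuming `curl` is available; the address below is a placeholder from the TEST-NET range, so substitute your cluster's real endpoint (visible in `kubectl config view`):

```shell
#!/bin/sh
# Probe a Kubernetes API server endpoint for anonymous reachability.
check_api() {
  # -k: API servers commonly present a cluster-internal CA
  if curl -sk --connect-timeout 3 "https://$1/version" >/dev/null 2>&1; then
    echo "exposed"
  else
    echo "not reachable"
  fi
}

# Placeholder address (TEST-NET-3) -- substitute your API server's
# host:port here.
check_api 203.0.113.10:6443
```

If this prints `exposed` when run from an arbitrary Internet host, your control plane is part of your public attack surface.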

If you don’t believe this happens, check out Binary Edge, which has built-in support for extracting information from publicly exposed endpoints without authentication.


4. What Services are Internally Exposed?

This should be relatively easy to profile, as you likely have experience running nmap or reviewing network policies to find exposed services. You’re just looking for adjacent services that are reachable from the Kubernetes cluster.

There are two ways to review the networking of your cluster:

  1. Read the configs: kubectl get networkpolicies -A will give you all of the Kubernetes network restrictions in place. The problem is that these configs can’t be completely trusted on their own. There are too many ways to bypass controls that try to resolve to individual objects: for example, your CNI may resolve hostnames for you, but if that information is stale or incorrect, the rules will be wrong. That’s why this review shouldn’t be used alone.
  2. Manual testing: This is not the most thorough or time-efficient approach, but it does let you verify whether something is accessible without needing to go through Kubernetes itself. Run nmap against services in adjacent VPCs and try to find everything exposed on port 80. Does it match what your configs show?
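One way to reconcile the two approaches is to diff what your configs claim against what a scan actually finds. A minimal sketch; both port lists here are placeholders you would populate from your config review and from parsed nmap output (e.g. `nmap -oG`):

```shell
#!/bin/bash
# Diff expected exposure (from configs) against observed exposure (from a scan).
# Both lists are stand-ins for your real data.
expected="80
443"
observed="80
443
8080"

# comm -13 prints lines unique to the second (observed) list: ports that
# are reachable but not accounted for in your configs.
unexpected=$(comm -13 <(printf '%s\n' "$expected" | sort) \
                      <(printf '%s\n' "$observed" | sort))
echo "Unexpected open ports: $unexpected"
```

Anything in the unexpected list is either a gap in your network policies or a gap in your documentation; both are worth chasing down.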

But let’s start with the obvious one: is your cloud provider’s metadata service exposed to the cluster? If it is, you may be able to pivot from a compromised Pod to other cloud services.

You can do a quick test of this yourself using busybox within a cluster:

> kubectl run -it myshell --image=busybox -- /bin/sh
# busybox wget -q -O - http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key

Did it return results? That may mean the cloud metadata API can be reached, and abused, by any Pod in the cluster.
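If it did, one mitigation (assuming your CNI actually enforces egress policies, which not all do) is a NetworkPolicy that blocks Pod egress to the link-local metadata address. A sketch, with a hypothetical name and namespace:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-metadata-egress   # hypothetical name
  namespace: default
spec:
  podSelector: {}              # applies to all Pods in the namespace
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
              - 169.254.169.254/32   # cloud metadata service
```

Note this allows all other egress; in a locked-down cluster you would instead start from default-deny and allowlist what Pods actually need.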