On this page: Adjacent services often feed into the cluster, and a compromise of the cluster can in turn affect their exposure. Reviewing build pipelines, CI/CD systems, and cloud services.
We’ve been focusing on the security of the cluster itself, but what if I told you that in my experience, the easiest way to compromise a cluster was to target its external integrations, like CI/CD, image hosting, and the build pipeline?
To me, a Kubernetes cluster represents the center of an ecosystem – it’s the output of your DevOps pipelines, and therefore these need to be just as secure as your cluster configuration.
Imagine an ultra-secure Kubernetes cluster: you’ve spent months designing it, with clear multi-tenant isolation strategies, complex RBAC controls, strict access controls, isolated resources, restrictive Pod policies, and so on.
…and then your entire company uses an outdated version of Jenkins to build and deploy into it. But this would never happen; Jenkins has a strong history of securing its product, right?
For this phase you’ll need to collect information on those secondary services, including:

- the image build pipeline
- the image registry
- CI/CD and deployment tooling
- secrets management
The image build pipeline is a very difficult service to secure. See the talk from Pi Unnerup and Andrew Martin for more information.
In short, images need to be built before they can be deployed into a cluster, and a container image is effectively an operating system userland: how do you build one without putting the kernel of the shared build host at risk?
Imagine an attacker able to compromise the build infrastructure: what would the impact of that exploit be? Would the entire organization be affected?
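One concrete thing to check on build hosts is whether build jobs can reach the container runtime’s socket: a process that can talk to the Docker or containerd socket effectively has root on that host. A minimal sketch, assuming common default socket paths (adjust for your environment):

```shell
# Check common container-runtime socket paths on a build host.
# Any build job that can reach one of these effectively has root
# on the host. These paths are common defaults, not guaranteed.
for s in /var/run/docker.sock /run/containerd/containerd.sock; do
  if [ -S "$s" ]; then
    echo "runtime socket exposed: $s"
  fi
done
echo "socket check complete"
```

If a socket shows up here, anything that can schedule a build step on this host can likely mount the host filesystem or start privileged containers.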
An image registry is where the images are stored and usually the output from your image building pipeline. Ask yourself:
Who has read access, and who has write access? How are the stored files protected?
Oftentimes you will notice that a Kubernetes cluster only needs read access to pull down images, yet the credentials its nodes hold allow pushing new images to the registry as well.
This means that a compromised workload could push malicious images into the registry, compromising every cluster that later pulls them.
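One way to assess this is to inspect the registry credentials a node actually holds. The file below is an illustrative docker-style auth config (on real nodes the location varies, e.g. the kubelet’s or runtime’s credential file); the point is that stored `auth` values are only base64-encoded `user:password` pairs, so whoever can read the file holds the credential:

```shell
# Illustrative docker-style auth config; on a real node this lives in a
# credential file owned by the kubelet or container runtime (path varies).
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
{"auths":{"registry.example.com":{"auth":"dXNlcjpwYXNz"}}}
EOF
# The "auth" value is just base64("user:password") -- no encryption:
grep -o '"auth":"[^"]*"' "$cfg" | cut -d'"' -f4 | base64 -d
echo
rm -f "$cfg"
```

With the recovered credential in hand, the follow-up question is whether the registry grants it push as well as pull rights.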
Similar to building images, what CI/CD tooling is the organization using to manage deployment from source to production? Does your organization already have a good handle on this?
If not, it’s going to be a critical component to assess the risk of, independent of the Kubernetes cluster.
Are you using Kubernetes’ built-in secrets management for most things? That’s sufficient for some organizations, but the default backend, etcd, does not always provide the best security, as Omer Hevroni’s AppSec Cali 2019 talk Can Kubernetes Keep a Secret? explores. You’ll need to take into account whether something like HashiCorp Vault or Bitnami Sealed Secrets fills a gap in your Kubernetes secrets plan. If you want to see how easy Kubernetes secrets are to access, you can use the krew view_secret plugin:
> kubectl view_secret <secretname>
Here’s an example of the same command, except this time taking another service account’s token and using it to try to access a secret. This will validate whether RBAC or other controls are sufficiently restricting access:
> kubectl get secrets
NAME                  TYPE                                 DATA   AGE
default-token-jpm9z   kubernetes.io/service-account-token  3      18h
> kubectl view_secret default-token-jpm9z
Multiple sub keys found. Specify another argument, one of:
-> ca.crt
-> namespace
-> token
> export TOKEN=$(kubectl view_secret default-token-jpm9z token)
> kubectl get secret --token=$TOKEN
NAME                  TYPE                                 DATA   AGE
default-token-jpm9z   kubernetes.io/service-account-token  3      18h
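Note that view_secret is only a convenience: plain kubectl exposes the same data, because Kubernetes stores secret values base64-encoded rather than encrypted. A sketch (the jsonpath command in the comment requires cluster access and reuses the secret name from the example above; the final line simulates the decode step locally):

```shell
# Equivalent of view_secret with plain kubectl (needs cluster access):
#   kubectl get secret default-token-jpm9z -o jsonpath='{.data.token}' | base64 -d
# Secret values are merely base64-encoded, so "protection" is one decode away:
echo 'c3VwZXItc2VjcmV0' | base64 -d   # decodes to "super-secret"
echo
```

This is why anyone with read access to secrets in a namespace, or to the etcd data backing them, should be treated as holding the secrets themselves.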