
Everyone might be a cluster-admin in your Kubernetes cluster

source link: https://www.jeffgeerling.com/blog/2020/everyone-might-be-cluster-admin-your-kubernetes-cluster

Quite often, when I dive into someone's Kubernetes cluster to debug a problem, I realize whatever pod I'm running has way too many permissions. Often, my pod has the cluster-admin role applied to it through its default ServiceAccount.

Sometimes this role was added because someone wanted their CI/CD tool (e.g. Jenkins) to manage Kubernetes resources in the cluster, and it was easier to apply cluster-admin to a default service account than to set all the individual RBAC privileges correctly. Other times, it was because someone found a shiny new tool and blindly installed it.
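Doing it properly usually isn't much more work. A least-privilege setup grants the tool's ServiceAccount only the verbs and resources it actually uses, scoped to one namespace. The sketch below is illustrative, not from the article; the names (ci-deployer, jenkins, the apps and ci namespaces) and the rule list are assumptions you'd adjust to your own tool:

```yaml
# Hypothetical least-privilege alternative to cluster-admin for a CI tool.
# Grants only Deployment management, only in the "apps" namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ci-deployer
  namespace: apps
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
# Bind the Role to the CI tool's ServiceAccount (names are illustrative).
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ci-deployer
  namespace: apps
subjects:
  - kind: ServiceAccount
    name: jenkins
    namespace: ci
roleRef:
  kind: Role
  name: ci-deployer
  apiGroup: rbac.authorization.k8s.io
```

Because this is a Role rather than a ClusterRole, a compromised CI pod can touch Deployments in apps and nothing else.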

One such example I remember seeing recently is the spekt8 project; its installation instructions tell you to apply an RBAC manifest:

kubectl apply -f https://raw.githubusercontent.com/spekt8/spekt8/master/fabric8-rbac.yaml

What the installation guide doesn't tell you is that this manifest grants cluster-admin privileges to every single Pod in the default namespace!
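Based on that description, the binding in the manifest amounts to something like the following (reconstructed, not copied verbatim from the spekt8 repo): a ClusterRoleBinding that hands cluster-admin to the default ServiceAccount, which every Pod in the default namespace uses unless configured otherwise:

```yaml
# Reconstruction of what such a manifest does; details may differ from the
# actual fabric8-rbac.yaml file.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: fabric8-rbac
subjects:
  # Every Pod in "default" that doesn't set serviceAccountName runs as this
  # ServiceAccount, so they all inherit the binding.
  - kind: ServiceAccount
    name: default
    namespace: default
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
```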

At this point, a more naive reader (and I mean that in the nicest way—Kubernetes is a complex system and you need to learn a lot!) might say: "All the Pods I run are running trusted applications and I know my developers wouldn't do anything nefarious, so what's the big deal?"

The problem is, if any of your Pods are running anything that could potentially result in code execution, it's trivial for a bad actor to script the following:

  1. Download kubectl in a pod
  2. Execute a command like:

    kubectl get ns --no-headers=true | sed "/kube-*/d" | sed "/default/d" | awk '{print $1;}' | xargs kubectl delete ns
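Because that filter chain is plain text processing, you can sanity-check what it would hand to kubectl delete offline, with no cluster involved. The namespace list below is made up for illustration:

```shell
# Simulated `kubectl get ns --no-headers=true` output; namespaces are hypothetical.
ns_list='default Active 5d
kube-system Active 5d
kube-public Active 5d
production Active 2d'

# Same filter chain as the attack: drop kube* and default, keep the name column.
echo "$ns_list" | sed "/kube-*/d" | sed "/default/d" | awk '{print $1;}'
# -> production
```

Everything that survives the two sed filters gets deleted, which on most clusters means every application namespace you have.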

If you have your RBAC rules locked down, this is no big deal; even if the application inside the Pod is fully compromised, the Kubernetes API will deny any requests that aren't explicitly allowed by your RBAC rules.

However, if you had blindly installed spekt8 in your cluster, you would now have no namespaces left in your cluster, besides the default namespace.

Try it yourself

I created a little project called k8s-pod-rbac-breakout that you can use to test whether your cluster has this problem—I typically deploy its 58-line index.php script to a Pod running PHP in a cluster and see what it returns.

You'd be surprised how many clusters give me all the info and no errors.

Too many Kubernetes users build snowflake clusters and deploy tools (like spekt8, though there are many, many others) into them with no regard for security, either because they don't understand Kubernetes' RBAC model, or because they needed to meet a deadline.

If you ever find yourself taking shortcuts to get past pesky User "system:serviceaccount:default:default" cannot [do xyz] messages, think twice before being promiscuous with your cluster permissions. And consider automating your cluster management (I'm writing a book for that) so people can't blindly deploy insecure tools and configurations to it!

