Achieving Network, Namespace, and Cluster Isolation in Kubernetes - Part 1

Lukas Gentele
Hrittik Roy

As Kubernetes grows in popularity, it becomes an increasingly tempting target for bad actors. Unauthorized access or external attacks on pods can result in data loss, data leakage, or even complete system compromise.

Fortunately, Kubernetes can protect your cluster from such threats through segregation at different levels. And if you need multi-tenancy, isolation lets tenants’ resources coexist without impacting each other.

This article covers the different types of isolation features available in Kubernetes, how they implement isolation, and their limitations. By the end, you should have a strong understanding of Kubernetes’s isolation capabilities so you can craft your security posture as required.

#Types of Isolation in Kubernetes

In Kubernetes, hardware resources are generally shared with the help of virtual segregation and namespaces. These basic abstraction layers let you carve multiple virtual clusters out of one physical cluster.

This introduces a need for isolation. Your virtual clusters should have segregation built in to effectively divide cluster resources between multiple users, teams, or projects. Generally, where namespaces are concerned, you want to ensure segregation at three levels: API, network, and cluster isolation.

#API Isolation

Kubernetes implements API isolation through a variety of approaches, including roles, role bindings, and cluster roles. These techniques manage access to the Kubernetes API by limiting which users or groups can act on which resources.

Of course, role-based access control (RBAC) means you need to configure access to your resources. To demonstrate, let’s create a namespace and deploy two pods: one in the default namespace and one in the newly created test-namespace.

kubectl create namespace test-namespace
kubectl run nginx-test --image=nginx -n test-namespace
kubectl run nginx-default --image=nginx

If successful, you’ll see the following messages in your terminal:

namespace/test-namespace created
pod/nginx-test created
pod/nginx-default created

Next, create a Role that grants read access only to the pods resource in test-namespace:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: test-namespace
  name: test-master-role
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]

You’ll also need a RoleBinding that associates the role with a specific user or group. Here, it is alice:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: test-master-binding
  namespace: test-namespace
subjects:
- kind: User
  name: alice
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: test-master-role
  apiGroup: rbac.authorization.k8s.io
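
Save both manifests and apply them with kubectl apply; the file names here are only illustrative:

kubectl apply -f role.yaml
kubectl apply -f rolebinding.yaml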

Once applied, querying test-namespace for pods as alice succeeds. However, trying to fetch the same resources from another namespace returns a Forbidden error:

hrittik@Azure:~$ kubectl get pods -n test-namespace --as alice
NAME         READY   STATUS    RESTARTS   AGE
nginx-test   1/1     Running   0          44m

hrittik@Azure:~$ kubectl get pods --as alice
Error from server (Forbidden): pods is forbidden: User "alice" cannot list resource "pods" in API group "" in the namespace "default"

You can inspect the permissions in more detail with the kubectl auth can-i --list -n test-namespace --as alice command. For pods, the user alice has the get, list, and watch verbs:

hrittik@Azure:~$ kubectl auth can-i --list -n test-namespace --as alice
Resources                                       Non-Resource URLs   Resource Names   Verbs
selfsubjectaccessreviews.authorization.k8s.io   []                  []               [create]
selfsubjectrulesreviews.authorization.k8s.io    []                  []               [create]
pods                                            []                  []               [get list watch]
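
You can also check a single verb directly. Outside test-namespace, the answer for alice is no:

hrittik@Azure:~$ kubectl auth can-i get pods --as alice
no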

As you can see, RBAC makes segregation between Kubernetes namespaces possible. But there’s also ClusterRole, which grants users or groups access to cluster-level resources. ClusterRoles bypass namespace-level API isolation and are intended for cluster administrators, so configuring access correctly at each level is crucial for appropriate isolation.

To demonstrate this, create a ClusterRole that grants access to all pods across all namespaces:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]

You’ll also need a ClusterRoleBinding that associates the ClusterRole with a user or group. Here, it is the user bob:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: pod-reader-binding
subjects:
- kind: User
  name: bob
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
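
As before, save and apply both manifests (file names are illustrative):

kubectl apply -f clusterrole.yaml
kubectl apply -f clusterrolebinding.yaml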

Once applied, query the permissions again; you’ll see that bob can get, list, and watch pods in all namespaces.

hrittik@Azure:~$ kubectl auth can-i --list --as bob
Resources                                       Non-Resource URLs   Resource Names   Verbs
selfsubjectaccessreviews.authorization.k8s.io   []                  []               [create]
selfsubjectrulesreviews.authorization.k8s.io    []                  []               [create]
pods                                            []                  []               [get list watch]

You can verify this by running kubectl get pods -A --as bob to list pods across all namespaces, including those in kube-system. Here’s the output:

NAMESPACE        NAME                                  READY   STATUS    RESTARTS   AGE
default          busybox                               1/1     Running   0          157m
default          nginx                                 1/1     Running   0          69m
default          nginx-default                         1/1     Running   0          68m
kube-system      ama-logs-4qgt2                        2/2     Running   0          24h
kube-system      ama-logs-6xfrx                        2/2     Running   0          24h
kube-system      ama-logs-rs-66798f6f45-dkv8h          1/1     Running   0          24h
kube-system      azure-ip-masq-agent-9hsxq             1/1     Running   0          24h
kube-system      azure-ip-masq-agent-b8vgq             1/1     Running   0          24h
kube-system      cloud-node-manager-b5tqd              1/1     Running   0          24h
kube-system      cloud-node-manager-zsjqm              1/1     Running   0          24h
kube-system      coredns-6b9cb549f4-8ftxw              1/1     Running   0          23h
kube-system      coredns-6b9cb549f4-8wr29              1/1     Running   0          24h
kube-system      coredns-autoscaler-7fd9d8d7cf-mmvtj   1/1     Running   0          24h
kube-system      csi-azuredisk-node-mrrwk              3/3     Running   0          24h
kube-system      csi-azuredisk-node-sgf5m              3/3     Running   0          24h
kube-system      csi-azurefile-node-d2swc              3/3     Running   0          24h
kube-system      csi-azurefile-node-rmsgk              3/3     Running   0          24h
kube-system      konnectivity-agent-f9cb5574f-6l2gk    1/1     Running   0          23h
kube-system      konnectivity-agent-f9cb5574f-cgqp4    1/1     Running   0          23h
kube-system      kube-proxy-mrm24                      1/1     Running   0          18h
kube-system      kube-proxy-qc79z                      1/1     Running   0          18h
kube-system      metrics-server-59996b9895-6jjk8       2/2     Running   0          24h
kube-system      metrics-server-59996b9895-96q6r       2/2     Running   0          24h
test-namespace   nginx-test                            1/1     Running   0          65m
test             nginx-test                            1/1     Running   0          68m

Querying pod objects across namespaces is harmless here, but badly designed ClusterRoles can hand out far broader access and compromise a cluster’s security. And even with well-designed RBAC, API access control alone doesn’t isolate workloads; you still need isolation at other levels.

#Network Isolation

Network isolation between namespaces prevents communication between different networks or network segments and therefore helps secure access. By default, however, Kubernetes does very little to isolate traffic across namespaces. Let’s look at this behavior firsthand.

First, create a namespace where you want to deploy your applications:

kubectl create namespace network

With the namespace created, you can deploy a sample NGINX image to it and expose port 80 so the endpoint is reachable.

kubectl run nginx --image=nginx --labels app=nginx --namespace network --expose --port 80

A successful deployment will create a Pod and a Service in your namespace:

service/nginx created
pod/nginx created

To list all the created objects, query with the kubectl get command and pass the namespace name with the -n flag:

hrittik@Azure:~$ kubectl get all -n network
NAME        READY   STATUS    RESTARTS   AGE
pod/nginx   1/1     Running   0          45s

NAME            TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
service/nginx   ClusterIP   10.0.124.23   <none>        80/TCP    45s

Now, fetch the pod’s IP address, as you’ll need it to check whether a pod in another namespace can reach the endpoint. You can either run kubectl describe and find the IP field, or run the following command to filter it out:

kubectl describe pod nginx -n network | grep -w 'IP:' -m 1 | awk '{print $2}'
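
Alternatively, a jsonpath query returns the pod IP directly:

kubectl get pod nginx -n network -o jsonpath='{.status.podIP}'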

To test the isolation, create a pod in the default namespace and start a shell session in it using the following command:

kubectl run busybox --rm -ti --image=alpine -- /bin/sh

Install the required curl utility:

apk add curl

With the busybox pod running, curl installed, and the NGINX pod’s IP address at hand, you can fetch the NGINX welcome page. As the output below shows, the NGINX pod running in the network namespace is indeed reachable from the busybox pod in the default namespace.

You can conduct the same experiment with several namespaces and achieve the same outcome.

/ # curl 10.244.1.12:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
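
Note that the pod IP isn’t special here. Cross-namespace access also works through the service’s cluster DNS name, assuming the default cluster DNS setup:

/ # curl http://nginx.network.svc.cluster.local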

As you’ve probably noticed, your Kubernetes cluster’s default behavior provides no network-level isolation across namespaces; there’s no network filtering acting as a layer of security. If you want network isolation, you’ll need to configure and maintain NetworkPolicy resources yourself.
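
As a minimal sketch, the following NetworkPolicy allows ingress to pods in the network namespace only from pods in that same namespace. Keep in mind that NetworkPolicy is enforced only if your CNI plugin supports it (for example, Calico or Cilium); on clusters without such a plugin, the object is accepted but has no effect:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace-only
  namespace: network
spec:
  podSelector: {}  # Select every pod in the namespace
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}  # Permit traffic only from pods in this namespace

With this policy applied, repeating the curl test from the busybox pod in the default namespace should time out.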

#Cluster Isolation

There’s no cluster-level isolation built into Kubernetes by default. If a namespace is compromised, it can potentially affect other namespaces and resources in the same cluster, therefore compromising your entire cluster. For example, an attacker who gains access to a pod in one namespace can potentially access other pods and services in the same cluster, even if they are in a different namespace.

Let’s demonstrate this. First, build a privileged pod, then access the node by launching a shell session. The session will have full access to the host’s file system and to its PID and network namespaces (Linux namespaces, that is, not Kubernetes ones).

You can build a privileged pod with the following manifest and apply it to your cluster using kubectl apply -f <file_name>:

apiVersion: v1
kind: Pod
metadata:
  name: busybox  # Name of the pod
spec:
  containers:
  - name: shell  # Name of the container
    image: ubuntu  # Docker image used for the container
    stdin: true  # Keep stdin open for an interactive shell
    securityContext:
      privileged: true  # Give the container privileged access
    volumeMounts:
    - name: host-root-volume  # Name of the volume to be mounted
      mountPath: /host  # Mount path inside the container
      readOnly: true  # Mount volume as read-only
  volumes:
  - name: host-root-volume  # Name of the volume to be created
    hostPath:
      path: /  # Path on the host machine to mount as a volume
  hostNetwork: true  # Use the host network namespace
  hostPID: true  # Use the host PID namespace

Once the pod is created, use the following command to see which node it’s running on. You’ll need this to verify the access in later steps:

controlplane $ kubectl get pods -o wide
NAME      READY   STATUS             RESTARTS   AGE   IP            NODE     NOMINATED NODE   READINESS GATES
busybox   1/1     Running            0          14m   172.30.2.2    node01   <none>           <none>

With that data, you can exec into the pod and chroot into the host’s file system to get a shell session on the node:

kubectl exec -it busybox -- chroot /host

The hostname command returns the name of the node your container is running on. Clearly, without proper isolation, compromising a node is a simple matter.

$ hostname
node01

With access to the node, a bad actor can reach the whole system: installing malware or changing the system configuration is not difficult. Moreover, Kubernetes namespaces impose no limits here, as you’re accessing the host directly. Enforcing isolation that extends to the cluster level is therefore important for a strong security posture.
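
As one built-in mitigation, Kubernetes v1.25 and later ship the Pod Security admission controller, which can reject privileged pods per namespace. A minimal sketch:

kubectl label namespace default \
  pod-security.kubernetes.io/enforce=restricted

With the restricted level enforced, the privileged manifest above is rejected at admission time. This blocks the escape route shown here, but it still doesn’t isolate namespaces from one another.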

#Conclusion

Kubernetes helps you quickly meet user demands by abstracting away a lot of infrastructure complexity. However, as you’ve seen in this article, it provides limited isolation out of the box. You’ll need to achieve network, namespace, and cluster isolation yourself, in ways that don’t compromise your security posture.

In Part 2 of this series, you’ll see how to achieve proper isolation in Kubernetes with Loft Labs. You’ll learn how to unlock Kubernetes efficiency at scale with solutions like vcluster for multi-tenant clusters, enforcing strict security boundaries and providing isolation for your cluster with very low management overhead.
