In Part I of this series, you explored the limitations of Kubernetes with respect to API isolation, network isolation, and cluster isolation. You saw firsthand how an attacker could gain access to any of your cluster nodes from a namespace if security measures aren’t taken. It’s time to dive into Part II.
This article will show you how to implement security measures necessary to ensure proper isolation, using Kubernetes native resources as well as Loft's virtual clusters.
Achieving Isolation in Kubernetes
Kubernetes offers a multitude of native resources that allow you to address different tasks and challenges. In particular, you can use network policies to achieve network, namespace, and cluster isolation.
Kubernetes Network Isolation Using Calico Network Policies
In Kubernetes, you can control traffic flow between pods using network policies. For that, you'll need to install a supported Container Network Interface (CNI) plugin, but keep in mind that not all CNIs support all network policies. This guide uses Calico, as it supports most network policies.
Once you’ve installed Calico, you can define a `NetworkPolicy` that describes your desired network isolation rules. For example, you can create a global network policy that denies ingress traffic to pods labeled `nginx` if the traffic comes from pods labeled `untrusted`.
```yaml
kind: GlobalNetworkPolicy
apiVersion: projectcalico.org/v3
metadata:
  name: my-network-policy
spec:
  selector: app == 'nginx'
  ingress:
    - action: Deny
      protocol: TCP
      source:
        selector: app == 'untrusted'
```
Apply the `NetworkPolicy` to your Kubernetes cluster using `kubectl`:

```shell
kubectl apply -f my-network-policy.yaml
```
From now on, traffic that matches the labels set in the definition will be denied for the selected pods.
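The same Calico policy format can also express explicit allow rules. The following is a sketch only; the `trusted` label is a hypothetical assumption for illustration, not something defined elsewhere in this guide:

```yaml
# Hypothetical companion policy: explicitly allow TCP traffic
# from pods labeled 'trusted' to the same nginx pods.
kind: GlobalNetworkPolicy
apiVersion: projectcalico.org/v3
metadata:
  name: allow-trusted-policy
spec:
  selector: app == 'nginx'
  ingress:
    - action: Allow
      protocol: TCP
      source:
        selector: app == 'trusted'
```

Combining Deny and Allow rules like this lets you build a default-deny posture while still whitelisting the traffic your workloads actually need.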
Another interesting example is the `deny-all-policy` network policy demonstrated in the Kubernetes documentation, which shows how you can use network policies in conjunction with namespaces to achieve a higher degree of isolation.
Kubernetes Namespace Isolation
As mentioned in Part I, namespaces in Kubernetes provide a way to divide cluster resources among multiple users or groups. However, as Part I also discussed, namespaces don’t offer real isolation from other namespaces. You can solve that by using what you learned just now in the previous section.
Let's adjust the `deny-all-policy` example so that it denies all traffic to pods with the label `app: nginx` in the `network` namespace.
```yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-all-policy
  namespace: network
spec:
  podSelector:
    matchLabels:
      app: nginx
  ingress: []
```
Apply the new `NetworkPolicy` to your Kubernetes cluster:

```shell
kubectl apply -f deny-all-policy.yaml
```
According to this policy, the `nginx` deployment you used in Part I won’t allow traffic from any pod that attempts to connect to it. Give it a try, using the same test as before to check the network isolation.

Hint: if you don't remember the IP address of the `nginx` pod, use the command:

```shell
kubectl describe pod nginx -n network | grep -w 'IP:' -m 1 | awk '{print $2}'
```
Now, run a test pod:

```shell
kubectl run testpod --rm -ti --image=alpine -- /bin/sh
```
As before, install the required `curl` utility:

```shell
apk add curl
```
You can try to get a response from the pod using the command:

```shell
curl {nginx-IP-address}:80
```
Unlike before, the output this time should be similar to the following:

```shell
/ # curl 10.42.0.17
curl: (28) Failed to connect to 10.42.0.17 port 80 after 0 ms: Couldn't connect to server
```

This output shows that the `NetworkPolicy` is denying all traffic, as expected.
You can use the `NetworkPolicy` resource to create definitions that suit your specific requirements. In the Kubernetes documentation, you can find examples where ingress and egress rules are scoped by IP blocks, ports, namespaces, and more.
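As a sketch of what such a rule can look like, the following policy would allow ingress to the same `nginx` pods only from a specific IP range on port 80. The CIDR below is an illustrative assumption, not a value from this guide:

```yaml
# Illustrative only: allow ingress to pods labeled app: nginx
# in the 'network' namespace, but only from 10.0.0.0/24 on TCP port 80.
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-from-ip-block
  namespace: network
spec:
  podSelector:
    matchLabels:
      app: nginx
  ingress:
    - from:
        - ipBlock:
            cidr: 10.0.0.0/24
      ports:
        - protocol: TCP
          port: 80
```

Because this policy selects the same pods as the earlier deny-all example, the two combine into a default-deny-with-exceptions setup: everything is blocked except traffic matching this allow rule.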
Back to our example! Using network policies, you can isolate pods running in the `network` namespace. This all sounds good, but there’s still a problem to solve.
There's a catch, though: a `NetworkPolicy` only governs traffic to and from pods. Traffic that originates from a cluster node itself is typically not blocked by pod-level policies, so an attacker with access to a node can bypass the policy entirely.
You can check this by using the `busybox` deployment from Part I that runs in the `default` namespace:

```shell
kubectl exec -it busybox -- chroot /host
```
Run `curl` targeting the `nginx` pod, and you’ll see that you can still connect to the pod and get a response:

```html
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
```
The previous output demonstrates that keeping your Kubernetes cluster secure goes beyond enforcing network policies and namespaces.
Achieving isolation in Kubernetes can be daunting—it requires a lot of manual work, which takes time. However, if you use a solution like Loft, your team can isolate Kubernetes networks, namespaces, and clusters effortlessly.
Achieving Isolation Using Loft
Achieving isolation in your Kubernetes cluster doesn't have to be time-consuming. Let’s walk through a quick demonstration.
Start by signing up for free with Loft to access the Loft CLI installation instructions. The local client is available for Mac (x86_64/Apple Silicon), Linux (x86_64/ARM), and Windows PowerShell.
Once installed, all you have to do is run `loft start` from your console. You’ll see output similar to the following image.
The command uses your current Kubernetes context to deploy the Loft agent to your cluster automatically. If you can't connect to your Kubernetes cluster, you can review Loft’s documentation for troubleshooting.
Once you’re connected, use the address and password shown in the console to access Loft’s web UI. After registering with your email and name, you can take a guided tour that teaches you the basics of Loft. If you follow the tutorial, it will show you how to create your first virtual cluster.
If you choose not to follow the tutorial, don't worry; creating a virtual cluster is easy.
But first things first, you should review the available templates for creating virtual clusters. Navigate to the Templates tab, and you’ll see a screen similar to the following image.
Click on the default template, Isolated Virtual Cluster Template. The next screen allows you to edit the isolation mode configuration settings, which leverage vcluster project isolation and security features. For now, just make sure that both `isolation` and `networkPolicy` are set to `true`.
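Under the hood, those template settings correspond to values in the virtual cluster's Helm chart. As a rough sketch (field names follow the pre-0.20 vcluster chart values; check the docs for your Loft/vcluster version), the relevant fragment looks something like this:

```yaml
# vcluster Helm values fragment enabling isolated mode:
# pod security enforcement, resource quotas, and a default
# network policy around the virtual cluster's namespace.
isolation:
  enabled: true
  networkPolicy:
    enabled: true
```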
Click Save Changes when you’re ready. Return to the home screen by navigating to the Projects tab, then click Create Virtual Cluster.
Choose the template you just edited, Isolated Virtual Cluster. This will take you to the next screen, where you can give different cluster role permissions to individual users or groups.
Add your current user, assign it a Cluster Admin role, and click Create Virtual Cluster.
It will take a few seconds for your virtual cluster to be deployed.
If you click on the name of the cluster, in this case `my-vcluster`, you’ll see that inside are the namespaces that you usually expect from a Kubernetes cluster.

However, if you run `kubectl get namespaces` from your console, you’ll notice that what you’ve actually created is a new namespace within which the virtual cluster `my-vcluster` lives.
```shell
NAME                         STATUS   AGE
default                      Active   16h
kube-system                  Active   16h
kube-public                  Active   16h
kube-node-lease              Active   16h
loft                         Active   16h
loft-p-default               Active   16h
network                      Active   18m
loft-default-v-my-vcluster   Active   5m33s
```
In fact, a virtual cluster is formally defined as a fully functional Kubernetes cluster that runs inside a namespace of another Kubernetes cluster.
The next step is to give a user access to your virtual cluster. Hover over the triangle next to `my-vcluster` to see a drop-down menu. Click Edit.
The next screen allows you to add users or teams to your cluster. For this example, a new user with `edit` permissions was added. Save the changes to continue.
You can create as many virtual clusters, users, and teams as your organization requires. Each one has access to their own Loft UI session, where they can only perform the tasks allowed according to their permissions.
The following image shows the Loft UI from the perspective of the user John Doe. Note that hovering over `my-vcluster` now only gives him the option to download the `kubeconfig`.
If you download that file and try to run `kubectl get ns`, the output looks like this:

```shell
$ kubectl get ns --kubeconfig=kubeconfig.yaml
NAME              STATUS   AGE
kube-public       Active   35m
kube-node-lease   Active   35m
default           Active   35m
kube-system       Active   35m
```
In other words, from John Doe's perspective, there is no way to tell if this is a virtual cluster or not.
To continue from the user session, let’s create a dummy deployment.
Navigating to the Pods tab shows the new pod running in the `default` namespace of the virtual cluster `my-vcluster`.
The user could now get the IP address of the new pod using a command like this:

```shell
kubectl describe pod deployment-fw23w-ff6774dc6-crd7f --kubeconfig=kubeconfig.yaml | grep -w 'IP:' -m 1 | awk '{print $2}'
```
The user could then launch a pod like this:

```shell
kubectl run testpod --kubeconfig=kubeconfig.yaml --rm -ti --image=alpine -- /bin/sh
```
This pod will also be displayed on John Doe’s UI. If the user hovers over the pod, he would be able to open a shell.
After installing `curl`, the user could get a response from the dummy deployment using the IP obtained earlier.
This shouldn’t be surprising since this user has a role that allows him to create, edit, and view pods. However, what happens if he tries to get a response from a pod outside the virtual cluster?
The first problem he’ll face is how to get information out of the virtual cluster.
```shell
$ kubectl get pods --all-namespaces --kubeconfig=kubeconfig.yaml
NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE
kube-system   coredns-b796cfc74-xnlpg            1/1     Running   0          84m
default       deployment-fw23w-ff6774dc6-crd7f   1/1     Running   0          34m
default       testpod                            1/1     Running   0          21m
```
But let's assume for a moment that you deleted the `NetworkPolicy` created in the previous section and gave the user the IP of the `nginx` pod running in the `network` namespace. Let's also assume that the user tries to get a response from said pod. Check out the following image for the result of such a test.
The trick we used in Part I—creating a privileged pod with access to the host—won't work either.
In fact, not even a user with the `cluster-admin` role can create such a privileged pod in this virtual cluster, all thanks to the template used during the cluster’s creation. In short, with Loft, you can quickly and easily create virtual clusters that feature namespace, cluster, and network isolation.
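To illustrate the kind of workload the isolated template blocks, consider the following manifest. It is a hypothetical example, not taken from this guide: a pod requesting host access like this would be rejected by the isolation settings' pod security enforcement:

```yaml
# Hypothetical privileged pod: isolated virtual clusters reject
# specs like this (privileged mode, host PID, host path mounts).
apiVersion: v1
kind: Pod
metadata:
  name: privileged-pod
spec:
  hostPID: true
  containers:
    - name: shell
      image: alpine
      command: ["sleep", "infinity"]
      securityContext:
        privileged: true
      volumeMounts:
        - name: host-root
          mountPath: /host
  volumes:
    - name: host-root
      hostPath:
        path: /
```

This is exactly the escape route demonstrated in Part I with the `busybox` pod, which is why blocking it by default matters.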
Final Thoughts
In this second part of our series on isolation in Kubernetes, you’ve explored how to achieve network, namespace, and cluster isolation in Kubernetes natively. You’ve seen how this can be challenging and time-consuming, and that there’s an easier and more effective way to achieve isolation, with Loft.
Discover the power of Loft's virtual clusters for Kubernetes. Maximize resource efficiency, achieve as much as 70 percent cost savings, and simplify management with lightweight virtual clusters and self-service provisioning. Unlock a new era of distributed computing with Loft and experience the future of Kubernetes management in your organization.
Additional Articles You May Like
- Kubernetes Network Policies for Isolating Namespaces
- Achieving Network, Namespace, and Cluster Isolation in Kubernetes - Part 1
- Kubernetes Namespaces vs. Virtual Clusters
- Kubernetes Traefik Ingress: 10 Useful Configuration Options
- Best Practices for Achieving Isolation in Kubernetes Multi-Tenant Environments
- Virtualizing Kubernetes Is the Key to Cost-Effective Scale