Running a virtual cluster inside a Kubernetes cluster is cool, but running a virtual cluster inside a virtual cluster is some real inception. Watch this YouTube video to see how you can do this with our open source virtual cluster tool called vcluster, or read the transcript below if you prefer.
Hi, I’m Rich with Loft Labs. Recently, I did a video introducing our open source virtual cluster tool for Kubernetes called vcluster, which lets you run a virtual cluster entirely within a namespace of your host cluster. During the video, I mentioned that you can actually run a vcluster inside of a vcluster, and I thought I’d show you how this vcluster inception works. Let’s get to a shell and have a look.
Here’s level one, my host cluster. First I’ll start up an Nginx deployment with a single pod in the host cluster.
$ kubectl create deployment nginx-deployment -n default --image=nginx --replicas=1
deployment.apps/nginx-deployment created
Okay. We now have a single Nginx pod running. Now let’s drop down to level two and create our first vcluster.
$ vcluster create vc-level-2 -n level2
[info]   Creating namespace level2
[info]   execute command: helm upgrade vc-level-2 vcluster --repo https://charts.loft.sh --kubeconfig /var/folders/gy/d3_c4t1x731_hl8qtrfkhr_h0000gn/T/525221466 --namespace level2 --install --repository-config='' --values /var/folders/gy/d3_c4t1x731_hl8qtrfkhr_h0000gn/T/595064305
[done] √ Successfully created virtual cluster vc-level-2 in namespace level2.
Use 'vcluster connect vc-level-2 --namespace level2' to access the virtual cluster
vclusters are deployed using Helm. Our vcluster is running in a namespace on the host cluster called level2.
$ kubectl get pods -n level2
NAME           READY   STATUS    RESTARTS   AGE
vc-level-2-0   0/2     Pending   0          2s
In that namespace you can see the vcluster pod coming up. vcluster uses k3s under the hood, so there’s a full-blown API server running in that pod. Let’s connect to the vcluster.
$ vcluster connect vc-level-2 -n level2
[info]   Waiting for vCluster to come up...
[done] √ Virtual cluster kube config written to: ./kubeconfig.yaml. You can access the cluster via `kubectl --kubeconfig ./kubeconfig.yaml get namespaces`
[info]   Starting port forwarding: kubectl port-forward --namespace level2 vc-level-2-0 8443:8443
Forwarding from 127.0.0.1:8443 -> 8443
Forwarding from [::1]:8443 -> 8443
vcluster connect sets up port forwarding. We’ll leave that running and open a new shell.
vcluster connect also creates a kubeconfig file that points at the virtual cluster. Let’s point our local kubectl at that kubeconfig.
$ export KUBECONFIG=./kubeconfig.yaml
And then look at the namespaces.
$ kubectl get namespaces
NAME              STATUS   AGE
default           Active   15m
kube-system       Active   15m
kube-public       Active   15m
kube-node-lease   Active   15m
$ kubectl get pods
No resources found in default namespace.
We don’t see the level2 namespace that was created in the host cluster. Our virtual cluster is running entirely inside of that namespace. We don’t see the Nginx deployment that’s running in the host cluster either. Let’s create an Nginx deployment with two replicas here at level2.
$ kubectl create deployment nginx-deployment -n default --image=nginx --replicas=2
deployment.apps/nginx-deployment created
$ kubectl get pods
NAME                                READY   STATUS              RESTARTS   AGE
nginx-deployment-84cd76b964-kp4w6   0/1     ContainerCreating   0          1s
nginx-deployment-84cd76b964-78wnp   0/1     ContainerCreating   0          1s
Okay. Now for the real inception action. Let’s create a vcluster inside of our vcluster.
$ vcluster create vc-level-3 -n level3
[info]   Creating namespace level3
[info]   execute command: helm upgrade vc-level-3 vcluster --repo https://charts.loft.sh --kubeconfig /var/folders/gy/d3_c4t1x731_hl8qtrfkhr_h0000gn/T/088217689 --namespace level3 --install --repository-config='' --values /var/folders/gy/d3_c4t1x731_hl8qtrfkhr_h0000gn/T/968839140
[done] √ Successfully created virtual cluster vc-level-3 in namespace level3.
Use 'vcluster connect vc-level-3 --namespace level3' to access the virtual cluster
And then connect to it. We have to use the level2 kubeconfig and specify a different local port for port forwarding.
$ vcluster connect vc-level-3 -n level3 --local-port=8444
[info]   Waiting for vCluster to come up...
[done] √ Virtual cluster kube config written to: ./kubeconfig.yaml. You can access the cluster via `kubectl --kubeconfig ./kubeconfig.yaml get namespaces`
[info]   Starting port forwarding: kubectl port-forward --namespace level3 vc-level-3-0 8444:8443
Forwarding from 127.0.0.1:8444 -> 8443
Forwarding from [::1]:8444 -> 8443
We’ll open one more tab for level three and use its kubeconfig. Let’s make an Nginx deployment here with three replicas.
$ export KUBECONFIG=./kubeconfig.yaml
$ kubectl create deployment nginx-deployment -n default --image=nginx --replicas=3
deployment.apps/nginx-deployment created
$ kubectl get pods
NAME                                READY   STATUS              RESTARTS   AGE
nginx-deployment-84cd76b964-czlpz   0/1     ContainerCreating   0          3s
nginx-deployment-84cd76b964-l292w   0/1     ContainerCreating   0          3s
nginx-deployment-84cd76b964-ph79t   0/1     ContainerCreating   0          3s
And we only see the three Nginx pods. Let’s take a look at our host cluster again. We’ll switch to that kubeconfig.
$ export KUBECONFIG=~/.kube/config
$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                                              READY   STATUS    RESTARTS   AGE
default       nginx-deployment-84cd76b964-64tbh                                 1/1     Running   0          24m
kube-system   coredns-f9fd979d6-mlctd                                           1/1     Running   0          77m
kube-system   coredns-f9fd979d6-pgnh8                                           1/1     Running   0          77m
kube-system   etcd-docker-desktop                                               1/1     Running   0          76m
kube-system   kube-apiserver-docker-desktop                                     1/1     Running   0          76m
kube-system   kube-controller-manager-docker-desktop                            1/1     Running   0          76m
kube-system   kube-proxy-j42rb                                                  1/1     Running   0          77m
kube-system   kube-scheduler-docker-desktop                                     1/1     Running   0          76m
kube-system   storage-provisioner                                               1/1     Running   0          77m
kube-system   vpnkit-controller                                                 1/1     Running   0          77m
level2        coredns-66c464876b-bg95z-x-kube-system-x-vc-level-3--e33a70f289   1/1     Running   0          8m56s
level2        coredns-66c464876b-gl66g-x-kube-system-x-vc-level-2               1/1     Running   0          24m
level2        nginx-deployment-84cd76b964-78wnp-x-default-x-vc-level-2          1/1     Running   0          9m17s
level2        nginx-deployment-84cd76b964-czlpz-x-default-x-vc-lev-af1154c6f7   1/1     Running   0          9s
level2        nginx-deployment-84cd76b964-kp4w6-x-default-x-vc-level-2          1/1     Running   0          9m17s
level2        nginx-deployment-84cd76b964-l292w-x-default-x-vc-lev-91ce9ee9e0   1/1     Running   0          9s
level2        nginx-deployment-84cd76b964-ph79t-x-default-x-vc-lev-5cdee9fab0   1/1     Running   0          9s
level2        vc-level-2-0                                                      2/2     Running   0          24m
level2        vc-level-3-0-x-level3-x-vc-level-2                                2/2     Running   0          9m9s
Here we see the Nginx pods from our host cluster and from both virtual clusters. How does this all work? Each vcluster runs its own API server, but it doesn’t have a scheduler. Pods created in a vcluster are synced down to the host cluster’s namespace, where they are actually scheduled and run. You can spot the synced pods by their names: each one carries a suffix encoding its original namespace and vcluster, like nginx-deployment-84cd76b964-kp4w6-x-default-x-vc-level-2. The level-three pods were synced into level two, which in turn synced them to the host, so everything ultimately runs in the level2 namespace of the host cluster.
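The naming pattern in that output can be decoded with a bit of shell. This is just a sketch based on the names we observed; the real translation is internal to vcluster’s syncer, and names that would exceed Kubernetes length limits get truncated and suffixed with a hash (like the vc-lev-af1154c6f7 entries above), which this simple version doesn’t handle:

```shell
# Decode a synced pod name of the form <pod>-x-<namespace>-x-<vcluster>
# as seen in the host cluster. Assumes the short (untruncated) form.
host_name="nginx-deployment-84cd76b964-kp4w6-x-default-x-vc-level-2"

vcluster_name="${host_name##*-x-}"        # text after the last -x-  -> vc-level-2
rest="${host_name%-x-$vcluster_name}"     # strip the vcluster suffix
vcluster_ns="${rest##*-x-}"               # -> default
pod_name="${rest%-x-$vcluster_ns}"        # -> nginx-deployment-84cd76b964-kp4w6

echo "$pod_name in namespace $vcluster_ns of vcluster $vcluster_name"
```

Running it prints `nginx-deployment-84cd76b964-kp4w6 in namespace default of vcluster vc-level-2`, which matches the pod we created at level two.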
And that’s a quick look at vcluster inception. You might not have a use case for nested vclusters, but if you do, here’s one idea: assign each developer at your company a namespace with a vcluster running in it, and allow them to create additional vclusters inside of it. Either way, it’s great to know that to the person using a virtual cluster, it looks just like a real Kubernetes cluster.