Introduction to Virtual Clusters in Kubernetes

Fabian Kramm

With the increasing adoption of Kubernetes within organizations, the need for Kubernetes access for applications and engineers is growing as well. Since it is neither feasible nor cost-efficient to always provide entire physical Kubernetes clusters, virtualization for Kubernetes is the obvious solution. In this article, I will describe an implementation of such Kubernetes virtualization: virtual clusters. I will also explain how virtual Kubernetes clusters work, how they can be used, and why they are a real alternative to current approaches for Kubernetes access.

We have a hands-on tutorial for Kubernetes virtual clusters if you are looking to get started.


Current Approaches For Kubernetes Access

Namespaces

The idea of virtualization within Kubernetes is not new: In the official Kubernetes documentation, namespaces are described as "virtual clusters" backed by the same physical cluster that provide a shared scope for related Kubernetes objects.
With Kubernetes namespaces, it is possible to create separate environments for multiple applications and users within the same Kubernetes cluster.
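
As a minimal sketch of this approach, the commands below create a namespace and give a user edit rights that are limited to it; the user and namespace names are made up for illustration:

# Create an isolated environment for one team
kubectl create namespace team-a

# Grant a (hypothetical) user edit rights only within that namespace,
# using the built-in "edit" ClusterRole scoped by a RoleBinding
kubectl create rolebinding team-a-edit \
  --clusterrole=edit \
  --user=jane@example.com \
  --namespace=team-a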

However, namespaces have some limitations: They cannot contain cluster-scoped resources.
While this may seem obvious, many applications actually need to create or at least access cluster-scoped resources such as nodes, cluster roles, persistent volumes, and storage classes.
As soon as this happens, the application breaks out of its virtual namespace boundary and can no longer be properly isolated from other applications.

The problems go even further if applications need to create their own custom resource definitions (CRDs) or extend the API server via an APIService.
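
You can see which resources fall outside this boundary directly with kubectl; the second command below uses a placeholder manifest (my-crd.yaml) purely for illustration:

# List all resource types that are NOT namespaced (nodes, clusterroles,
# persistentvolumes, storageclasses, customresourcedefinitions, ...)
kubectl api-resources --namespaced=false

# Creating a cluster-scoped resource "inside" a namespace has no effect:
# the namespace is ignored for CRDs, ClusterRoles, StorageClasses, etc.
kubectl apply --namespace team-a -f my-crd.yaml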

Many Small Clusters

To solve these issues and create securely isolated ephemeral environments for applications during testing and development, the pattern of spinning up small, throw-away Kubernetes clusters has emerged.

This approach solves the problems of cluster-scoped resources and isolation, but it is very cost-inefficient and negates one of the key advantages of Kubernetes itself: being an orchestration system.
Imagine the cost of a single cluster running 1000 containers versus 1000 Kubernetes clusters running a single container each:

  • Each cluster has at least an additional API server, controller manager and etcd.
  • Each cluster needs at least one kubelet with kube-proxy, networking and container runtime.
  • Spinning up a new cluster takes way more time than starting a new container.

That is a lot of overhead, which can also result in a significant rise in your infrastructure bill (not to mention cluster management fees, such as those charged by AWS and Google Cloud).

    Another solution is to extend namespaces and to virtualize Kubernetes itself.

    How Do Virtual Kubernetes Clusters Work?

    The idea of virtualizing a Kubernetes cluster is similar to virtualizing a physical machine: The host system is used for actual computing, while everything else is emulated.

    Existing Solutions for Virtual Clusters

    There are already different implementations of the virtual cluster pattern in Kubernetes:

  • loft vClusters: A closed source (but free for up to 3 users) implementation of virtual clusters.
  • Multi-Tenancy SIG Virtual Cluster: An unreleased implementation of a virtual cluster operator for Kubernetes v1.16.
  • k3v: A Proof of Concept of how virtual clusters could work with k3s.
Namespaces are sometimes an acceptable choice for running multiple environments within a cluster: They provide limited user permissions, usage quotas, and some isolation of resources within Kubernetes.

Virtual clusters improve on namespaces in many ways by providing stronger isolation for a more stable, resilient, and flexible Kubernetes environment. They can be used for both development and production workloads and help you save money compared to running many separate clusters.

You can learn more in our comparison of Kubernetes Namespaces vs. Virtual Clusters.

    This article will mostly talk about the implementation of virtual Kubernetes clusters (vClusters) with loft.

    Technical Implementation of Virtual Clusters

    The basic idea of a virtual cluster is to spin up a new Kubernetes cluster within an existing cluster and sync certain core resources between those two clusters.

The host cluster runs the virtual cluster's actual pods and needs to be a fully working Kubernetes cluster. The virtual cluster itself only consists of the core Kubernetes components: API server, controller manager and etcd.

To reduce the overhead of virtual clusters, loft builds its vClusters on k3s, a fully working, lightweight Kubernetes distribution that compiles the Kubernetes components into a single binary and disables all unnecessary Kubernetes features, such as the pod scheduler.
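
To get a feel for what such a stripped-down control plane looks like, here is a purely illustrative k3s invocation that disables components a virtual cluster does not need; it is not loft's actual vCluster configuration, and the exact flags loft uses may differ:

# Illustrative only: start a minimal k3s control plane and turn off
# components the host cluster already provides (not loft's actual setup)
k3s server \
  --disable-scheduler \
  --disable-cloud-controller \
  --disable-network-policy \
  --disable traefik,servicelb,metrics-server,local-storage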

    Besides k3s, there is a Kubernetes hypervisor that emulates a fully working Kubernetes setup in the virtual cluster. This component syncs certain virtual cluster resources to the host cluster and back:

  • Pods: All pods that are started in the virtual cluster are mutated and started in the namespace of the virtual cluster in the host cluster. Service account tokens, environment variables, DNS and other configurations are rewritten to point to the virtual cluster instead of the host cluster. From within the pod, it therefore appears as if the pod was started in the virtual cluster rather than the host cluster.
  • Services: All services and endpoints are mutated and created in the namespace of the virtual cluster in the host cluster. The virtual and host cluster share the same service cluster IPs.
  • PersistentVolumeClaims: If persistent volume claims are created in the virtual cluster, they will be mutated and created in the namespace of the virtual cluster in the host cluster. If they are bound in the host cluster, the corresponding persistent volume will be synced back to the virtual cluster.
  • Others: Other resources such as configmaps, secrets, nodes, persistent volumes and storage classes are also synced between the clusters to ensure that pods function correctly.

Besides the synchronization of virtual and host cluster resources, the hypervisor also redirects certain Kubernetes API requests to the host cluster, such as port forwarding or pod/service proxying. It essentially acts as a reverse proxy for the virtual cluster.
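
As a rough illustration of what this synchronization looks like in practice (the kube-context names, namespaces, and renaming behavior below are assumptions for the sake of the example, not loft's documented output):

# Inside the virtual cluster: create a namespace and an ordinary pod
kubectl --context my-vcluster create namespace demo
kubectl --context my-vcluster --namespace demo run nginx --image=nginx

# Inside the virtual cluster, the pod appears under its own namespace
kubectl --context my-vcluster --namespace demo get pods

# On the host cluster, the synced pod shows up in the single namespace
# backing the virtual cluster (assumed here to be "vcluster-test"),
# typically under a rewritten name that encodes its virtual namespace
kubectl --context host-cluster --namespace vcluster-test get pods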

In the host cluster, all resources created by a virtual cluster are encapsulated in a single namespace (it is also possible to run multiple virtual clusters within a single namespace), which allows cluster admins to restrict the resources of a virtual cluster via resource quotas (see the example after the list below).
With this architecture, virtual clusters improve isolation:

  • Only certain namespaced resources are synced and available in the host cluster (such as Pods, Services, ConfigMaps etc.).
  • Users and pods that need to communicate with the virtual cluster (such as operators) now communicate only with the virtual Kubernetes API server instead of the host Kubernetes API server.
  • Pods cannot directly access host cluster resources.
  • Virtual cluster resources don't pollute the host cluster etcd.
  • Since the virtual cluster is a working Kubernetes cluster itself, it is even possible to install virtual clusters within virtual clusters.
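
For example, a cluster admin could cap what a virtual cluster may consume by applying a standard ResourceQuota to the virtual cluster's namespace on the host cluster; the namespace name and limits below are made up for illustration:

# Limit everything a virtual cluster can create in its host namespace
kubectl apply -f - <<EOF
apiVersion: v1
kind: ResourceQuota
metadata:
  name: vcluster-quota
  namespace: vcluster-test
spec:
  hard:
    pods: "50"
    requests.cpu: "10"
    requests.memory: 20Gi
    persistentvolumeclaims: "10"
EOF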

    How to Create vClusters in loft

    To test how virtual clusters work and if they could fit your use case, you can use loft, a multi-tenancy manager for Kubernetes.
Virtual clusters should work in most Kubernetes clusters of version v1.14 or above that support persistent volume claims.
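
A quick way to check both prerequisites (cluster version and persistent volume support) on your current cluster:

# Check the Kubernetes server version (should be v1.14 or above)
kubectl version

# Check that at least one storage class exists so that persistent
# volume claims can be provisioned
kubectl get storageclasses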

    You can set up loft for free on your local Kubernetes cluster or in any cloud provider via the following helm commands (see the official docs for more information):

    # Install ingress controller in the cluster
    helm install nginx-ingress nginx-ingress --repo https://kubernetes-charts.storage.googleapis.com \
      --namespace nginx-ingress \
      --set-string controller.config.hsts=false \
      --create-namespace \
      --wait
    
    # Install loft with self signed certificate
    # Change loft.localhost to your desired url
    # and make sure the URL points to the ingress
    # controller LoadBalancer external ip
    helm install loft loft --repo https://charts.devspace.sh/ \
      --namespace loft \
      --create-namespace \
      --set admin.username=admin \
      --set admin.password=admin \
      --set certIssuer.create=false \
      --set ingress.host=loft.localhost \
      --set ingress.tls.secret=loft-cert \
      --set cluster.connect.local=true \
      --wait
    

    Wait until loft is running and make sure you install the loft CLI and log in to your loft instance:

    # Login in the UI with admin:admin
    loft login https://loft.localhost --insecure
    

    You can now create a new virtual cluster via the loft CLI in any namespace you like:

    # if the namespace does not exist, loft will create it for you
    loft create vcluster test --space mynamespace
    

If the command completes successfully, loft will also switch your current local kube-context to the virtual cluster, which you can verify via kubectl:

    kubectl get namespaces
    NAME              STATUS   AGE
    default           Active   37s
    kube-system       Active   37s
    kube-public       Active   37s
    kube-node-lease   Active   37s
    

    You can now use the vCluster in the same way as any other Kubernetes cluster.
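
For example, you can deploy and expose a test workload inside the virtual cluster with standard kubectl commands; nothing below is vCluster-specific:

# Deploy and expose a test application inside the virtual cluster
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80

# Per the sync described above, the Deployment object stays in the
# virtual cluster, while its pods and the service are synced to the host
kubectl get deployments,pods,services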

    For more information about how to manage vClusters in loft, make sure to check out the official loft documentation.

    Advantages and Limitations of Virtual Clusters

    We think virtual clusters are an interesting new technology that can drastically reduce cost and effort for several use cases, such as ephemeral environments.
    Compared to the approach of creating many small independent clusters, virtual clusters have multiple advantages:

  • Less cluster boilerplate (one k3s pod in a shared host cluster vs a complete standalone Kubernetes cluster)
  • Easier to manage (helm deploy/delete vs custom terraform scripts)
  • Less startup and teardown time (seconds vs minutes)

While virtual clusters seem promising, they also have some limitations that should be taken into consideration:

  • Not all Kubernetes features work in virtual clusters (e.g. virtual storage classes, virtual container runtimes, network plugins etc.).
  • Isolation between standalone clusters is obviously still stronger than between virtual clusters that share the same host cluster.

For a more detailed analysis of the benefits and use cases of virtual clusters, take a look at this article.

    Conclusion

Virtual clusters have the potential to become an important component in the Kubernetes ecosystem. Being more cost-effective and easier to manage than many small clusters, while at the same time being better isolated than namespaces, makes virtual clusters a superior solution for many use cases. Examples of this are scenarios in which engineers require access to Kubernetes, such as testing, experimentation, or cloud-native development. Virtual clusters could thus help foster the adoption of Kubernetes in many organizations.
