Virtual Clusters For Kubernetes with Loft v0.3

We’re proud to announce that loft is the first cloud-native technology that ships a stable implementation of virtualized Kubernetes clusters. With loft v0.3, users can create virtual Kubernetes clusters (vClusters) in a matter of seconds, either with a click in the UI, a single command with the loft CLI, or the vCluster CRD (see the last section for examples).

#What is loft?

Loft lets admins set up a self-service system for namespaces in their Kubernetes clusters. While working with a variety of companies and helping them bring self-service Kubernetes namespaces to their engineering teams, we learned a lot about the tension between IT teams and engineering teams: while sysadmins are mostly concerned with the stability and security of their systems, engineers strive for velocity and demand the freedom to do whatever they need for their work - including installing CRDs and custom RBAC rules when installing any one of the 1,000+ Helm charts out there that require this kind of cluster-wide access.

v0.3 of loft now solves this issue with virtual clusters. Here’s how:

#What are Virtual Clusters?

You can imagine a virtual Kubernetes cluster as a fully functional Kubernetes cluster that runs on top of another cluster - contained in a single namespace of the underlying host cluster. A virtual Kubernetes cluster can be used just like any other Kubernetes cluster with kubectl, helm, or any other client tool that has access to a valid kube-context for this virtual cluster.

The virtual cluster roughly consists of two things:

  • a control plane (i.e. an API server, a database such as etcd, and a controller-manager)
  • a syncer, which actually starts and manages the containers in the underlying host cluster

While the virtual cluster uses the underlying host cluster to actually run the containers for its pods, the virtual cluster has its own API server which is used to create and manage all Kubernetes resources such as pods, deployments, services, cluster roles, and even namespaces.

That means when you create a deployment in your virtual Kubernetes cluster, this deployment will not be sent to the underlying host cluster, and it will also not be stored in the etcd storage of the underlying host cluster. Instead, the resource will be created via the separate API server of the virtual cluster and stored in that cluster's own database (which may be etcd or even a simple SQLite or MySQL database). To actually run the containers for this deployment, the syncer component of the virtual cluster contacts the underlying host cluster, creates the pods there, and wires everything up so that networking etc. works as expected.
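To make this concrete, here is a minimal sketch (all names are illustrative): a completely ordinary Deployment manifest applied against the vcluster's kube-context. Nothing about the manifest is vcluster-specific - that is the point.

```yaml
# Applied against the virtual cluster's kube-context, e.g.
#   kubectl --context my-vcluster-1 apply -f deployment.yaml
# (context name is illustrative). The Deployment object itself is stored
# only in the vcluster's own database; the syncer then creates the
# resulting pods inside the host cluster namespace that contains the vcluster.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.19
```

Running `kubectl get deployments` against the host cluster would not show this Deployment - only the synced pods appear there, inside the vcluster's namespace.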

Similar to VMs, virtual clusters should aim for the following goals:

  • Tenants should be restricted to their virtual cluster and not be able to access the underlying host.
  • Tenants should not be aware of each other when running on the same host.
  • Virtual Clusters should be fast to spin up and easy to destroy without leaving anything behind on the underlying host cluster.

If you want to learn more about the virtual cluster implementation in loft, take a look at the loft documentation for virtual clusters.

#Why Virtual Clusters?

TL;DR: Engineers need to be able to install and manage their own CRDs, RBAC etc. Restricting users to namespaces does not allow them to do this - but virtual clusters can fix this.

#Namespaces are not enough

Remember the 90s? Web hosts offered webspaces on shared Linux machines where users could run their PHP apps as long as they were compatible with whatever PHP version the host had installed for all users on the same machine. We’re doing pretty much the same thing with Kubernetes right now: we’re handing out namespaces to engineering teams and hoping they won’t complain about RBAC or whatever CRD is missing.

Namespaces are the webspaces of Kubernetes: they are the core building block for isolating tenants and their workloads, but they are insufficient for today’s high-velocity engineering. If teams are given access to Kubernetes namespaces, their access is typically restricted via RBAC. As a result, users in a multi-tenant cluster are typically confined to their namespaces and cannot modify any cluster-wide objects. That means with namespaces, engineers:

  • cannot install CRDs
  • cannot work with cluster-wide CRDs
  • cannot modify RBAC rules
  • cannot even list their own namespaces most of the time
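The restrictions above follow directly from how RBAC scoping works. Here is an illustrative namespace-scoped Role of the kind tenants typically receive (all names are hypothetical):

```yaml
# A typical Role handed to a tenant team (illustrative names).
# A Role is namespace-scoped: it can only grant access to namespaced
# resources inside "team-a". CRDs, ClusterRoles, and namespaces themselves
# are cluster-scoped, so no Role - however permissive - can grant them.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: team-dev
  namespace: team-a
rules:
  - apiGroups: ["", "apps"]
    resources: ["pods", "deployments", "services", "configmaps"]
    verbs: ["get", "list", "watch", "create", "update", "delete"]
```

Granting CRD installation would require a ClusterRole and ClusterRoleBinding - i.e. cluster-wide permissions, which is exactly what admins want to avoid handing out.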

While all of these things are annoying for engineering teams, the first one in this list is particularly problematic because CRDs are becoming increasingly important in the Kubernetes ecosystem. Whether a team wants to evaluate Kubeflow, start working with knative, or use any of the many other projects that heavily rely on CRDs, engineering teams need to be able to install CRDs. Without the capability to manage their own CRDs or set up their own RBAC rules, engineers will not be able to install 90% of the Helm charts on Helm Hub - and that’s a problem. If you want to be competitive these days, you need to move at a fast pace. So, how can we fix this?

#vClusters are more powerful than namespaces

We could move fast and break things as Zuckerberg suggested. However, I think we can all agree that this did not work out well in a variety of cases. Just handing out admin access to the cluster and opening up our strict RBAC settings so engineers can install CRDs and RBAC rules themselves is a recipe for disaster.

How about moving things without breaking anything? Well, that’s essentially what we’re striving to make possible with virtual Kubernetes clusters.

Just like virtual servers and modern cloud VMs allowed us to host multiple operating systems on the same physical server hardware while avoiding all the issues of webspaces, virtual Kubernetes clusters will allow us to host different engineering teams or even production workloads alongside each other on the same Kubernetes cluster - without having to restrict their capabilities.

#How to get started?

If you want to try vClusters yourself, here is how to do it:

#1. Install Loft

Follow the instructions in the loft docs to install loft on any Kubernetes cluster.

#2. Create Virtual Clusters

Option A - Via CLI

loft create vcluster my-vcluster-1

Option B - Via UI

Create a vCluster with a single click in the loft UI.

Option C - Via CRD

Take a look at the loft docs to learn more about using the loft CRDs for virtual clusters.
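As a rough, hypothetical sketch of what declaring a vCluster as a custom resource could look like - the apiVersion, kind, and fields shown here are assumptions, so consult the loft docs for the actual schema:

```yaml
# Hypothetical sketch only - the real CRD schema is defined in the loft docs.
apiVersion: storage.loft.sh/v1
kind: VirtualCluster
metadata:
  name: my-vcluster-1
  namespace: vcluster-my-vcluster-1
spec: {}  # defaults; the docs describe the available spec fields
```

The advantage of the CRD approach is that vClusters can be managed with the same GitOps tooling as any other Kubernetes object.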


#What’s next?

Now that anyone can create virtual Kubernetes clusters within seconds, we are working on supporting some advanced use cases for virtual clusters including:

  • snapshot and restore virtual clusters
  • migration of virtual clusters between different Kubernetes clusters (e.g. from GKE to EKS)
  • virtual clusters that span multiple host clusters

#Get in touch!

If you’re interested in using virtual clusters, download loft and get started now. Help us shape the roadmap and reach out if you run into any issues or have any feature requests for virtual clusters. We want to make virtual clusters the new standard for Kubernetes multi-tenancy, and we plan to support as many use cases as possible. So, reach out and let us know how you plan to use virtual clusters:

  • Follow @loft_sh on Twitter
  • Open issues in the loft GitHub repo
  • Email me via lg@loft.sh (Lukas)