vcluster is one of those projects that I loved from day 1. Creating Virtual Clusters inside your Kubernetes Cluster makes a lot of sense when Kubernetes namespace isolation is not enough or when creating a full-blown Kubernetes Cluster in a cloud provider is too expensive. In a world where cost optimization is becoming increasingly important, vcluster offers an option (with its trade-offs) to enable multi-tenant scenarios. Each tenant will have its own Kubernetes API Server, completely isolated from others.
In this short blog post, I wanted to take things to the next step. While providing isolation is great, it comes with an extra cost. We all know that having a Kubernetes Cluster doesn’t take us too far: we will need to install components into these clusters to support our applications. Unless you are entirely relying on Cloud Provider services (which are also expensive), you will now own a fleet of vclusters. You will need to manage each of them, and if all your tenants have similar requirements, install into each of these vclusters everything your tenants need. There is a big overhead associated with installing one or ten components for each vcluster.
Check the step-by-step tutorial for this blog post here: https://github.com/salaboy/from-monolith-to-k8s/tree/main/platform/vcluster-dapr
First, if all your tenants need the same functionality, you will end up with the same components doing the same things in different vclusters. While this is OK for full isolation, it is not for cost optimization.
Let’s take a look at a great feature that vcluster offers to mitigate scenarios like the one that I’ve described using a CNCF project called Dapr (Distributed Application Runtime).
You can use Dapr to build reliable distributed applications, as Dapr Components offer developers access to databases, message brokers, and workflows, among others, by relying on local HTTP and gRPC interfaces. Dapr abstracts away where these components are so developers can focus on writing features using their programming language of choice.
Dapr, like many other CNCF projects, can be installed using a Helm Chart into your Kubernetes Clusters. Once you have installed Dapr, you can configure Dapr Components for your applications. Because we are using Kubernetes, these components are Kubernetes Resources we can create using YAML files.
Once Dapr is installed, we can, for example, create a Dapr Statestore Component to connect to a key-value store like Redis. The Redis instance can be inside or outside the cluster, it doesn’t matter.
To do this, we define a Dapr Component using YAML (1) and apply it to our Kubernetes Cluster, where Dapr is installed. The statestore component connects to the Redis instance (2), allowing our applications to interact with the component APIs (3) using local HTTP/gRPC calls. This removes the need for the application code to connect to Redis using libraries or drivers. It also removes the need to know the credentials to connect to the instance, as the component handles that.
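As a sketch, such a statestore Component for Redis could look like this (the component name, Redis host, and secret names are assumptions for illustration; the secret shown matches what the Bitnami Redis chart typically creates):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.redis   # use the Redis state store building block
  version: v1
  metadata:
    - name: redisHost
      value: redis-master.default.svc.cluster.local:6379  # assumed in-cluster Redis service
    - name: redisPassword
      secretKeyRef:        # credentials stay in a Kubernetes Secret, not in app code
        name: redis
        key: redis-password
```

Applications then address the component only by its name (`statestore`); the connection details and credentials stay in the Component definition.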
To drill down into the Kubernetes specifics: the Dapr control plane works by injecting a sidecar (`daprd`) into your application workloads. That’s why your application can make local calls to the statestore component.
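Sidecar injection is opt-in, driven by annotations on the workload. A minimal sketch (the app name, image, and port are assumptions):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
      annotations:
        dapr.io/enabled: "true"   # ask the Dapr control plane to inject the daprd sidecar
        dapr.io/app-id: "my-app"  # the Dapr application id
        dapr.io/app-port: "8080"  # the port the application listens on
    spec:
      containers:
        - name: my-app
          image: my-registry/my-app:latest  # hypothetical image
          ports:
            - containerPort: 8080
```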
If sidecars are not your thing, don’t worry, Dapr can also work using different deployment strategies (check https://github.com/dapr-sandbox/dapr-ambient for more info).
Now imagine that you have 10 tenants (these can be teams, customers, development environments, etc.), and for each of them, you want to make sure that they have Dapr Components available to do their work.
Creating separate clusters for each tenant is expensive, not only because of the Kubernetes Control Planes, but also because you now need to run Dapr in each cluster. Depending on the level of data isolation needed, you might also need separate Redis instances for each tenant, but that is a completely different topic.
So how can vcluster help, and how would it work for Dapr?
# vcluster + Dapr
With vcluster we can provide each tenant with a separate Kubernetes API server. This will prevent us from creating costly Kubernetes Control Planes. Just using vcluster alone in our previous example would look like this:
This works, and we are saving on costs already, because we just need a single Kubernetes Control Plane to host multiple vclusters. Now your tenants can access the vcluster API server (which, for them, is a fully compliant Kubernetes API Server) to deploy and manage their workloads. But as you can see in the diagram, in each vcluster we need to install Dapr; hence, in this setup, we will be running four Dapr control planes doing the same thing, and that sounds like a waste of resources.
Welcome to the vcluster generic syncer (https://www.vcluster.com/docs/architecture/synced-resources#generic-sync)!
# vcluster + generic syncer + Dapr
vcluster offers a nice way to sync custom resources between the Host Kubernetes Cluster and the vcluster via a simple YAML configuration. The generic syncer is configured per vcluster at creation time, and you can define which resources are imported (from the host to the vcluster) or exported (from the vcluster to the host). This mechanism allows us to use, from within a vcluster, controllers (control planes) installed only on the Host Kubernetes Cluster.
You can check this out on your own laptop by following this step-by-step tutorial: https://github.com/salaboy/from-monolith-to-k8s/tree/main/platform/vcluster-dapr.
We will create Dapr-enabled vclusters that enable users to create and configure Dapr Components to connect to shared infrastructure.
To do this, we use a local KinD cluster (but it would work in any Kubernetes distribution) and install Dapr on it. Once Dapr is installed we can proceed to create a Redis instance, by using the Bitnami helm chart:
helm install redis bitnami/redis --set architecture=standalone
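For completeness, Dapr itself can be installed on the host cluster with its official Helm chart. A sketch (pin the chart version you actually want):

```shell
# add the official Dapr Helm repo and install the control plane on the host cluster
helm repo add dapr https://dapr.github.io/helm-charts/
helm repo update
helm install dapr dapr/dapr --namespace dapr-system --create-namespace --wait
```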
For this example and to keep things simple, we use a single Redis instance installed on the Host Cluster.
Then we are ready to start creating our vclusters and we do that by running the following command:
vcluster create --chart-version 0.15.0-alpha.0 dapr-enabled -f values.yaml
This creates and connects to a vcluster called `dapr-enabled`, where the `values.yaml` file looks like this:
```yaml
fallbackHostDns: true #A
multiNamespaceMode: #B
  enabled: true
sync:
  generic:
    clusterRole:
      extraRules: #C
        - apiGroups: ["apiextensions.k8s.io"]
          resources: ["customresourcedefinitions"]
          verbs: ["get", "list", "watch"]
        - apiGroups: ["dapr.io"]
          resources: ["components", "subscriptions", "configurations"]
          verbs: ["create", "delete", "patch", "update", "get", "list", "watch"]
    config: |-
      version: v1beta1
      export: #D
        - kind: Component
          apiVersion: dapr.io/v1alpha1
      import: #E
        - kind: Pod
          apiVersion: v1
          replaceOnConflict: true
```
- #A This option allows vcluster’s CoreDNS to fall back and do lookups in the host cluster DNS in case there are no matches in the vcluster DNS
- #B This is a feature in vcluster that maps each namespace in the vcluster to a separate namespace in the host cluster. This allows more enhanced syncing and backsyncing of CRDs/resources and thus reusing of tools/applications on host cluster across vclusters
- #C An extra list of rules that can be applied to allow vcluster to sync certain CRDs/resources by reusing the existing installation from the host cluster
- #D A list of CRDs/resources that will be synced from inside the vcluster to the host cluster
- #E A list of CRDs/resources that will be synced from the host cluster to the vcluster
Once you are connected to the virtual cluster you can start creating Dapr Components, as these will be synced back to the host and the Dapr control plane will know what to do with these resources.
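For example, while connected to the vcluster you can apply a Component and then, from the host cluster context, verify it was synced (the file name and kubectl contexts are assumptions):

```shell
# inside the vcluster context: create the Dapr Component
kubectl apply -f statestore.yaml

# back on the host cluster context: the Component shows up in the
# namespace that vcluster mapped for this tenant
kubectl get components.dapr.io --all-namespaces
```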
The step-by-step tutorial then creates a new statestore component in the vcluster and deploys two apps: one that reads from the statestore and one that writes data to it.
You can interact with both applications to check that they share the same statestore.
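Under the hood, each app talks to its local `daprd` sidecar over the Dapr state API. Port-forwarding to a sidecar, the interaction can be sketched as (the key and value are assumptions; `statestore` is the component name defined earlier):

```shell
# write a key/value pair through the statestore component
curl -X POST http://localhost:3500/v1.0/state/statestore \
  -H "Content-Type: application/json" \
  -d '[{"key": "greeting", "value": "hello from tenant-a"}]'

# read it back; from the other app's sidecar this resolves to the same Redis
curl http://localhost:3500/v1.0/state/statestore/greeting
```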
You can now create as many vclusters as your host Kubernetes cluster allows. If you use the same configuration (`values.yaml` file), all your vclusters will reuse the Dapr control plane installed in the host cluster, without the need to install Dapr inside each vcluster.
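Creating a fleet of Dapr-enabled vclusters then becomes a short loop (the tenant names are assumptions):

```shell
# create one Dapr-enabled vcluster per tenant, all sharing the host's Dapr control plane
for tenant in tenant-a tenant-b tenant-c; do
  vcluster create "$tenant" --chart-version 0.15.0-alpha.0 -f values.yaml --connect=false
done
```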
Now we are saving big: not only on Kubernetes Control Planes, but also on running other components, such as the Dapr control plane, inside each vcluster.
There are other examples where this is extremely useful and already tested, such as Knative and Cert Manager, so I will be working on examples that use the power of the vcluster generic syncer to reuse shared components that are installed only in our host clusters.
Stay tuned for more, and check out the latest Dapr 1.10 release, which contains many new features and improvements, such as Workflow, Pluggable Component SDKs, and Multi-App Run. Read all about them in this blog post. Check out the GitHub repository for more examples, and feel free to reach out with questions via Twitter @Salaboy or my blog https://www.salaboy.com. You are welcome to join the Dapr Discord and the vcluster/loft Slack (slack.loft.sh) to share your experience with both Dapr and vcluster.