Comparing Multi-tenancy Options in Kubernetes

Daniele Polencic

Balancing isolation, management ease, and cost is critical in multi-tenant Kubernetes setups. In this article, we’ll explore how to evaluate these factors to optimize resource utilization and tenant isolation. 

A key question when planning infrastructure is: how many Kubernetes clusters do you need? Your choice of multi-tenancy model helps decide whether to use one large shared cluster or several smaller, single-tenant ones.

Soft multi-tenancy works well when tenants are trusted, but it poses risks with third-party code, potentially leading to security breaches. Kubernetes offers mechanisms to manage tenant isolation effectively.

We'll examine three multi-tenancy options with stronger isolation and their trade-offs: the Hierarchical Namespace Controller (HNC), vCluster, and Karmada.

Main points

  • Kubernetes offers several multi-tenancy options, including HNC, vCluster, and Karmada, each providing different levels of isolation and management complexity.
  • While HNC is cost-effective and leverages namespace nesting, it struggles to manage global resources effectively for multiple tenants.
  • vCluster provides dedicated control planes for tenants, offering better isolation but increasing complexity in resource management.
  • Karmada ensures the highest isolation level  by assigning a dedicated cluster per tenant, though it comes with higher operational and management costs.

Kubernetes namespaces

You may be wondering why Kubernetes namespaces aren’t on that list. Namespaces are a fundamental building block for grouping resources in Kubernetes, but they weren’t explicitly designed for multi-tenancy.

The limitation becomes apparent when you try to implement a multi-tenant architecture with namespaces alone. For example, when setting up a shared Kubernetes platform for two teams, each working on their own project, namespaces force you to duplicate resources such as network policies and resource quotas. Policies like ResourceQuotas and NetworkPolicies apply at the namespace level, not to the tenant managing multiple projects simultaneously.

Simply put:

  • Both namespaces will have similar network policies.
  • They might have the same ResourceQuotas and LimitRanges.
  • RBAC is also duplicated.
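To illustrate the duplication, here is a sketch of the same ResourceQuota applied to two namespaces (the namespace names and quota values are made up for this example):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota
  namespace: team-a-project-1
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota          # same policy, duplicated by hand
  namespace: team-a-project-2
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi

If the team’s overall budget changes, you have to remember to update every copy.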


Kubernetes doesn’t have a concept of a tenant. However, you can get close by copying resources across namespaces, which is exactly what happens when you install the Hierarchical Namespace Controller (HNC) in your cluster.

Hierarchical Namespace Controller

The Hierarchical Namespace Controller (HNC) is a solution that allows you to nest namespaces. HNC works by having child namespaces inherit resources from the parent namespace, promoting shared resource utilization and improving the multi-tenant experience. For example, creating a Role in the parent namespace makes it available in the child namespaces.

This approach, however, doesn’t completely solve the challenge of multi-tenancy because global resources, such as ClusterRoles, ClusterRoleBindings, and PersistentVolumes, are still shared across the entire cluster. This introduces complications when different tenants must manage their resources without conflict.

Let’s have a look at an example.

After installing the controller, you can create a child namespace under a parent with the following command:

$ kubectl hns create <CHILD_NAMESPACE> -n <PARENT_NAMESPACE>
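Assuming you have the hns kubectl plugin installed, you can then inspect the resulting hierarchy with:

$ kubectl hns tree <PARENT_NAMESPACE>

which prints the namespace tree rooted at the parent, with the new child nested underneath it.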

You can create a role in the parent namespace with:

$ kubectl -n <PARENT_NAMESPACE> create role test1 --verb='*' \
    --resource=pods

The role grants full access to pods in the parent namespace.

What happens when you list the roles in the child namespace?

$ kubectl get roles -n <CHILD_NAMESPACE>
NAME            CREATED AT
test1           2024-02-29T20:16:05Z

The role was propagated!

The Hierarchical Namespace Controller has no mechanism for overrides at this point, but it’s something that the maintainers are considering.
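While per-namespace overrides aren’t supported, you can control which resource types propagate at all. As a sketch (assuming a recent HNC release; by default only Roles and RoleBindings propagate), you could opt Secrets into propagation cluster-wide with:

$ kubectl hns config set-resource secrets --mode Propagate

The setting applies to the whole cluster, though, not to a single parent namespace — which is exactly the kind of granularity an override mechanism would add.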

Namespaced vs global resources

In Kubernetes, resources such as Pods and Deployments are deployed into a namespace.

However, some resources are global to the cluster, such as ClusterRoles, ClusterRoleBindings, Namespaces, PersistentVolumes, Custom Resource Definitions (CRDs), etc.

If tenants can manage PersistentVolumes, they can see all persistent volumes in the cluster, not just their own.

But it doesn’t stop there.

If you decide to install a Custom Resource Definition (CRD) for a tenant, the type definition is not namespaced (but the resource itself could be).

You will face the same issue with Validating and Mutating admission webhooks: their resources are global even if they validate or mutate only resources in a specific namespace.

Some of those issues could be mitigated with proper RBAC controls.
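For example, a minimal sketch of such a control: a Role and RoleBinding that confine a hypothetical tenant group (the names `tenant-a` and `tenant-a-admin` are made up) to its own namespace, so it never touches cluster-scoped resources:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: tenant-a-admin
  namespace: tenant-a
rules:
  - apiGroups: ["", "apps"]
    resources: ["pods", "deployments"]
    verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tenant-a-admin-binding
  namespace: tenant-a
subjects:
  - kind: Group
    name: tenant-a            # hypothetical group from your identity provider
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: tenant-a-admin
  apiGroup: rbac.authorization.k8s.io

Because it is a namespaced Role rather than a ClusterRole, the tenant cannot list PersistentVolumes, CRDs, or resources in other namespaces.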

However, there’s something else you should keep in mind.

Kubernetes has a single control plane for the entire cluster.

The entire cluster will suffer if any tenant abuses the API server or DNS.

So, how can you provide a control plane for tenants?

vCluster: a control plane per tenant

Another approach is vCluster, which gives each tenant its own control plane within a shared cluster. With vCluster, the control plane runs as a pod, so tenants can create and manage their own resources independently of each other. This addresses the isolation issues discussed above: each tenant gets a control plane of its own, which removes global resource contention.

But if each tenant runs only a control plane, where are the pods actually scheduled?

When you schedule a Deployment in the nested control plane, the resulting pod specs are copied to the host control plane, where they are assigned and deployed to actual nodes.

  • Each tenant has an entire control plane and the flexibility of a real Kubernetes cluster.
  • This control plane is only used to store resources in a database.
  • The controller can be instructed to copy only specific resources.

In other words, a careful syncing mechanism lets you selectively decide how to propagate resources from the tenant cluster.

If you are a tenant, you can experiment as much as you’d like with what feels like a real cluster.

As an admin, you can have a granular strategy to manage tenants and their resources.
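As a sketch of the workflow (the tenant name is a placeholder, and this assumes the vcluster CLI is installed), creating and connecting to a virtual cluster looks roughly like:

$ vcluster create tenant-a --namespace tenant-a
$ vcluster connect tenant-a
$ kubectl get namespaces

After connecting, kubectl talks to the tenant’s own API server, so the namespace list shows only what exists inside the virtual cluster, not the host cluster’s namespaces.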

Still, workloads deployed in vCluster end up in the same cluster.

What if you need to segregate workloads into different clusters for regulatory reasons?

A cluster for each tenant

For example, two apps cannot be on the same network.

Or one app is bound to be in a particular region for data protection.

In this case, you only have the option to have a dedicated cluster per tenant.

However, how do you manage multiple clusters at scale?

You could use Karmada to manage the tenant clusters and deploy common workloads at once across all of them.

Karmada’s architecture is similar to vCluster’s.

First, a cluster manager control plane is aware of multiple clusters.

Karmada architecture

You usually deploy it in a specially designated cluster that doesn’t run any workloads.

Then, Karmada employs an agent that receives instructions from the Karmada control plane and forwards them to the local cluster.

In the end, you have the following arrangement:

  • The Karmada control plane can schedule workloads across all clusters.
  • Each cluster can still deploy workloads independently.
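For instance, scheduling a Deployment to specific member clusters is expressed with a PropagationPolicy. The following is a sketch (the deployment and cluster names are made up):

apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: nginx-propagation
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: nginx
  placement:
    clusterAffinity:
      clusterNames:
        - tenant-a-cluster
        - tenant-b-cluster

The Karmada control plane matches the Deployment via the resource selector and propagates it only to the clusters listed in the placement.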

Of all options, this is the most expensive to maintain and operate.

Just imagine the horror of upgrading dozens of clusters to the latest version.

It’s also the solution that offers the most isolation, as every tenant has a dedicated cluster to work with.

Isolation, ease of management and costs

The three options (HNC, vCluster and Karmada) offer different trade-offs for isolation and ease of management.

And they also come with different costs.

Running extra management clusters (Karmada) or control planes (vCluster) costs money compared to a simple controller that propagates resources (HNC).

Comparing direct costs for Karmada vs vCluster vs HNC

But this barely scratches the surface.

What happens when you want to run a monitoring solution like Prometheus?

With vCluster and HNC, you can run a single Prometheus instance per cluster. With Karmada, you are probably forced to run one Prometheus instance per tenant cluster, so costs grow with the number of tenants instead of staying flat.

Take Control of Your Kubernetes Multi-Tenancy with Loft.

Now is the time to choose the right multi-tenancy option for your Kubernetes setup, tailored to your specific needs. Whether it's HNC, vCluster, or Karmada, each option offers unique benefits and challenges. The key is finding the right balance between costs, isolation, and ease of management. Whether you require fine-grained control over shared infrastructure, enhanced isolation, or streamlined tenant management, Kubernetes has a solution for you.

Ready to streamline your multi-tenant architecture? Loft is here to help. As a leading provider of multi-tenant solutions, Loft offers the expertise and tools to manage your Kubernetes clusters easily, ensuring optimal performance for multiple tenants. Explore how Loft can simplify your multi-tenant management today and take your Kubernetes infrastructure to the next level.


Frequently Asked Questions

How do you share a Kubernetes cluster for multi-tenancy?

To share a Kubernetes cluster for multi-tenancy, use namespaces and RBAC to isolate resources between tenants. You can also implement tools like vCluster to create separate control planes or HNC to nest namespaces. These methods ensure efficient resource utilization and isolation across multiple tenants.

What is multi-tenancy in Kubernetes?

Multi-tenancy in Kubernetes allows multiple tenants to share the same cluster while maintaining isolation between them. It uses multi tenant architecture to separate resources and control access through namespaces and RBAC. This setup ensures multiple tenants can securely share infrastructure without performance issues.

Can a single Kubernetes cluster be used as a tenant?

Yes, a single Kubernetes cluster can act as a tenant in a larger system. Tools like vCluster allow tenants to run isolated control planes within a shared cluster, enabling multiple tenants to operate independently while using shared resources.

What is the Kubernetes multi-tenancy working group?

The Kubernetes multi-tenancy working group develops tools and guidelines for multi-tenant systems. It focuses on improving isolation, security, and resource utilization in multi-tenant environments. Their work supports better management of multiple tenants within Kubernetes clusters.
