Multitenancy refers to the mode of operation where multiple customers (tenants) access and share the same computing resources. Despite physically sharing the same resources, the tenants are logically separated and unaware of each other. With the rise of Kubernetes and its growing adoption in different fields, the demand for multitenancy in Kubernetes has increased rapidly. Suppose you have a Kubernetes cluster that multiple tenants share. In this case, you need to implement proper multitenancy practices so that each tenant can access the resources they need but, at the same time, cannot access other tenants' information.
There are many scenarios where you might want to implement multitenancy in Kubernetes. For example:
- In a large enterprise with multiple developers and projects, it makes sense to logically separate the resources of each team and/or project.
- In a SaaS offering where each customer gets their own instance, a multitenant architecture can help by sharing resources between instances.
- In a retail chain with multiple stores, providing each store with its own resource instance makes sense.
The most obvious approach for implementing multitenancy in Kubernetes is to have separate clusters for separate tenants. While this is the most secure option, it can be difficult to manage as the number of tenants grows. On the other hand, giving each tenant their own namespace is easier to manage, but it's not secure as the isolation is very weak. This is where Loft's vCluster tool comes in. With vCluster, you can create virtual clusters that run on top of real clusters and get all the benefits of having a separate cluster without having to deal with the nightmare that comes with managing a bunch of clusters.
In this article, you'll learn how to set up multitenancy in an Amazon EKS cluster using Loft's vCluster tool.
Challenges and Considerations of EKS Multitenancy
While setting up multitenancy in an EKS cluster, there are a few considerations you must take into account.
Security Concerns and User Isolation
This is one of the biggest challenges when setting up multitenancy. Under no circumstances do you want a tenant to be able to access the information of other tenants. This means that the isolation between tenants must be strong, and the tenants should not be aware of each other.
Resource Management and Optimization
Each tenant must be able to use all the resources they need, and the resource usage of one tenant should not be affected by other tenants. For example, one tenant running a resource-intensive pod should not cause problems for other tenants. At the same time, you must be able to introduce proper usage quotas to make sure a tenant doesn't get more resources than they need.
Operational Complexity
While addressing tenants' needs should be your top priority, you shouldn't forget about the DevOps engineers and sysadmins who are responsible for managing the cluster(s). Your multitenancy solution should be easy to maintain, should scale up or down as the number of tenants changes, and should be inexpensive to run.
With the above considerations in mind, the two most common approaches for multitenancy in Kubernetes are cluster-based multitenancy (where each tenant gets a separate cluster) and namespace-based multitenancy (where each tenant gets a separate namespace).
Cluster-based multitenancy is the most secure option, as the isolation between tenants is very strong. There's no way a tenant can access other tenants' clusters, and the resource usage of one cluster is independent of other clusters. The tenants can also be given more control over their respective clusters through admin access. It is, however, expensive to run multiple clusters, and it can be challenging to manage a large number of clusters. Setting up the clusters requires a lot of duplicate work, and the same components (such as Istio, Consul, and Metrics Server) will need to be installed on every cluster.
On the other hand, namespace-based multitenancy is cheap and easy to maintain. However, the isolation between namespaces is very weak, and you risk tenants accidentally accessing one another's information. Also, it's not possible to give admin access to tenants, which can limit the control they have over their instances.
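To make the comparison concrete, namespace-based tenancy typically boils down to a Namespace plus a ResourceQuota (and RBAC bindings) per tenant. The following is only a minimal sketch of that pattern; the tenant name and quota values are illustrative and not taken from this tutorial:
apiVersion: v1
kind: Namespace
metadata:
  name: tenant-a            # one namespace per tenant
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: tenant-a-quota
  namespace: tenant-a
spec:
  hard:
    requests.cpu: "2"       # cap the tenant's total CPU requests
    requests.memory: 4Gi    # cap the tenant's total memory requests
    count/pods: "10"        # cap the number of pods in the namespace
Even with quotas and RBAC in place, tenants still share cluster-scoped resources such as CRDs and cluster roles, which is exactly where this kind of isolation breaks down.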
Why Use vCluster for EKS Multitenancy?
vCluster combines the two approaches and gives you the best of both worlds with the help of virtual clusters. Virtual clusters are full-fledged Kubernetes clusters running on top of other Kubernetes clusters. Virtual clusters reuse the worker nodes and networking of the host cluster but have their own control plane and schedule all workloads into a single namespace of the host cluster. From the tenants' perspective, each virtual cluster is a complete cluster on its own. However, from an administration perspective, each cluster is just a namespace in the host cluster. This approach gives you strong isolation, like separate physical clusters, while maintaining the low cost of the namespace-based solution.
Virtual clusters are inexpensive and easy to manage. You can configure each virtual cluster with its own settings and resource quotas to ensure each tenant can use only what they need. You can also give each tenant admin access to their own virtual clusters so that they can have complete control over the way their cluster operates.
Using Virtual Clusters with EKS
To follow along with the tutorial, you'll need to have:
- kubectl installed on your local machine.
- An AWS account.
- The AWS CLI installed and set up. Make sure you are logged in as an administrator.
- eksctl installed and set up.
- The vCluster CLI installed.
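If you want to double-check that the tooling is in place before you start, version checks like the following will do. These commands are only a sanity check and not part of the setup itself:
kubectl version --client
aws --version
aws sts get-caller-identity
eksctl version
vcluster --version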
The following section describes how to create an EKS cluster, but you can skip it if you already have one. Ensure you have the Amazon EBS CSI driver installed in your cluster. This is needed for vCluster to work correctly.
Creating and Setting Up an EKS Cluster
To create an EKS cluster using eksctl, run the following commands:
export cluster_name=vCluster-eks-demo
export region=us-east-1
eksctl create cluster --name $cluster_name --region $region
Replace cluster_name and region with different values to suit your needs.
Wait for a while as the cluster is spun up. Once it's ready, you should see an output like below:
2023-09-05 19:23:49 [✔] all EKS cluster resources for "vCluster-eks-demo" have been created
2023-09-05 19:23:52 [ℹ] kubectl command should work with "/home/aniket/.kube/config", try 'kubectl get nodes'
2023-09-05 19:23:52 [✔] EKS cluster "vCluster-eks-demo" in "us-east-1" region is ready
eksctl automatically configures kubectl for you, so you can run kubectl get nodes to get the list of nodes from your EKS cluster. If the cluster is set up correctly, you should see output similar to the following:
NAME STATUS ROLES AGE VERSION
ip-192-168-20-179.ec2.internal Ready <none> 109m v1.25.12-eks-8ccc7ba
ip-192-168-40-63.ec2.internal Ready <none> 109m v1.25.12-eks-8ccc7ba
EKS automatically associates an OpenID Connect issuer URL with your cluster. This allows you to associate AWS IAM roles with cluster service accounts. But first, you must associate an IAM OIDC provider with your cluster using the following command:
eksctl utils associate-iam-oidc-provider --cluster $cluster_name --approve
The EBS CSI plugin requires IAM permissions to make calls to AWS APIs. So, create an IAM role with the AmazonEBSCSIDriverPolicy policy:
eksctl create iamserviceaccount \
--name ebs-csi-controller-sa \
--namespace kube-system \
--cluster $cluster_name \
--role-name AmazonEKS_EBS_CSI_DriverRole \
--role-only \
--attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy \
--approve
Finally, install the EBS CSI driver. Replace <YOUR_ACCOUNT_NUMBER> with your twelve-digit AWS account number:
eksctl create addon --name aws-ebs-csi-driver --cluster $cluster_name --service-account-role-arn arn:aws:iam::<YOUR_ACCOUNT_NUMBER>:role/AmazonEKS_EBS_CSI_DriverRole --force
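Before moving on, you can optionally verify that the add-on came up correctly. These checks aren't part of the original setup steps, just a quick confirmation:
eksctl get addon --name aws-ebs-csi-driver --cluster $cluster_name
kubectl get pods -n kube-system | grep ebs-csi
The first command should report the add-on as active, and the second should list the ebs-csi-controller and ebs-csi-node pods.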
Setting Up and Using vCluster
Consider the following scenario: your organization has sales and admin teams. The sales team needs access to the Kubernetes cluster to spin up demos for clients. However, since their resource needs are small, you should give them a small resource quota. On the other hand, admins require substantial resources, so you should give them a higher quota. You want to enable multitenancy so the sales and admin teams get their own clusters. vCluster to the rescue.
Let's set up the virtual cluster for the sales team first. Create a file named sales.yaml with the following content:
isolation:
  enabled: true
  podSecurityStandard: baseline
  resourceQuota:
    enabled: true
    quota:
      requests.cpu: 5
      requests.memory: 10Gi
      requests.storage: "20Gi"
      requests.ephemeral-storage: 100Gi
      limits.memory: 10Gi
      limits.ephemeral-storage: 100Gi
      services.nodeports: 10
      services.loadbalancers: 1
      count/endpoints: 10
      count/pods: 5
      count/services: 10
      count/secrets: 60
      count/configmaps: 60
      count/persistentvolumeclaims: 10
  networkPolicy:
    enabled: false
The above YAML file sets up the resource quotas for the virtual cluster. As you can see, among other restrictions, the sales team is only allowed to have a total of five pods.
Create the virtual cluster with the following command:
vcluster create sales -f sales.yaml
vCluster will create a namespace named vcluster-sales in the host cluster and create a virtual cluster in it. After creating the virtual cluster, vCluster will automatically connect to it. Press Ctrl+C to stop the connection.
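If you're curious what this looks like from the host cluster's side, you can inspect that namespace once you've stopped the connection, so that kubectl points back at the EKS cluster. A quick check, assuming the default vcluster-sales namespace name:
kubectl get pods -n vcluster-sales
You should see the virtual cluster's control plane pod, along with any workload pods that vCluster syncs into this namespace later on.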
Let's now move to the admin cluster. Create a file called admin.yaml with the following code:
isolation:
  enabled: true
  podSecurityStandard: baseline
  resourceQuota:
    enabled: true
    quota:
      requests.cpu: 10
      requests.memory: 30Gi
      requests.storage: "50Gi"
      requests.ephemeral-storage: 100Gi
      limits.memory: 40Gi
      limits.ephemeral-storage: 100Gi
      services.nodeports: 10
      services.loadbalancers: 2
      count/endpoints: 20
      count/pods: 20
      count/services: 20
      count/secrets: 100
      count/configmaps: 100
      count/persistentvolumeclaims: 20
  networkPolicy:
    enabled: false
This is the same structure as sales.yaml, except the limits have been increased. The admin team gets twenty pods. Create the cluster now:
vcluster create admin -f admin.yaml
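At this point, both virtual clusters should be running. You can confirm this with the vcluster CLI:
vcluster list
The output lists each virtual cluster along with the host namespace it lives in and its current status.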
Using the Virtual Clusters
Now that you have two virtual clusters ready, you can learn how to use them.
Connecting to a Virtual Cluster
You can connect to a virtual cluster using the vcluster connect command:
vcluster connect sales
vCluster will open a connection to the sales virtual cluster. You must keep this terminal open for as long as you want to stay connected to this cluster. You can press Ctrl+C to stop the connection at any point.
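If you only need to run a single command against a virtual cluster, the vcluster CLI also lets you append it after --, so you don't have to keep a dedicated terminal open. For example:
vcluster connect sales -- kubectl get ns
This opens the connection, runs the command against the sales virtual cluster, and closes the connection again.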
While the connection is open, in another terminal, run the following command to get the list of namespaces:
kubectl get ns
You should get the following output:
NAME STATUS AGE
default Active 19s
kube-system Active 19s
kube-public Active 18s
kube-node-lease Active 18s
Note that you're not seeing the vcluster-sales namespace. This is because you're connected to the sales virtual cluster. From the perspective of kubectl, this is a full-fledged cluster; it has no idea that it's talking to a virtual cluster.
In the original terminal, stop the connection and connect to the admin virtual cluster:
vcluster connect admin
In another terminal, run kubectl get ns again:
NAME STATUS AGE
default Active 19s
kube-system Active 19s
kube-public Active 18s
kube-node-lease Active 18s
This time, even though the output is the same, you're actually seeing namespaces from the admin virtual cluster!
Deploying Applications inside Virtual Clusters
Let's create a deployment in a virtual cluster. Connect to the sales virtual cluster and apply the Nginx deployment manifest:
kubectl apply -f https://k8s.io/examples/application/deployment.yaml
Run the following command to get the list of all deployments:
kubectl get deployments
The output will look like this:
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 2/2 2 2 6m
Note that the deployment has two pods. Let's scale the deployment and see whether the resource quota is enforced. Remember, the sales team is only allowed a total of five pods, and two pods are already claimed by system components, which leaves room for three pods of their own. So, scale the deployment to four replicas using the following command:
kubectl scale deployment/nginx-deployment --replicas=4
Run kubectl get deployments again and observe that one of the pods is not running:
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 3/4 4 3 102s
Run kubectl get pods to list all the pods:
NAME READY STATUS RESTARTS AGE
nginx-deployment-7fb96c846b-48hkt 1/1 Running 0 7m
nginx-deployment-7fb96c846b-9f56g 1/1 Running 0 7m
nginx-deployment-7fb96c846b-4vhl7 0/1 Pending 0 28s
Run kubectl describe pod <the name of the failing pod> to see the event logs. The events will show that the limit on pod count is five and five pods are already running, so the last pod couldn't be scheduled.
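If you want to inspect the quota itself, stop the vcluster connection so that kubectl points back at the host EKS cluster, then describe the ResourceQuota that vCluster created in the tenant's namespace. A minimal check, assuming the default vcluster-sales namespace (the quota's name may vary between vCluster versions):
kubectl describe resourcequota -n vcluster-sales
The output shows the hard limits from sales.yaml next to the amounts currently in use, including count/pods.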
Connect to the admin virtual cluster, and in another terminal, run kubectl get deployments. You should get the following output:
No resources found in default namespace.
You can't see the deployment you created in the sales virtual cluster, but this is to be expected. Since you're connected to the admin virtual cluster, you cannot access any resource outside of it. Yay for multitenancy!
As you did before, create the same deployment:
kubectl apply -f https://k8s.io/examples/application/deployment.yaml
And scale it up:
kubectl scale deployment/nginx-deployment --replicas=4
Run kubectl get deployments and verify that all the pods are running:
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 4/4 4 4 26s
Since the admin team has a limit of twenty pods, you can easily scale the deployment to four pods.
Deleting Virtual Clusters
You can now delete the clusters with the vcluster delete command:
vcluster delete sales
vcluster delete admin
Monitoring in vCluster
You can install Metrics Server in a virtual cluster to enable metric collection. However, vCluster needs RBAC permissions to get node metrics from the cluster. For that, you need to enable real node syncing, which you can do with the following configuration:
sync:
  nodes:
    enabled: true
Save the above in a file named monitoring.yaml and create a virtual cluster:
vcluster create monitoring -f monitoring.yaml
You can now install Metrics Server inside the virtual cluster:
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
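Metrics Server needs a little time before it starts reporting data. You can wait for its rollout to finish with the following command (the metrics-server deployment name in kube-system comes from the manifest above):
kubectl -n kube-system rollout status deployment/metrics-server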
Deploy a few resources so that you can monitor usage. You can make use of the same deployment, if you want, by running kubectl apply -f https://k8s.io/examples/application/deployment.yaml.
Run kubectl top nodes and kubectl top pods to monitor resource usage:
$ kubectl top pods --all-namespaces
NAMESPACE NAME CPU(cores) MEMORY(bytes)
default nginx-deployment-7fb96c846b-g9gzp 2m 1Mi
default nginx-deployment-7fb96c846b-qmxpx 2m 1Mi
kube-system coredns-7db69cf49d-nrxs6 2m 9Mi
kube-system metrics-server-5b4fc487-8rqdf 2m 10Mi
If you have a Metrics Server installation in the host cluster (the EKS cluster), you can also proxy the underlying host cluster's metrics-server instead of installing a dedicated metrics-server in the virtual cluster.
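At the time of writing, the vCluster values format used in this tutorial exposes this through a proxy section; something like the following should work, but check the vCluster documentation for your version, as the configuration format has changed between releases:
proxy:
  metricsServer:
    nodes:
      enabled: true
    pods:
      enabled: true
With this enabled, kubectl top nodes and kubectl top pods inside the virtual cluster are served by the host cluster's metrics-server.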
Conclusion
With the rise of Kubernetes, the need to achieve multitenancy in Kubernetes clusters has increased rapidly. The traditional approaches of cluster-based multitenancy and namespace-based multitenancy both fall short in certain regards. With virtual clusters, the best of both worlds is combined into an inexpensive, easy-to-manage, and secure package. Getting started with vCluster is simple and straightforward, even with cloud-based services like EKS, as you've seen by setting up and using vCluster in an EKS cluster in this tutorial.
If you're interested in vCluster, feel free to read the vCluster tutorial and start experimenting with vCluster today!