Many companies have adopted Kubernetes recently. However, most of them do not yet realize its full potential because their actual Kubernetes usage remains very limited. Kubernetes has evolved dramatically: it is no longer only a technology for operations; non-ops engineers can work with it, too. Kubernetes adoption should therefore not end with the ops team, that is rather just where it starts.
So, it now often makes sense to also include engineers in the Kubernetes adoption process. As the latest Stack Overflow Developer Survey shows, engineers appreciate this: they want to work with Kubernetes if they are not using it yet and like working with it once they have started.
An easy way to have more developers start working with Kubernetes is to provide them with self-service namespaces. In this article, I will describe what self-service namespaces are, why they are a game-changer for Kubernetes adoption, and how to get them.
# What are self-service Kubernetes namespaces?
Self-service namespaces are Kubernetes namespaces that users can create on demand without being an admin of the cluster the namespaces run on. As such, self-service namespaces run on a shared Kubernetes cluster and are created in a simple, standardized way by their users, e.g. via the UI of a self-service namespace platform.
Self-service namespaces thus give engineers easy, always-available access to Kubernetes, which is a huge advantage over the alternatives: Local Kubernetes solutions such as minikube have to be set up and configured by the engineers themselves and so are never readily available, while giving every developer their own cluster in the cloud is very expensive. Individual clusters are also often unfeasible due to restricted cloud access rights, and unnecessary because simple namespaces suffice for most standard use cases.
Providing namespaces in a self-service fashion, rather than having admins create them manually, is therefore decisive: only self-service eliminates the most important dev productivity impediment, "waiting for central IT to provide access to infrastructure".
Overall, self-service namespaces are the easiest way of giving engineers readily available Kubernetes access.
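In plain Kubernetes, the self-service aspect can be sketched with RBAC: a ClusterRole that permits creating namespaces, bound to a group of engineers. A minimal sketch, assuming your identity provider puts engineers into a group named `engineers` (the group and object names are illustrative):

```yaml
# ClusterRole that allows its subjects to create and list namespaces
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: namespace-creator
rules:
- apiGroups: [""]
  resources: ["namespaces"]
  verbs: ["create", "get", "list"]
---
# Bind the role to the engineers group (group name is an assumption)
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: engineers-can-create-namespaces
subjects:
- kind: Group
  name: engineers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: namespace-creator
  apiGroup: rbac.authorization.k8s.io
```

Note that plain RBAC alone does not limit how many namespaces a user can create; that is what the quota mechanisms discussed later are for.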
# Benefits of self-service namespaces
Providing a self-service namespace solution has advantages for both sides: the users (engineers) themselves and the admins.
# Benefits for namespace users
1. Velocity: Self-service namespaces are always available and can be created quickly and easily whenever users need them. This makes them useful for a variety of engineering tasks, ranging from cloud-native development to CI/CD pipelines and AI/ML experiments.
2. Independence: The self-service aspect enables engineers to work independently from admins as they do not have to wait for the admins to create a work environment before they can start.
3. Easier Experimentation: This independence also encourages experimentation, as namespaces can now be thrown away and recreated by the users themselves. Users do not have to fear breaking something and can eventually treat namespaces as "cattle" rather than "pets".
User independence can be enhanced further, and the fear of breaking something reduced even more, by using self-service virtual clusters (vClusters), which are very similar to namespaces but provide harder isolation and give engineers even more freedom to configure Kubernetes.
# Benefits for cluster admins
1. Better Stability: Since all namespaces are created in the same standardized way, there is little room for human error in the namespace creation process, which improves the stability of the underlying Kubernetes cluster. Additionally, users are encapsulated in their namespaces, which prevents them from interfering with each other.
2. Less Effort and Pressure: The users' independence reduces the pressure on cluster admins. Admins no longer have to be constantly available to provide work environments for engineers and are thus no longer a bottleneck in the engineers' Kubernetes workflows. They only have to set up the self-service platform once and then ensure that it and the underlying cluster keep running.
3. Focus on Stability and Security: As the admins are not needed in the creation process of every namespace anymore, they can now focus more on the stability and security of the underlying cluster.
Providing self-service virtual clusters can again improve the system, as vClusters offer an even stronger form of multi-tenancy and user isolation. They also let users configure more themselves inside their vCluster, so the underlying host cluster can stay very rudimentary, which leaves a smaller attack surface and less room for human error, further improving stability and security.
# How to get self-service Kubernetes namespaces
The first part you need for a self-service namespace system is an underlying Kubernetes cluster that the namespaces are supposed to run on. If the self-service namespaces will be used for development and testing processes, it makes sense to create a new cluster that is separate from the cluster you run production workloads on.
Since one of the benefits of a self-service namespace solution is that many users can share it, the cluster needs to run in the cloud rather than locally (though you may test your setup with a local cluster first and then start over with a "real" version in the cloud).
Here, it does not matter whether the cluster runs in a public or private cloud, or whether it is self-managed or managed by the cloud provider. However, it often makes sense to use a cluster similar to your production cluster (e.g. use AWS if production runs on AWS), because this makes development, testing, and the other processes you run in the self-service namespaces more realistic.
A second central component of a self-service namespace solution is permission and user management. This lets admins control who is allowed to create namespaces and keep an overview of who is using what.
Especially in larger teams, a single sign-on (SSO) solution is helpful: admins do not have to add users manually, and users can start immediately. If you build a self-service namespace system yourself, solutions such as dex may help with this task.
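As a rough idea of what wiring up SSO with dex looks like, here is a minimal configuration sketch using GitHub as the identity provider. All URLs and credential placeholders are assumptions, and a real setup needs the OIDC client side configured as well:

```yaml
# Minimal dex configuration sketch (GitHub as identity provider).
# issuer and redirectURI are placeholders for your own deployment.
issuer: https://dex.example.com
storage:
  type: kubernetes          # store dex state as CRDs in the cluster
  config:
    inCluster: true
connectors:
- type: github
  id: github
  name: GitHub
  config:
    clientID: $GITHUB_CLIENT_ID       # supplied via environment
    clientSecret: $GITHUB_CLIENT_SECRET
    redirectURI: https://dex.example.com/callback
```

With such a setup, engineers log in with their existing GitHub accounts, and the groups the identity provider reports can be used in RBAC bindings.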
While you want to enable users to create namespaces on demand, you also want to prevent excessive usage in terms of CPU, memory, and potentially other factors such as the number of containers, services, or ingresses. Such limits are very helpful for cost control, but be careful not to restrict users in their work: within their limits, it should be up to the users how they allocate their resources.
Implementing effective user limits is much easier with manually provisioned, statically assigned namespaces than with dynamic namespaces created by users on demand. This is because Kubernetes Resource Quotas work on a per-namespace basis, not a per-user basis: a user who can create namespaces on demand could multiply their quota simply by creating more namespaces.
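To make the per-namespace scope concrete, here is a standard Kubernetes ResourceQuota. It caps resources for one namespace only (the namespace name and the numbers are illustrative):

```yaml
# A standard ResourceQuota: it limits resources for ONE namespace.
# It cannot cap a user's total usage across many self-created namespaces.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: dev-alice      # namespace name is illustrative
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "20"
    services: "5"
```

Each additional namespace a user creates would come with its own fresh quota, which is exactly the loophole that aggregated quotas close.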
However, since you want to limit users and not namespaces, you need to solve this problem to get sensible user limits. For this, you need aggregated resource quotas, which the open-source solution kiosk provides.
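With kiosk, quotas are aggregated across all of the namespaces ("Spaces" in kiosk terms) that belong to an Account. A sketch based on kiosk's AccountQuota CRD (the account name and limits are illustrative):

```yaml
# kiosk AccountQuota: enforced across ALL namespaces of the account,
# no matter how many the user creates on demand.
apiVersion: config.kiosk.sh/v1alpha1
kind: AccountQuota
metadata:
  name: quota-johns-account
spec:
  account: johns-account    # the kiosk Account this quota applies to
  quota:
    hard:                   # same syntax as a ResourceQuota's spec.hard
      pods: "10"
      limits.cpu: "4"
      limits.memory: 8Gi
```

This gives you the per-user limits described above while still letting users decide how to spread those resources over their namespaces.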
# Make vs. buy
Now that you know the most essential components for a self-service namespace system, you need to decide if you want to build this system yourself or just buy an existing off-the-shelf solution.
Several large organizations have already built an internal Kubernetes platform for namespaces. Spotify is a very good example: there was a public talk at KubeCon North America 2019 about their platform, so you can learn from their experience. However, even when using open-source components such as dex or kiosk, building your own namespace self-service platform takes a lot of effort, which is probably why mainly larger organizations or companies with very special needs go this way.
In contrast, buying an existing off-the-shelf solution is feasible for organizations of any size and has the advantage that you can get started very fast without a large upfront investment. Additionally, you get a specialized service that goes beyond the minimal feature set you would probably build on your own. One example of such a ready-to-use solution is loft. loft builds on kiosk internally and, besides self-service namespaces on top of any connected cluster, provides some useful additional features: it works with multiple clusters, has a GUI, a CLI, and a sleep mode to save cost, and it offers a virtual cluster technology for creating self-service Kubernetes work environments that are even better isolated than namespaces.
If you enable your engineers to create namespaces independently and on demand, this will change how Kubernetes is used in your organization. Especially if you have already adopted Kubernetes and now want to spread its usage to more people, a self-service namespace system is a very good solution. It answers the fundamental question of how to give engineers easy, independent Kubernetes access while remaining admin-friendly: admins can manage it easily and so have more time to care for the underlying cluster's stability.
To get a self-service namespace system, you need to decide if you want to make or buy it. Making it is the right solution for companies with very special needs, but even then, you can build upon already existing open-source components that will make your life much easier. For most companies, buying is still a more practical approach because you get a full solution from a specialized vendor without a huge upfront investment.
No matter how you decide, having a self-service namespace platform will help you to take the next step towards more effective use of Kubernetes at your organization.