Kubernetes Development Environments – A Comparison

Lukas Gentele
Daniel Thiry
17 min read

Kubernetes is no longer mostly an ops technology; it is now highly relevant for many developers as well. As I wrote in my blog post about the Kubernetes workflow, the first step for every developer who starts working directly with Kubernetes is to set up or get access to a Kubernetes development environment.

A Kubernetes work environment is not only the first step but also a basic requirement for working with Kubernetes at all. Still, access to such an environment is often a problem: a VMware study even found that “access to infrastructure is the biggest impediment to developer productivity”. Therefore, Kubernetes development environments should have a high priority for every team that plans to use the technology.

In this article, I will describe and compare four different Kubernetes development environments and explain when to use which dev environment.

  1. Local Kubernetes Clusters
  2. Individual Cloud-Based Clusters
  3. Self-Service Namespaces
  4. Self-Service Virtual Clusters

#6 Evaluation Criteria For Dev Environments

To make the different Kubernetes dev environments comparable, it makes sense to first define the evaluation criteria used. I will rate every environment using the following criteria:

Developer Experience: How easy is it for developers to get started with and use the environment? This includes factors such as the speed of setup, the ease of use, and the knowledge required of developers.

Admin Experience: How easy is it for admins to manage the environments and the overall system? Here, I will consider the complexity of the system, the effort to manage it, and the effort to add additional users.

Flexibility/Realism: How realistic is the dev environment compared to the production environment, and how flexible is it for different use cases? A good development environment should be very similar to the production environment to avoid “it works on my machine” problems, and it should also be freely configurable and usable for many different use cases (e.g. coding, testing, etc.).

Scalability: How scalable is the environment itself and how scalable is the approach if many users are using the system? Especially for complex applications, a lot of computing resources are needed, so the dev environment should be able to provide them. Additionally, the general approach to provide this kind of environment to developers should be feasible also for large teams.

Isolation/Stability: How are users isolated from each other and how vulnerable is the system? Developers should be able to work in parallel without interfering with each other, and the system they use should be stable and secure to avoid costly outages.

Cost: How expensive is this approach? This category should be quite self-explanatory but still is an important factor when choosing the right development environment for your team.

Now that the evaluation criteria are clear, we can start with the comparison of the Kubernetes development environments:

#1. Local Kubernetes Clusters

Local Kubernetes clusters are clusters that run on the individual computer of the developer. There are many tools that provide such an environment, such as Minikube, microk8s, k3s, or kind. While they are not all the same, their use as a development environment is quite comparable.
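
To give a feel for the setup, here is a minimal sketch of how such a local cluster is typically created (exact flags vary by tool and version, and a container runtime such as Docker is assumed to be installed):

```shell
# Create a throwaway local cluster with kind (runs Kubernetes in Docker):
kind create cluster --name dev

# Or with minikube (supports several VM/container drivers);
# the version value below is just an example:
minikube start --kubernetes-version=v1.27.3

# Verify the cluster is reachable from kubectl:
kubectl get nodes
```

Either way, the developer ends up with a kubeconfig context pointing at a cluster on their own machine, and is responsible for keeping it running.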

#Developer Experience: -

Local development environments need to be set up by the developers themselves as they run on their own computers. This can be quite challenging, especially since every local setup is slightly different (different hardware, operating systems, configurations, etc.), which makes it harder to provide a simple setup guide. After the setup is completed, developers are also responsible for maintaining and managing their environments themselves, which they are often not used to if they have no previous Kubernetes experience.

Therefore, the general developer experience is relatively poor (at least for developers without Kubernetes knowledge).

#Admin Experience: o

Admins are not involved in the setup and management of local Kubernetes clusters, which means no effort for them. However, they also do not know whether the developers are able to work with their clusters, and they are generally excluded from the setup and management of the clusters. Still, the admins will probably have to support the developers in case of problems and questions.

Overall, the admin experience is mediocre: the admins do not face their typical challenges but instead have to educate and support the developers individually.

#Flexibility/Realism: o

On the one hand, local clusters always differ somewhat from “real” clusters in a cloud environment. They are often pared-down Kubernetes distributions that lack some features which cannot (and often need not) be replicated locally. This can be seen, for example, in the name “k3s”, a play on the common Kubernetes abbreviation “k8s”. On the other hand, engineers can do whatever they want with their local cluster, so they can also configure it flexibly.

In sum, local clusters score high in terms of flexible configuration but low on realism, as they do not offer all Kubernetes features and thus cannot cover every use case.

#Scalability: - -

Since local clusters can only access the computing resources available on the engineer’s computer, they reach their limit for complex applications relatively fast. Also, the approach of letting engineers create their local clusters themselves does not really scale, as the same process has to be repeated for every engineer with few options for automation.

Scalability is thus a clear weakness of local Kubernetes clusters.

#Isolation/Stability: ++

Every developer has a separate environment that is completely disconnected from any other environment. In theory, these environments can even be used without an internet connection. As such, the isolation of local clusters is perfect. This disconnection also ensures that only an individual environment can fail, never all environments at the same time, which minimizes the vulnerability of this approach to providing developers with a Kubernetes environment.

Isolation and security are definitely a strength of local clusters.

#Cost: ++

Local Kubernetes clusters do not require potentially costly cloud computing resources but only use the locally available computing power. The various local Kubernetes solutions are all open source and free to use.

Using a local Kubernetes cluster for development incurs no direct cost, making it the cheapest solution possible.

#2. Individual Cloud-Based Clusters

Individual clusters running in the cloud are the second type of Kubernetes dev environment. They can either be created by admins, who then give individual developers access, or the developers can create them themselves if they have their own account with the cloud provider.
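
As a rough sketch, creating such an individual cluster from the command line might look like this (the cluster name, zone, and node count are placeholder values; the exact commands depend on the cloud provider and CLI version):

```shell
# GKE (Google Cloud): create a small per-developer cluster
gcloud container clusters create dev-alice \
  --zone europe-west1-b --num-nodes 2

# Fetch credentials so kubectl talks to the new cluster:
gcloud container clusters get-credentials dev-alice --zone europe-west1-b

# The equivalent on AWS EKS, using eksctl:
eksctl create cluster --name dev-alice --nodes 2
```

Whether a developer or an admin runs these commands, the result is one full managed cluster per person, with all the cost and management implications discussed below.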

#Developer Experience: o

The developer experience can vary greatly and depends on how the individual clusters are created: If developers have direct access to the cloud, e.g. via an elaborate Identity and Access Management (IAM) system, they can create their work environment on demand, and the setup is quite easy (especially in public clouds) as it is always the same. Still, they must do this themselves and might need some help with managing the cluster.

If admins create the clusters and distribute access to the developers, the dev experience can become quite bad. While the management of the cluster is now taken care of, the admins become a bottleneck. Here, you will face the previously mentioned problem of waiting for central IT to provide the dev environments.

Overall, in the best case, the dev experience is sufficient if developers have direct cloud access.

#Admin Experience: - -

No matter how the developers get their clusters, the admin experience is always quite bad. If every developer has their own cloud account, the admins will have a hard time getting an overview of the whole system (What is still in use? Who is using what?). In this case, they also have to support the developers in managing the clusters. Since the number of clusters grows proportionally with the number of engineers, the effort also grows with the team size.

If the clusters are created and distributed centrally, the administrators will also have a lot of work. They will have to answer all developer requests for clusters and configuration changes and must always be available, as they are critical for the developers’ productivity. In general, more clusters mean more management effort for admins.

The individual cloud-based cluster approach is a bad solution from the admins’ perspective and necessarily leads to a lot of work on their side, which can even become impossible for them to handle.

#Flexibility/Realism: ++

Since production systems usually also run on Kubernetes in the cloud, having such an environment for development is perfectly realistic. The individual environments can also be freely configured so that they exactly match the needs of the developers or are identical to the production system’s settings.

Individual cloud-based clusters are the best solution to get a highly realistic development environment.

#Scalability: o

In terms of scalability, it helps that the clusters run in a cloud environment, which allows you to scale them up almost infinitely. Still, the scalability criterion also includes how well the general approach scales for larger teams, and here individual clusters can reach a limit, as the admin effort grows with the team size.

Scalability in terms of computing resources is not a problem for individual clusters in the cloud but rolling out such a system in larger organizations will often be infeasible.

#Isolation/Stability: +

Isolating developers at the cluster level is very secure. If you are using a public cloud, the isolation between developers is almost the same as the isolation between different companies, which of course is a high priority for the cloud providers.

100% stability and isolation will probably never be reached in the cloud, but they are as good as possible with individual clusters.

#Cost: - -

Running many clusters is very expensive, for several reasons: First, you will have a lot of redundancy, because every cluster has its own control plane. Second, oversized or unused clusters are almost inevitable with this approach, as either developers are responsible for right-sizing and shutting down clusters, or admins have to do it centrally without the oversight and knowledge of what is still in use.

Additionally, dev environments are only used while developers are working, so many clusters will run idle at night, during holidays, and on weekends. Finally, public cloud providers charge a cluster management fee for every cluster, i.e. for every developer in this case.

Individual clusters for every engineer in the cloud are a very expensive approach to provide Kubernetes development environments.

#3. Self-Service Namespaces

Instead of giving every developer a whole cluster, it is also possible to give them just Kubernetes namespaces. Again, these can either be created centrally by the admins, or developers can be provided with a tool to create self-service namespaces on demand. Providing them centrally comes with many of the disadvantages I already mentioned for individual clusters, so I will focus on the self-service namespace approach here.
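
On the cluster side, a per-developer namespace boils down to two objects: the namespace itself and a RoleBinding that grants Kubernetes’ built-in `edit` ClusterRole within that namespace only. A minimal sketch (all names and the user identity are hypothetical examples):

```yaml
# Namespace dedicated to one developer
apiVersion: v1
kind: Namespace
metadata:
  name: dev-alice
---
# Grant the developer edit rights inside this namespace only;
# cluster-wide resources stay out of reach.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev-alice-edit
  namespace: dev-alice
subjects:
  - kind: User
    name: alice@example.com
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit
  apiGroup: rbac.authorization.k8s.io
```

A self-service platform essentially automates the creation of objects like these whenever a developer requests an environment.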

#Developer Experience: +

As engineers can create the namespaces themselves, they are independent of the admins and never have to wait to get a Kubernetes development environment. At the same time, the namespaces run on a cluster that is managed by admins, so the developers do not have to take care of managing the environment. Namespaces, as constructs within clusters, will often be enough for simpler development work, so developers will be able to do most standard tasks and are only limited in some situations, e.g. when they need CRDs or want to install Helm charts that use RBAC.

Therefore, the developer experience with self-service namespaces is very good for “standard” development tasks and developers without special Kubernetes configuration requirements.

#Admin Experience: +

Admins need to set up an internal, self-service Kubernetes platform once, which may take some time if they want to build it from scratch, as companies such as Spotify did. Alternatively, it is also possible to buy solutions that add this self-service namespace feature to any cluster, such as Loft. In any case, the admins can focus on other tasks such as the security and stability of the underlying cluster once the system is properly set up. Additionally, it is relatively easy to get an overview of the whole system as everything is running in just one cluster.

Self-service namespaces are an admin-friendly solution that requires some initial setup effort.

#Flexibility/Realism: -

Since namespaces run on a shared Kubernetes cluster, developers cannot configure everything individually. For example, all engineers have to use the same Kubernetes version and cannot modify cluster-wide resources. Still, namespaces run in a cloud environment that resembles the production environment, which at least makes them a relatively realistic work environment.

Overall, namespaces may restrict the flexibility of developers in some situations but are generally not an unrealistic dev environment.

#Scalability: ++

The scalability of a self-service namespace system is very good in both aspects: It is possible to scale up the resources of the namespaces because they are running in the cloud (it is also possible to limit developers to prevent excessive usage, of course). At the same time, it is also no problem to add additional users to the system, especially if it provides a Single-Sign-On option.
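
The limits mentioned above are typically enforced with a ResourceQuota per developer namespace. A minimal sketch (namespace name and quota values are placeholders to adjust to your workloads):

```yaml
# Caps the total resources all pods in this namespace may request/use
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-quota
  namespace: dev-alice
spec:
  hard:
    requests.cpu: "4"       # sum of CPU requests across the namespace
    requests.memory: 8Gi    # sum of memory requests
    limits.cpu: "8"         # sum of CPU limits
    limits.memory: 16Gi     # sum of memory limits
    pods: "30"              # maximum number of pods
```

With quotas like this in place, a single developer cannot starve the shared cluster, while the cluster as a whole can still autoscale.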

Namespaces are an efficient way of providing many developers with a Kubernetes environment that can be flexibly scaled up or down.

#Isolation/Stability: -

Namespaces are a native solution for Kubernetes multi-tenancy, but the isolation is not perfect and is rather a form of soft multi-tenancy. However, since the tenants (developers) are trusted, this is not necessarily a problem for development environments. Additionally, namespaces share the same underlying cluster, which means that all namespaces fail if the cluster is down, so the stability of the cluster is essential.

Namespaces are a Kubernetes-native isolation solution, but it is certainly not perfect. However, if the underlying cluster is running solidly, namespaces are still a good solution for trusted engineers within an organization.

#Cost: o

To get the self-service experience, you might need to buy self-service namespace software. Additionally, namespaces running in a cloud environment are not free, as they also require cloud computing resources. However, the underlying cluster and its resources can be shared by many developers, which drives utilization up and prevents unnecessary redundancy. It is also easier to get a central overview of what is running idle, so these namespaces can be shut down. This process can even be automated by a sleep mode.

Overall, namespaces are a very cost-efficient approach to provide developers with Kubernetes access.

#4. Self-Service Virtual Clusters

Virtual clusters (vClusters) are a solution that lets you create Kubernetes clusters within a Kubernetes cluster. Like namespaces, virtual clusters run on a single physical cluster and can be created on-demand by developers if they have access to a vCluster platform.
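
As a sketch of the self-service workflow, creating and using a virtual cluster with the open-source `vcluster` CLI might look like this (the names are placeholders, and flags can differ between versions):

```shell
# Create a virtual cluster inside the host namespace "team-a":
vcluster create dev-1 --namespace team-a

# Connect: switches the kubectl context to the virtual cluster
vcluster connect dev-1 --namespace team-a

# This now runs against the vCluster, not the underlying host cluster:
kubectl get namespaces
```

From the developer’s perspective, the result behaves like a dedicated cluster, while the admin only operates the single underlying host cluster.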

#Developer Experience: ++

The developer experience with virtual clusters is similar to that of namespaces. Developers can easily create them on demand and are thus independent of central IT, but they still do not have to manage the underlying cluster themselves. At the same time, vClusters feel like “real” clusters to developers, so they will usually not be limited by them at all.

Therefore, the dev experience with vClusters is as good as with namespaces but gives the developers even more freedom to do and configure what they want.

#Admin Experience: ++

Regarding the admin experience, self-service namespaces and vClusters are again very similar. After the initial setup, the management effort for admins is very limited, so they can focus on other tasks. However, compared to namespaces, vClusters isolate users better, making it less likely that developers crash the underlying cluster. Additionally, most of the Kubernetes configuration and installation can happen inside the vCluster, so the underlying cluster can be very simple and only has to provide the basic features, which makes the admins’ job even easier.

A self-service vCluster platform thus also provides a very smooth admin experience once it has been set up properly.

#Flexibility/Realism: +

Virtual Clusters run in the cloud, which makes them quite realistic dev environments, especially because the developers can configure them individually to fit their needs. However, vClusters are not exactly the same as real clusters, so the realism is not as perfect as with individual clusters.

Overall, vClusters can be flexibly configured to meet the requirements of different use cases. Since they are a virtual construct, there are still some minor differences compared to physical clusters.

#Scalability: ++

The scalability of vClusters is as good as for namespaces. vClusters can have different and basically endless computing resources in the cloud. The self-service provisioning on a platform that runs on a single cluster also makes it possible to use vClusters with many engineers.

A self-service vCluster solution will fulfill all needs in terms of scalability for development environments.

#Isolation/Stability: o

The isolation of virtual clusters is better than the isolation on a namespace-level, but vClusters are still a form of Kubernetes multi-tenancy and as such, the vClusters share a common physical cluster. A benefit of virtual clusters is that the underlying cluster can be very basic, which makes it easier to get it stable.

Overall, the isolation of vClusters is decent and the stability of the whole system can be quite good. However, a lot of the stability is determined by the stability of the underlying cluster.

#Cost: o

A virtual cluster platform is not free, because it requires cloud computing resources and platform software. In this category, vClusters are again very similar to namespaces: Sharing a cluster improves utilization and makes it easier to get an overview and shut down unused virtual clusters, which can again be automated by a sleep mode.

A virtual cluster platform is as cost-efficient as a namespace platform, but no cloud-based solution will ever be completely free.

#When to use which dev environment

After having described the four different types of Kubernetes development environments, the question remains which environment is right for your situation.

From my experience, many companies and engineers start with local dev environments. The fact that they are free and run on local computers reduces the initial hurdle as no complicated budget approvals are needed. Local environments are also a good solution for hobby developers and small applications but also for Kubernetes experts who know how to handle and set up these environments.

As organizations progress on their cloud-native journey, they want to roll out Kubernetes to more developers, including those without any Kubernetes experience. These organizations often start with the “obvious” solution: just give every developer their own cluster. After some time, they often realize that this approach is very expensive and becomes more complex as the number of developers grows. Therefore, the individual cloud-based cluster approach is often just a temporary solution, unless the number of developers is relatively low and cost does not matter too much.

To avoid the high cost and management effort for larger teams, many organizations then provide developers with either namespaces or virtual clusters (virtual clusters are relatively new, so namespaces are still more common). However, since these companies have realized that the scalability of the approach matters a lot, they want to do this in an automated fashion and therefore either start developing their own internal Kubernetes platforms, as Spotify did, or buy existing solutions, such as Loft. Whether namespaces are sufficient or virtual clusters are the better solution depends on the complexity of the application and on the expertise and requirements of the developers.


#Conclusion

As more companies want their developers to work with Kubernetes, more developers also need access to a Kubernetes work environment. For this, there are several options that all have their strengths and weaknesses.

While local development clusters are a good and cheap starting point, they are often not the right solution for inexperienced developers or larger organizations.

These organizations then turn to the “obvious” solution of individual cloud-based clusters, which are unbeatable in terms of flexibility and realism but are also hard to manage for admins and can become very expensive.

Finally, shared clusters, which are the basis for either self-service namespaces or virtual clusters, are a solution that combines cost-efficiency with a good developer and admin experience. Although these solutions are not free and require some initial setup effort, they are a long-term solution even for larger companies.

