Kubernetes: Virtual Clusters For CI/CD & Testing

Daniel Thiry

The use of virtual clusters (vClusters) in Kubernetes, i.e. running virtual clusters inside of physical clusters, has the potential to disrupt the use of Kubernetes in a variety of settings, from cloud-native development to machine learning experiments. (For an overview of the general benefits of virtual clusters and their use cases, refer to this article.) One of these settings is continuous integration/continuous deployment (CI/CD) and the related testing of software. In this post, I will describe how virtual clusters can be used for CI/CD and testing and what their benefits are for this use case.

#The Status Quo

To understand the case for virtual clusters for CI/CD and testing, one needs to understand the current state of the art and the basic challenges:

Many companies have now implemented CI/CD pipelines in their development process that enable engineers to simply run their software in a real-life environment. This is also important for properly testing the software, as realistic testing is only possible in a realistic environment. The benefit of such a pipeline-based approach is that the engineers do not need to manage, or be experts in, the environment their software runs in at the other end of the pipeline.

With this approach, it is even possible to run tests behind the pipeline in the most realistic environment of all: the same kind of environment the actual production system runs in. For many companies, this environment is now Kubernetes.

Still, one question needs to be answered: Where does this environment come from? There are generally two approaches: the engineers can share one environment that is managed centrally by an admin, or they can use individual Kubernetes clusters. Sometimes the approaches are also mixed, with micro-teams sharing clusters. Unfortunately, all of these approaches come with downsides:

#Shared Environments

  • One-Size-Fits-All / Inflexibility. When all engineers use the same environment, this environment needs to work for every situation. However, this is often not the case because some tasks require configuration changes. Since these changes would affect the whole environment, and thus all other engineers, engineers are usually not allowed to make any config changes. This makes shared environments very inflexible.

  • Instability / Vulnerability. Even if engineers are not allowed to modify the cluster configuration, clusters can still crash, and in these situations all engineers are affected at the same time. Such an occasional crash will often completely halt the software development teams, which makes the shared system highly critical. This can lead to postponing updates to avoid any risk associated with them, which in turn disrupts the workflow of engineers who have to wait for a necessary update. Additionally, to make a shared environment feasible for CI/CD and testing in the first place, engineers need to be isolated from each other, which is usually done with namespaces, a rather weak form of isolation.

#Individual Clusters

  • Cost Inefficiency / Expensiveness. The use of individual clusters for developers is a much more secure and flexible solution. However, it is also very expensive and cost-inefficient. Since most CI/CD pipelines have per-minute pricing, it is normally not feasible to start clusters on demand during the pipeline run because this would hold up the pipeline and increase its cost. For this reason, as well as for general convenience, clusters for CI/CD and testing often keep running all the time, whether they are used or not. Given the high cost of computing resources, especially in public clouds, this is a very wasteful way of providing an execution environment.

  • Complexity / No Oversight. Additionally, clusters are usually created only by administrators, who have the more extensive rights required to start or shut down clusters. With an abundance of clusters, especially in large teams, it becomes very hard for these admins to keep an overview, as it is not easy to figure out which clusters are still used and which are running idle and should be shut off. Since engineers tend to ask for new clusters from time to time, this problem gets worse over time, leading to many unused clusters that nobody really knows about but that still incur cost.

Combining the two approaches, i.e. running many clusters that are each shared by a very small team, can be even worse. This only increases the confusion about which clusters are still needed, as not every engineer knows exactly what their teammates are doing and may never have used a cluster that others still need. In these situations, the admins will often hear answers like “What cluster?”, “I don’t know if XYZ still needs this”, or “Oh yes, you can delete that” (when the cluster is actually still needed).

#Virtual clusters for CI/CD and testing

If any of these problems sound familiar to you, you should take a look at vClusters. Virtual clusters have some advantages that can solve the challenges of using Kubernetes in CI/CD and for testing:

  • Quick Startup: One benefit of virtual clusters is that they are ready to use in just 3-5 seconds. This makes it possible to start a fresh vCluster exactly when it is needed, e.g. when the CI/CD pipeline is triggered. As a result, you no longer need standby clusters that run all the time even when unused, which saves a lot of time and cost while integrating neatly into engineering workflows.

  • Easy Cleanup and Restoring: Since vClusters can be started so easily, it is also possible to clean them up or throw them away, e.g. after a test has finished. If an engineer needs a cluster again, they can restore or recreate it with the exact same specification. So standby clusters are not needed even when the same configuration will be used again later.

  • Shared Physical Cluster: As virtual clusters are just an abstraction, only one physical cluster is still needed, although the engineers get the experience of individual clusters. The physical cluster does not even need extensive tooling installed and can remain quite “pure” and simple. Since managing and overseeing one simple cluster is much easier, the management effort for admins is significantly reduced.

  • Strong Isolation: Virtual clusters behave like physical clusters and thus provide stronger isolation than namespaces. Engineers work and test in encapsulated environments that feel like individual clusters, and breaking out of a vCluster is much less likely than breaking out of a namespace.

  • Individual Configuration: In virtual clusters, engineers have the freedom to configure everything as they need it because their changes do not affect other virtual clusters. For example, engineers can even work with different Kubernetes versions in their virtual clusters on the same shared physical cluster. This also allows them to experiment with configurations, and even if they make a mistake, they can start from scratch with a fresh vCluster in no time.
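To illustrate the quick-startup and cleanup workflow described above, a CI job could create a throwaway vCluster per pipeline run, for example with the open-source vcluster CLI. The following GitLab CI sketch is illustrative, not a definitive setup: the job name, the `deploy/` manifests, and the `run-integration-tests.sh` script are assumptions, and the exact CLI flags should be verified against the vcluster documentation for your version:

```yaml
# .gitlab-ci.yml (sketch) – one ephemeral virtual cluster per pipeline run.
# Command flags and script paths are illustrative assumptions.
integration-test:
  stage: test
  script:
    # Create a fresh virtual cluster inside the shared physical cluster
    # (ready to use in seconds, no standby cluster required)
    - vcluster create ci-$CI_PIPELINE_ID --namespace ci-$CI_PIPELINE_ID
    # Deploy the application and run the tests inside the virtual cluster
    - vcluster connect ci-$CI_PIPELINE_ID -- kubectl apply -f deploy/
    - vcluster connect ci-$CI_PIPELINE_ID -- ./run-integration-tests.sh
  after_script:
    # Throw the virtual cluster away when the job is done
    - vcluster delete ci-$CI_PIPELINE_ID
```

Because the vCluster is recreated from the same specification on every run, each pipeline gets a clean, reproducible environment; a per-vCluster values file can also pin a different Kubernetes version if a team needs one.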

Virtual clusters thus solve all of the current problems with Kubernetes access and environment provisioning for CI/CD and testing.

From an administrator's perspective, a virtual cluster setup is easy to manage and maintain: only one simple physical cluster exists, and it remains stable because every engineer is encapsulated in a separate sandbox environment. To get this setup, the admin only has to create a Kubernetes cluster and install a virtual cluster software such as loft, so the whole setup can be done in less than an hour.
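As a rough sketch of that one-time setup, assuming a managed cloud cluster and a Helm-based installation (the cluster name is made up, and the chart name and repository URL should be verified against the current loft installation docs):

```shell
# One-time admin setup (illustrative; verify chart/repo against the loft docs)

# 1. Create a single physical cluster, e.g. a managed cluster in a public cloud
#    (Google Cloud shown here; any managed Kubernetes service works)
gcloud container clusters create vcluster-host --num-nodes=3

# 2. Install the virtual cluster platform into it via Helm
helm upgrade --install loft loft \
  --repo https://charts.loft.sh \
  --namespace loft \
  --create-namespace
```

From then on, engineers create and delete their own vClusters on demand, and the admin only maintains this one physical cluster.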

From an engineer's perspective, virtual clusters feel like physical clusters, giving them the freedom to experiment and make config changes if needed. They are also never interrupted in their workflow waiting for a cluster to be newly provisioned, as they can create virtual clusters on demand, especially with solutions that are made for these use cases and even provide engineer-friendly GUIs, such as loft.

From a business perspective, virtual cluster solutions minimize the cost of computing resources, as idle standby clusters are eliminated. Additionally, the engineering team's productivity can be expected to improve, as the Kubernetes environment issues described above are solved in a standardized way.


Overall, virtual clusters have the potential to disrupt the use of Kubernetes for CI/CD and testing, as they solve all of the current challenges of providing a Kubernetes environment for these cases. By addressing concerns about cost, stability, and productivity at the same time, they benefit all involved stakeholders. This, in turn, can help drive Kubernetes adoption in organizations that still doubt whether Kubernetes should be integrated into their CI/CD and testing processes.

Photo by Just a Couple Photos from Pexels