Kubernetes Sandboxes – Easy Development in a Realistic Environment


Sandbox environments are very common among software developers because they allow developers to work, test, and experiment in an environment that is isolated from the production system but still provides a realistic experience. As a consequence of this early validation, software quality improves and the number of bugs decreases, for example because “it works on my machine” problems can be ruled out.

Now that the production environment is often Kubernetes, engineers should start to work with Kubernetes, too. However, to establish efficient development workflows with Kubernetes, you need special development tools, and you should also use a Kubernetes sandbox environment, which is the focus of this article.

Great Cloud-Native Dev Tools Already Exist

The good news is that there are many open-source tools that solve the problem of how to interact with Kubernetes when you develop software for it. Examples of mature tools in this area are Skaffold, DevSpace, Tilt, Telepresence, and Okteto. Each of them takes a slightly different approach to solving a common problem: how to streamline development and deployment processes with Kubernetes.

While it is worth looking at and comparing all of these Kubernetes tools, each of them will ultimately allow developers to work faster and more efficiently with Kubernetes without having to configure everything themselves.

For a comparison, take a look at the Kubernetes development tool section of my article about Kubernetes development workflows.

No matter which tool you choose, you still face an essential challenge that none of the cloud-native tools addresses: How can developers easily get a Kubernetes sandbox to work in?

Local Clusters as Kubernetes Sandboxes?

One approach to getting a Kubernetes sandbox environment is to use local clusters with tools such as kind, Minikube, or k3s. These are open-source tools that allow engineers to run Kubernetes on their local computers. This easy and cost-free setup makes local clusters a good solution to get started fast. The engineers do not even need cloud platform access because everything happens locally.
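As a minimal sketch of how little is needed to get started, this is what spinning up a local cluster with kind looks like (assuming kind, kubectl, and Docker are already installed; the cluster name is just an example):

```shell
# Create a local Kubernetes cluster named "dev-sandbox" running inside Docker
kind create cluster --name dev-sandbox

# Verify that the new cluster is reachable
kubectl cluster-info --context kind-dev-sandbox

# Tear the cluster down again when you are done
kind delete cluster --name dev-sandbox
```

Minikube and k3s offer similarly short setup commands, which is why local clusters are so popular as a first step into Kubernetes.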

However, this also means that the developers become the admins of their clusters. Even though a local cluster obviously does not need to be as stable and secure as a production cluster, since only one developer works with it, the effort associated with administering it should not be underestimated, especially if the developers have no admin or Kubernetes experience. Additionally, there are some differences between the various local Kubernetes solutions and between the local setups of the engineers (hardware, operating system, configuration, …), which makes it difficult to standardize the admin tasks completely. This is why some developers struggle to work with local Kubernetes.

Another problem with this solution is that local environments are not completely realistic copies of the production environment, which is an important element of good dev sandboxes. On the one hand, this is due to the special Kubernetes distributions used locally; on the other hand, it is due to missing scalability and other cloud features (especially those of public cloud providers). The missing scalability of computing resources on local computers also means that the engineers' machines are often under heavy load and reach their limits quite fast, which can result in annoyingly loud fans (best case) or system crashes (worst case).

Therefore, local clusters are not really developer-friendly (for non-experts) and as such not perfectly suited as Kubernetes sandboxes. However, as I wrote in my article comparing Kubernetes development environments, I still think that local Kubernetes solutions have their place, as they are very useful in some scenarios. For example, they are great for getting first experience with Kubernetes if you want to learn it, or for trying a new tool or setup without having to start a cluster in the cloud. Still, as Kubernetes sandboxes, they are only suitable for more experienced engineers who are not working on very compute-intensive applications.

Shared Development Clusters as Kubernetes Sandboxes

Another approach to provide Kubernetes sandboxes to engineers is to use shared development clusters. This approach is easier to scale and can be used quite easily by inexperienced engineers, which makes it more suitable for larger teams and teams with different technical backgrounds.

With shared clusters as sandboxes, the developers no longer have to set up and manage their work environments, as this can be done centrally by admins who are more experienced with Kubernetes. This allows the developers to get started very fast and to get a real Kubernetes experience immediately.

Since the sandboxes run in a scalable cloud environment, they have practically unlimited computing resources available, so they can be used even for very complex applications. This allows engineers to work even from weak laptops without long waiting times or the fear of crashing the environment. It is also possible to switch between devices for development, as they only need to be connected to the sandbox and do not require much local configuration. This can be particularly useful in flexible work settings, e.g. when working from home with a different computer than at the office.

Another advantage of cloud-based Kubernetes sandboxes is that they provide new opportunities for collaboration and sharing. The fact that Kubernetes is declarative and all sandboxes are very similar makes it easy to replicate a scenario and problem, so colleagues can help each other solve a problem together. It is also possible to share access to the same environment, which allows for collaborative debugging. The easy replicability can also be useful if engineers have to repeat tests and experiments multiple times, as is often the case for machine learning applications.

For these reasons, I believe that shared development clusters are a perfect solution for Kubernetes sandboxes that can be used for any setup, team, and use case.

If you want to learn more about the two different options for Kubernetes sandboxes, take a look at this article comparing local clusters and remote clusters for Kubernetes-based development.

How to Provide Cloud-Based Kubernetes Sandboxes to Engineers

While they are a powerful solution, providing cloud-based sandboxes to engineers requires overcoming some technical challenges. These challenges are mostly about getting Kubernetes multi-tenancy right:

First, you need to ensure that the cluster is shared securely, i.e. that the tenants of your cluster are isolated from each other. This is the basis for actual sandboxes that engineers can work in without impacting their colleagues.

Second, you need to implement a user management system to determine who has the right to create and use the sandboxes and to assign limits to their usage. The enforcement of such limits on a per-user level is another challenge, as you need to prevent one user from consuming all available computing resources and leaving nothing for the others. (This can be solved with the open-source Kubernetes multi-tenancy extension kiosk.)
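Under the hood, such limits typically build on Kubernetes' own ResourceQuota mechanism. As a minimal sketch (the namespace name and the quota values are example assumptions), a per-sandbox quota could look like this:

```shell
# Cap the resources that the sandbox namespace "sandbox-alice" may consume
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ResourceQuota
metadata:
  name: sandbox-quota
  namespace: sandbox-alice
spec:
  hard:
    requests.cpu: "2"
    requests.memory: 4Gi
    limits.cpu: "4"
    limits.memory: 8Gi
    pods: "20"
EOF
```

Tools such as kiosk automate applying templates like this across many user namespaces instead of managing each quota by hand.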

Then, you need to provide the engineers with the sandbox environments. The most efficient way to do this is to allow them to create the sandboxes themselves, e.g. with self-service namespaces, as this process can be easily integrated into the development workflows.
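To illustrate what a sandbox provisioning step boils down to, here is a sketch using plain kubectl (the user name "alice" and the namespace name are hypothetical examples; a self-service system would run the equivalent of these commands automatically):

```shell
# Create a dedicated sandbox namespace for one developer
kubectl create namespace sandbox-alice

# Grant that developer the built-in "edit" ClusterRole,
# scoped to her namespace only
kubectl create rolebinding alice-edit \
  --clusterrole=edit \
  --user=alice \
  --namespace=sandbox-alice
```

The point of self-service namespaces is precisely that developers do not need cluster-admin rights to trigger this; the platform performs the privileged steps on their behalf.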

While namespaces are enough for many development use cases, you may alternatively use Kubernetes virtual clusters (vClusters), which isolate users even better and give them more flexibility in terms of Kubernetes configuration. As such, virtual clusters as development environments can even be used by Kubernetes experts who need access to more Kubernetes features, such as CRDs, or want to experiment with the Kubernetes configuration.
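As a sketch of how lightweight this can be, the open-source vcluster CLI can create a virtual cluster inside a single namespace of the host cluster (names are example values; exact flags may differ between CLI versions):

```shell
# Create a virtual cluster inside the host namespace "vcluster-dev"
vcluster create dev-vcluster --namespace vcluster-dev

# Connect to it; it then behaves like a full cluster of its own,
# including the ability to install CRDs
vcluster connect dev-vcluster --namespace vcluster-dev
kubectl get namespaces
```

From the developer's perspective, the virtual cluster looks like a real cluster they fully control, while the admin only sees pods in one host namespace.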

Finally, you need to enable developers, but also admins, to work with and manage the sandboxes. This requires some form of user interface, such as a GUI or a CLI. With this interface, engineers should not only be able to create the sandboxes but also manage their limits, and admins need to manage the users. Since this interface is critical for the adoption and acceptance of Kubernetes in your teams, it should be very user-friendly and easy to understand.

Overall, you are essentially going to build an internal Kubernetes platform, similar to what companies such as Spotify and Datadog have already built for their engineers.

Since this requires an effort that not every company is willing to invest, solutions such as Loft provide all the previously mentioned features out of the box. With such a solution, you only need to install the Loft Kubernetes extension in your cluster, and you can then let your engineers create their own Kubernetes sandboxes that run on your clusters.

Conclusion

Together with cloud-native dev tools, Kubernetes development sandboxes are a great way to enable engineers to work with Kubernetes directly and safely. Using such sandboxes can increase the quality and stability of your software, as the target environment, Kubernetes, is already part of the development phase.

There are two options for Kubernetes sandboxes: they can either run on local clusters or on shared clusters in the cloud. While local clusters are a great solution for more experienced engineers or for developers who want to learn more about Kubernetes, sandboxes in shared clusters are also appropriate for “average” engineers who want to keep their focus on software development and simply use Kubernetes without going into its details.

To provide developers with this experience, you need to offer them a simple way to create and manage Kubernetes sandboxes. This often results in an internal Kubernetes platform, which some larger organizations have already built and which is now also available off the shelf.


Photo by Markus Spiske on Unsplash