More developers than ever are now working with Kubernetes. This means their workflows have to change to account for a technology that was not originally made for developers. However, integrating Kubernetes into efficient development workflows is not easy and involves several aspects that I will discuss in this article.
Not all companies use Kubernetes in the same way or to the same extent. To get an overview of the different stages of Kubernetes adoption and cloud-native maturity, read my article about the journey of adopting cloud-native development.
#1. Setting Up Kubernetes Work Environments
The first step toward an efficient Kubernetes development workflow is to decide which kind of work environment to use. The question is not only which cloud environment or managed Kubernetes service to choose, but also whether to use a cloud environment at all: unlike production systems, development can also run entirely in local Kubernetes environments.
#Local Kubernetes or cloud-based Kubernetes
Of course, local and cloud-based work environments both have advantages and disadvantages: local environments such as Minikube are free to use, while cloud environments cost money. Local environments are also available offline and fully independent of other developers and other infrastructure.
Cloud environments have the advantage that they provide more computing resources, run “standard” Kubernetes (not versions adapted to run on local computers), and are easier to start. The provisioning of such environments can even be automated with internal Kubernetes platforms, so they require no effort or special knowledge on the developer’s side.
For more information about the pros and cons of local and remote clusters, take a look at this article: Local Cluster vs. Remote Cluster for Kubernetes-Based Development
#Setup process for Kubernetes work environments
From a workflow perspective, it is important to establish a standardized way of setting up work environments for developers. This of course depends heavily on the type of work environment you use. Local environments need to be set up individually by every developer because they run on each developer’s own computer, which prevents a central setup. You should therefore provide detailed instructions on how to start the local environment, and it also makes sense to determine which local Kubernetes solution to use.
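Such instructions can be as simple as a short, documented command sequence that every developer runs once. A minimal sketch for a Minikube-based setup might look like this (the pinned version and addon are illustrative assumptions, not a prescription):

```shell
# One-time local cluster setup (Minikube assumed here)
minikube start --kubernetes-version=v1.21.0   # pin the version your team agreed on
minikube addons enable ingress                # enable any addons your apps rely on
kubectl config use-context minikube           # make sure kubectl targets the local cluster
kubectl get nodes                             # sanity check: the node should be "Ready"
```

Pinning the Kubernetes version in the instructions keeps all local environments consistent, which avoids hard-to-debug "works on my machine" differences.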
If you use a remote Kubernetes environment, you face a wholly different challenge: While it is easy to create cloud-based work environments, you need to determine how the developers get access to them. Some companies may decide to let admins centrally create the environments and individually give the developers access to them. However, this turns the creation process into a bottleneck that can slow down the whole development workflow. Therefore, it is more efficient to let developers create these environments on-demand. Companies such as Spotify have built internal self-service namespace platforms for exactly this use case. However, there are also off-the-shelf software solutions such as loft that provide this functionality for any Kubernetes cluster.
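Whatever tool provides the self-service layer, what it creates under the hood is usually just a namespace plus access rights to it. As a rough sketch of what such an on-demand environment boils down to (all names here are made up for illustration):

```yaml
# Illustrative per-developer environment: a namespace plus scoped access
apiVersion: v1
kind: Namespace
metadata:
  name: dev-alice            # hypothetical naming scheme: dev-<username>
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev-alice-admin
  namespace: dev-alice
subjects:
- kind: User
  name: alice                # must match the identity from your cluster's auth setup
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: admin                # built-in ClusterRole, limited to this namespace by the RoleBinding
  apiGroup: rbac.authorization.k8s.io
```

Because the RoleBinding scopes the built-in `admin` ClusterRole to a single namespace, developers can work freely in their own environment without being able to affect anyone else's.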
After the developers have access to a Kubernetes work environment, the actual development phase needs to be figured out. By development phase, I mean what is sometimes described as the “inner loop” of software engineering, i.e. coding, building, and observing/testing the results.
#How to interact with Kubernetes
While most engineers have no experience in setting up a Kubernetes environment (step 1), they are very familiar with the software development phase. Still, when Kubernetes is introduced, their workflows may change significantly compared to what they are used to.
Developers normally know a lot about the peculiarities of the programming languages, frameworks, and tools they use, but Kubernetes brings in some new challenges that are mostly independent of the actual software: How can the software be containerized? How can the containers be built and started in Kubernetes? How can code changes be deployed into the containers, so developers see their changes? How can the software be debugged?
All of these questions need to be answered, and the answers should become standardized workflows. For this task, it often makes sense that not every developer is involved in solving the initial one-time problems (such as the initial containerization) and that only one engineer or a small team sets up the new workflows. (In another blog post, I describe this role of a Developer Experience Owner (DXO) in more detail.)
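The initial containerization, for example, usually amounts to a Dockerfile that is written once and then reused by every developer on the team. A minimal sketch for a Node.js service (the base image, port, and entrypoint are assumptions about the stack, not part of any specific project):

```dockerfile
# Hypothetical one-time containerization of a Node.js app
FROM node:16-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production   # reproducible dependency install from the lockfile
COPY . .
EXPOSE 3000                    # port the app is assumed to listen on
CMD ["node", "index.js"]       # assumes index.js is the service entrypoint
```

Once this file exists in the repository, the answer to “how can the software be containerized?” is simply “it already is” for every developer who joins later.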
#Kubernetes development tools
Since many companies face the same issues when introducing Kubernetes into the development phase, several open-source tools have been developed to address them. Examples are DevSpace, Skaffold, Tilt, and Telepresence.
All of them solve similar problems but with slightly different conceptual and technical approaches: DevSpace provides bi-directional, real-time code synchronization that allows for hot reloading of containers, so that code changes are visible without restarting the containers. (The other tools partly offer this now, too.) It is also focused on development scenarios in cloud-based Kubernetes environments. In contrast, Tilt specializes in development with local Kubernetes environments, while Telepresence enables developing a locally running application that can interact with other parts running remotely. Finally, Skaffold is similar to DevSpace and Tilt but focuses a bit more on fast deployment workflows.
In general, all of these tools serve a similar purpose and are relatively versatile (e.g. Tilt can also be used with remote environments and DevSpace also works with local environments or in CI/CD pipelines). To decide which solution is best for your situation, you should look at all of them yourself as the decision really depends on your preferences and needs.
In any case, you should have a common configuration for the tool you want to use in your team so that it is very easy for developers to use. For example, a developer should only have to run very few commands, such as `devspace dev` or `skaffold debug`, and can then directly start to work with Kubernetes efficiently. Of course, this requires some initial configuration and documentation effort, but this effort will pay off very fast.
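Such a common configuration is typically just a file checked into the repository. As an illustration, a stripped-down devspace.yaml that enables file sync for hot reloading might look roughly like this (the image name and paths are placeholders, and the schema is simplified; check the DevSpace documentation for the exact format of your version):

```yaml
# Illustrative devspace.yaml (simplified; adapt to your DevSpace version)
version: v1beta11
images:
  app:
    image: registry.example.com/myteam/app   # placeholder image name
deployments:
- name: app
  helm:
    componentChart: true
    values:
      containers:
      - image: registry.example.com/myteam/app
dev:
  sync:
  - imageSelector: registry.example.com/myteam/app
    localSubPath: ./src        # sync local code changes into the container...
    containerPath: /app/src    # ...so they appear without rebuilding the image
```

With this file in place, `devspace dev` gives every team member the same deploy-and-sync workflow without anyone having to understand the details.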
#How to deploy to a Kubernetes system
The last step that is very relevant for the development workflow with Kubernetes is deployment: developers need an easy way to push their code to a staging or testing environment and eventually to production.
This challenge in the Kubernetes workflow should be relatively easy to solve as most developers and companies are used to this and already have solutions in place. Still, the process in this phase should be easy and fast for developers, so that they are encouraged to deploy their applications when it is appropriate.
#Kubernetes deployment tools
Possible solutions for deployment to Kubernetes are the previously mentioned development tools, most notably Skaffold and DevSpace, which can also be integrated into more complex CI/CD pipelines. DevSpace, for example, can be configured so that a developer only has to run the command `devspace deploy` to have their code deployed to a pre-specified Kubernetes cluster, where it will be executed. This can be very practical for fast deployments, e.g. to run tests.
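Under the hood, a command like `devspace deploy` simply applies whatever is declared in the deployments section of the shared configuration. A minimal sketch of such a section (the folder name is a placeholder; the schema is simplified):

```yaml
# Illustrative deployments section of a devspace.yaml
deployments:
- name: app
  kubectl:
    manifests:
    - kubernetes/   # apply all Kubernetes manifests in this folder
```

Because the target cluster and manifests are defined once in the config, the deploying developer does not need to know anything about kubectl or the cluster setup.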
To deploy an application to production, more complex continuous integration and deployment solutions exist. Kubernetes is now so common that almost all CI/CD tools support it, so it does not matter much whether a solution is specialized for Kubernetes. Instead, you should again compare different solutions and see which best fits your needs. Good starting points are Jenkins, Codefresh, Travis CI, and Circle CI.
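Regardless of which CI/CD tool you pick, the Kubernetes-facing part of the pipeline usually reduces to building an image and updating the cluster. A rough sketch as a Circle CI job (the image names, deployment name, and credentials handling are placeholders for illustration, not a recommended production setup):

```yaml
# Illustrative .circleci/config.yml fragment
version: 2.1
jobs:
  deploy:
    docker:
      - image: cimg/base:stable   # assumes docker and kubectl are available to the job
    steps:
      - checkout
      - setup_remote_docker
      - run:
          name: Build and push image
          command: |
            docker build -t registry.example.com/myteam/app:$CIRCLE_SHA1 .
            docker push registry.example.com/myteam/app:$CIRCLE_SHA1
      - run:
          name: Deploy to cluster
          command: |
            # assumes cluster credentials are injected via the CI tool's secrets mechanism
            kubectl set image deployment/app app=registry.example.com/myteam/app:$CIRCLE_SHA1
workflows:
  release:
    jobs:
      - deploy:
          filters:
            branches:
              only: main
```

Restricting the deploy job to the main branch keeps feature-branch builds from reaching production while still making every merged change deployable automatically.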
To establish an efficient Kubernetes development workflow, several workflow steps need to be defined and facilitated. The first is to provide the developers with a Kubernetes work environment, which can either run locally or in the cloud. Then, they need easy-to-use Kubernetes dev tools that support the “inner loop” of development, i.e. coding, quick deploying, and debugging. Finally, developers must have an easy way to deploy their developed code to a production environment.
All these steps have in common that they should be standardized and easy for the developers so that the adoption of Kubernetes becomes as smooth as possible. In doing so, you should never underestimate how complicated Kubernetes can be for someone who has never used it before. Therefore, documentation and support are critical throughout the whole process within your organization.