Development in Kubernetes - Local vs. Remote Clusters

Lukonde Mwila

Containerization is on an upward trajectory when it comes to enterprise adoption. This is primarily due to how containers solve the problem of application delivery and portability. They offer a number of benefits over the main alternative, virtual machines (VMs): they're smaller, lighter, faster to start, able to run anywhere, and make better use of resources.

In the context of enterprise software, running containers in production could result in managing up to thousands of containers over time. These containers need to be deployed, managed, connected and updated. This can be achieved manually with a few containers, but a large enterprise application in the cloud would require an entire team dedicated to this.

The problem of managing multiple containers has made room for container orchestration tools like Kubernetes, "an open-source system for automating deployment, scaling, and management of containerized applications."

Cloud-native instructor Nigel Poulton often says, "Kubernetes is Kubernetes is Kubernetes," highlighting the agnostic nature of the platform regardless of where it's running. Despite this platform agnosticism, application development in Kubernetes is not that simple. In this post, you'll learn about the Kubernetes deployment model, then explore the differences between local and remote cluster development, along with their respective advantages and disadvantages.

The Kubernetes Deployment Model

Before deciding where your development workloads will run, developers and DevOps teams should understand the deployment paradigm that Kubernetes is built on.

Most cloud infrastructures that existed before Kubernetes provided a procedural approach (individual steps are defined, then automated) for automating deployment activities. By contrast, Kubernetes uses a declarative approach (only the end goal is defined) to describe the system's desired state.
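
To make the contrast concrete, here is a minimal sketch of the declarative model: the manifest below describes only a desired end state (a hypothetical Deployment named `web` running three nginx replicas), and the control plane works out the steps to reach it. The name and image tag are placeholders.

```yaml
# A minimal Deployment manifest: you declare the desired end state
# (three replicas of an nginx container), not the steps to create them.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                    # placeholder name
spec:
  replicas: 3                  # desired state: three running Pods
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: nginx
          image: nginx:1.25    # assumed image tag
          ports:
            - containerPort: 80
```

Applying this with `kubectl apply -f` hands the "how" to the control plane, which continuously reconciles the cluster, creating, replacing, or scaling Pods until the observed state matches the declared one.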

Teams developing in Kubernetes can consider this a silver lining amid the complexities of a cluster setup. As an orchestration tool based on a declarative model, it offers a layer of consistency whether you're developing on your local machine or in a remote cloud environment.

Developing in Kubernetes with Local Clusters

Options for local Kubernetes development: Minikube, k3s and kubeadm

The Kubernetes journey typically begins on a local machine for most software developers. This does come with hardware and software requirements to get Kubernetes up and running. There's no denying that a local Kubernetes environment doesn't offer the same operational agility as a remote cloud setup. However, it provides users the opportunity to develop in-depth practitioner skills when working with the platform.

A Kubernetes cluster is made up of a bare minimum set of architectural components. Some software teams go through the arduous process of setting up this architecture of control plane and worker nodes themselves, using VMs on a local machine. Doing so will no doubt deepen your knowledge of the underlying pieces.

On the other hand, if velocity is a high priority, some tools help streamline creating a local cluster environment with most of the necessary configurations in place. Some of these tools include Kubeadm, K3s, Minikube, MicroK8s, and Docker Desktop.

Each of these tools simplifies setup and configuration to varying degrees. However, local Kubernetes development still requires a good understanding of the control plane and how it interacts with the nodes in the data plane.
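
As a hedged example of what that setup can look like, kubeadm itself accepts a declarative configuration file, so even a local cluster can be described in YAML. The version, pod CIDR, and node IP below are placeholders you would adapt to your own machine or VM.

```yaml
# Minimal kubeadm configuration for a single-node local cluster (illustrative values).
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: "1.28.0"        # example version
networking:
  podSubnet: "10.244.0.0/16"       # must match the CNI plugin you install
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    node-ip: "192.168.56.10"       # example VM address
```

You would pass this file to `kubeadm init --config`, then install a CNI plugin and, for a single-node setup, remove the control plane taint so workloads can schedule. Tools like minikube, K3s, or MicroK8s collapse these steps into a single command, which is exactly the streamlining referred to above.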

When to Use Local Kubernetes Environments

Development on a local Kubernetes cluster can be beneficial for software teams in certain use cases:

  • Proof of concepts and experimentation: Local environments eliminate the need to provision the required cloud infrastructure, configure security, and handle other related administrative tasks. Users can focus on experimentation and proof of concepts (POCs) in a very low-risk environment.
  • Small teams: Making use of local clusters in a team will require alignment and standardization. In larger teams, there's a higher chance of configuration drift with the differences in local machines and their respective software and configuration setups. Local clusters are suitable when you have a small team of experienced Kubernetes practitioners who can standardize their cluster configurations based on the hardware being used.
  • Team of experts: Cluster management and configuration are complex. When the user owns every component of the cluster, handling setup issues and customizations is best left to seasoned or knowledgeable Kubernetes practitioners.
  • Low computation requirements: If the development environment for an application doesn't have high computation requirements, then a local cluster will be suitable. Local clusters are a viable solution if the limitations of a local setup won't hinder development.
Advantages of a Local Cluster

  • No operational costs: Making use of local clusters removes infrastructure costs from development. Cloud environments incur operational costs, which require budgeting and reporting to monitor and track efficient usage of resources. These business-related processes can get in the way of development velocity, so local clusters help teams stay focused on optimizing the cluster environment and the application running on it.
  • Environment isolation: When working with Kubernetes clusters, software teams should understand the blast radius if something goes wrong due to a configuration error. Teams working with local clusters don't have this concern, because each environment is specific to a developer's machine, isolating users from the consequences of others' mistakes.
  • Full cluster access: Cluster management is complex, and best practices dictate that users have controlled access to the resources that can be created on the cluster. However, in a development environment, users will require more freedom and access to deploy resources without constraints that may hinder them from making progress.
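
To illustrate that last point, here is a minimal sketch of the kind of scoped access a shared cluster typically enforces; on a local cluster you usually hold the admin kubeconfig, so none of these constraints apply. The namespace, role, and group names are hypothetical.

```yaml
# Controlled access on a shared cluster: developers in the team-a group
# may manage Deployments and inspect Pods, but only inside their namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: team-a-developer          # hypothetical role
  namespace: team-a               # hypothetical namespace
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  - apiGroups: [""]
    resources: ["pods", "pods/log"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-developer-binding
  namespace: team-a
subjects:
  - kind: Group
    name: team-a-devs             # hypothetical group
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: team-a-developer
  apiGroup: rbac.authorization.k8s.io
```
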
Disadvantages of a Local Cluster

  • Hardware constraints: Local clusters set up by software teams will be constrained by the user's hardware. Kubernetes clusters are compute-intensive and will have to be modified and configured to run optimally within the limitations of the local machine. The alternative is to increase capital expenditure by investing in more capable hardware for the teams running clusters locally.
  • Kubernetes knowledge is required: Despite there being tools to simplify spinning up a cluster with the relevant configurations, cluster management is complex. Development teams working with local clusters must know the Kubernetes architecture and how its various components interact.
  • Potential environment disparity: Running a cluster locally presents potential challenges for configuration drift when eventually deploying to a cloud environment. For example, cloud environments will have more complex network architectures, features, and components that have implications for the cluster configuration. As a result, the cluster declarations on the local machine will not be a realistic setup for the cloud environment. Developers are likely to run into issues developing applications for a local cluster environment that is very different from the cloud destination. This could lead to time-consuming and laborious tasks to bridge the gaps between the development and production environments.
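
One common way to contain that drift, offered here as a sketch rather than a prescription, is to keep a shared base of manifests and express the local and cloud differences as overlays, for example with Kustomize. The directory layout and the Deployment name `web` are assumptions.

```yaml
# overlays/local/kustomization.yaml (illustrative layout)
# The base holds the manifests shared by every environment; this overlay
# only shrinks the footprint so the app fits a laptop-sized cluster.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
patches:
  - target:
      kind: Deployment
      name: web                  # hypothetical Deployment from the base
    patch: |-
      - op: replace
        path: /spec/replicas
        value: 1                 # one replica locally; the cloud overlay sets more
```

The cloud overlay would instead patch in realistic replica counts, ingress settings, and storage classes, keeping the two environments explicitly different rather than silently divergent.
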
Developing in Kubernetes with Remote Clusters

Options for k8s remote clusters: EKS, GKE and AKS

Public cloud providers such as GCP, AWS, and Azure offer managed Kubernetes platforms as a service, namely GKE, EKS, and AKS respectively. These services make it easy to run Kubernetes in the cloud without having to set up, provision, or maintain the control plane.

These managed clusters are conformant with upstream Kubernetes by default. The respective cloud providers have teams of experts responsible for provisioning, running, managing, and auto-scaling the Kubernetes control plane and etcd nodes.

With this model, software developers aren't required to have in-depth working knowledge of Kubernetes to run applications on it. The goal is to take away the management overhead of configuring, maintaining, and scaling the cluster so that software teams can focus on optimizing their software applications.
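
With Amazon EKS, for instance, the whole cluster can itself be declared in a short eksctl config; the cluster name, region, and instance sizes below are placeholders.

```yaml
# Illustrative eksctl ClusterConfig: the managed control plane is provisioned
# for you, so you only describe the worker nodes you want.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: dev-cluster              # hypothetical cluster name
  region: eu-west-1              # example region
managedNodeGroups:
  - name: dev-workers
    instanceType: t3.medium      # example instance type
    minSize: 2
    maxSize: 4
    desiredCapacity: 2
```

Running `eksctl create cluster -f` against a file like this stands up both the control plane and the node group; GKE and AKS offer equivalent declarative or CLI-driven flows.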

When to Use Remote Kubernetes Clusters

Development on a remote Kubernetes cluster can benefit software teams in the following cases:

  • Specific hardware requirements: Software teams working on a compute-intensive application can benefit from the on-demand resources in the cloud. Remote clusters make it easier for teams to spin up resources that meet the application's requirements. In addition to this, the cluster's control plane will automatically scale up and down to meet the application's demands.
  • Minimal Kubernetes knowledge and experience: Remote clusters are designed to take the responsibility of cluster management out of the hands of software development teams. As mentioned earlier, the goal is to enable developers to focus on application development. Remote clusters reduce the need to have a deep understanding of k8s.
  • Microservice teams: Microservice teams with a strong ownership model will, in some cases, choose technology stacks that are best suited for their respective microservices. Therefore, teams have more autonomy to piece together the tools that they believe will enhance the service. A remote cluster would complement this technical diversity to cater to the needs of the different microservices while still maintaining configuration standards across environments.
Advantages of a Remote Cluster

  • Control plane management: In the case of the managed services by cloud providers, development teams are no longer hindered by having to know Kubernetes in-depth to develop and deploy an application to a remote cluster. In addition to this, optimal control plane configuration, security, and scalability are taken care of so that developers can focus on application development.
  • Realistic environment: One of the most significant advantages of a remote cluster environment is that it can be set up to mirror the production environment as closely as possible. The disparities between environments will be smaller than those between a local setup and a remote one. As a result, less time will be spent on configuration changes when promoting the application from development to production.
  • No compute limitations: The operational agility the cloud offers is one of its major value propositions. Development teams have access to a variety of computation offerings to meet the application's needs. This computing power is available to developers on-demand.
  • Access management: Remote clusters allow you to control access to the control plane's API. For example, Amazon EKS will enable you to map your account's Identity and Access Management (IAM) users and roles to Kubernetes role-based access control (RBAC). This fine-grained access increases security and governance by integrating the cluster permissions with pre-existing cloud security measures that are in place for developers. Furthermore, cluster administrators can specify the actions that can be performed by developers who can access the cluster.
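
As a hedged sketch of the EKS example above, IAM roles are commonly mapped to Kubernetes groups through the `aws-auth` ConfigMap (newer EKS access entries achieve the same result); the account ID, role ARN, and group name here are placeholders.

```yaml
# aws-auth ConfigMap in kube-system: maps an IAM role onto a Kubernetes group,
# which RBAC RoleBindings can then grant scoped permissions to.
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::111122223333:role/DevTeamRole   # placeholder ARN
      username: dev-team:{{SessionName}}
      groups:
        - team-a-devs            # granted permissions by a RoleBinding in the cluster
```
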
Disadvantages of a Remote Cluster

  • Operational compute costs: Cost optimization is one of the top lures of a cloud environment. However, the associated costs can become significant with the ongoing operational usage of resources for a remote cluster. Software teams can make use of the cloud provider's native tools to help manage costs by adding budget constraints, sending alert notifications, and setting the relevant limits on scaling configurations for development environments. Another way to manage compute costs is to use Loft, which enables software teams to create virtual clusters (vClusters) and namespaces for the respective development teams. Administrators can then place computation limits on the vClusters used by the various development teams (see the quota sketch after this list).
  • Access management: As much as there are benefits to access management in remote clusters, it can present challenges at scale. With large teams that have different access requirements, permissions can become difficult and complex to manage. Loft can also be used as a solution in this scenario: restricting each development team to its own vCluster, with the relevant limits applied, makes it easier to manage multiple teams that need access to a remote cluster.
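
For instance, whether teams are separated by plain namespaces or by vClusters, a ResourceQuota is one hedged example of the kind of computation limit an administrator can place on each team; the numbers below are arbitrary.

```yaml
# Caps the total compute a development team's namespace can consume,
# which keeps a shared remote cluster's costs predictable.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota             # hypothetical quota name
  namespace: team-a              # hypothetical team namespace
spec:
  hard:
    requests.cpu: "8"            # total CPU the team's Pods may request
    requests.memory: 16Gi
    limits.cpu: "16"
    limits.memory: 32Gi
    pods: "50"
```
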
Developing in Kubernetes with a Hybrid Approach

A hybrid approach: kubeadm locally and EKS remotely

In some instances, software development teams may want to rapidly iterate changes to application source code locally while maintaining a realistic reflection of the production environment. Teams aspiring to such a balance can take on a hybrid approach for developing in Kubernetes.

This involves working with a minimally configured local cluster and deploying to a remote cluster environment. Following such a paradigm for cluster development is helpful for teams trying to leverage the advantages of both the local and remote setups highlighted earlier.
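
In practice, much of the hybrid workflow comes down to pointing the same manifests at different clusters by switching kubectl contexts. The kubeconfig sketch below uses placeholder names and endpoints, with credentials omitted.

```yaml
# A trimmed kubeconfig holding one local and one remote context.
apiVersion: v1
kind: Config
clusters:
  - name: local-dev
    cluster:
      server: https://127.0.0.1:6443                             # local API endpoint (example)
  - name: remote-eks
    cluster:
      server: https://ABC123.gr7.eu-west-1.eks.amazonaws.com     # placeholder EKS endpoint
users:
  - name: local-admin
    user: {}                     # credentials omitted in this sketch
  - name: eks-developer
    user: {}                     # typically an exec plugin such as aws eks get-token
contexts:
  - name: local-dev
    context:
      cluster: local-dev
      user: local-admin
  - name: remote-eks
    context:
      cluster: remote-eks
      user: eks-developer
current-context: local-dev
```

Switching with `kubectl config use-context remote-eks` re-targets the same declarative manifests at the remote cluster, which is where the consistency described earlier pays off: the YAML stays the same, only the destination changes.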

Advantages of a Hybrid Approach

  • Benefits of local clusters: With this approach, software developers can benefit from the velocity of local cluster development without straying too far from what the final environment should look like.
  • Minimize risk: Having continuous remote deployments as part of the development cycle will minimize the risk of significant configuration drift when deploying to production.
Disadvantages of a Hybrid Approach

  • High proficiency in Kubernetes required: Software teams must have a high level of proficiency in working with Kubernetes to develop for the two different contexts and switch between them on an ongoing basis without it becoming an impediment.

Conclusion

When it comes to Kubernetes development, both local and remote cluster environments have their advantages and disadvantages. Which is best depends on the application use case, the development team's structure, and its Kubernetes knowledge. As detailed above, local cluster development is best suited to a small team of experts. However, teams should be aware that standardization and configuration drift from production environments are hurdles they will have to overcome.

Remote cluster development, on the other hand, offers an environment that more closely mirrors production. In addition, it provides operational flexibility and speed of development, allowing teams to focus less on cluster and infrastructure management. However, as more development teams get involved in application deployments, access control becomes harder to manage in remote clusters. That's where a tool like Loft can help mitigate risks when it comes to cost management and access control in clusters.

Photo by Guillaume Bolduc on Unsplash.
