Table of Contents
#Kubernetes Multi-Cluster Series
- Part 1: Introduction to Kubernetes Multi-cluster Deployments
- Part 2: Managing Kubernetes Cluster Lifecycle
- Part 3: Authentication and Access Control in Kubernetes
- Part 4: Kubernetes Self-Service and User Experience
- Part 5: Kubernetes Cost Optimization
This article is the fourth part of a series on Kubernetes multi-cluster technology. Part one introduced the series and the goals and responsibilities of multi-cluster setups; part two covered how to manage the cluster lifecycle; and part three looked at access control.
Many organizations want deployment environments that are feature-rich, reliable, and accessible from anywhere, tapping into the power of multi-cluster architectures. With workloads steadily migrating to Kubernetes, the key question now is how to manage multiple clusters within an organization with a focus on self-service and user experience.
As the previous articles illustrate, cluster setups must involve sound planning, design, and deployment. This can be challenging without self-service and good user experience. As a continuation of a multi-part series focused on Kubernetes multi-cluster setups, this article focuses on the importance of the user/developer experience and how to optimize it using techniques such as self-service.
#What is Kubernetes Multi-Cluster?
Kubernetes is an open source platform that creates clusters for running and managing workloads. It supports automated deployments of containerized workloads and services.
Kubernetes multi-cluster is the practice of spinning up several clusters to achieve resource isolation, continued availability, and scalability, allowing organizations to provision their workloads across several clusters rather than just one. This strategy is known as Kubernetes multi-cluster orchestration.
Multi-cluster's potential goes beyond simple static workload deployments: multi-cluster topologies can orchestrate services across multiple locations while increasing infrastructure access flexibility for large teams.
Because multi-cluster topologies can be created on a single physical host, across multiple hosts in the same data center, or across different regions of a single cloud provider, they make it possible to migrate workloads from one cluster to another easily and transparently. Ease of workload migration helps with managing cluster disasters, scaling, critical infrastructure downtime, and optimization.
Kubernetes multi-cluster is achieved when a business coordinates the planning, delivery, and management of several Kubernetes clusters using appropriate tools and processes.
#Why Do You Need Kubernetes Multi-Cluster?
As your organization grows, so too do your business needs, and issues concerning cluster maintenance become increasingly likely.
Organizations therefore need Kubernetes multi-cluster topologies to run workloads in different regions, reduce the blast radius of cloud failures, meet regulatory requirements, introduce isolation, and enforce security around clusters and tenants.
Running your workloads in a single cluster is fine for smaller deployments. However, some business needs require advanced deployment models, and for such scenarios, a multi-cluster architecture is suitable and improves the performance of your workloads.
Traditionally, setting up a Kubernetes environment required a company-wide Kubernetes cluster and a dedicated team managing it to give developers access to the cluster, often through isolation and namespaces.
This way, teams could deploy their applications, but administering a cluster shared among different teams posed challenges. For example, how do you grant several teams and individual developers access to the shared cluster? How do you restrict access by role and task so that each team can reach only its own project resources? And how do you control cluster resource utilization so that every team has access while no single team consumes most of the cluster's resources?
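In plain Kubernetes terms, the access-restriction part of this problem is typically handled with namespace-scoped RBAC. As a sketch (the namespace, group, and resource lists below are illustrative, not prescriptive), a cluster admin might scope a team to its own namespace like this:

```yaml
# Role granting access to common workload resources only within the team's namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: team-a            # illustrative team namespace
  name: team-a-developer
rules:
  - apiGroups: ["", "apps", "batch"]
    resources: ["pods", "deployments", "jobs", "services", "configmaps"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
# Bind the role to the team's group from your identity provider
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: team-a
  name: team-a-developer-binding
subjects:
  - kind: Group
    name: team-a               # illustrative group name
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: team-a-developer
  apiGroup: rbac.authorization.k8s.io
```

Because the Role is namespaced, members of `team-a` cannot read or modify resources in other teams' namespaces, which is exactly the kind of boundary a shared cluster needs.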
Enter self-service provisioning as a solution to achieving excellent user experience while addressing traditional provisioning pain points.
The end game of self-service provisioning is to enable an environment where developers can spin up Kubernetes cluster resources with great ease and at any time.
The first level of working with Kubernetes is setting up or having access to a Kubernetes development environment. Orchestrating Kubernetes clusters may be relatively easy. However, achieving self-service Kubernetes for developers at the enterprise level remains a challenge.
Achieving a secure self-service Kubernetes provisioning environment requires defined workload policies, running virtual clusters, and managing add-on services. Self-service provisioning can help solve problems arising from single or shared cluster environments.
#Benefits of Self-Service Provisioning
Allowing for self-service virtual clusters and namespace provisioning has advantages for both developers and cluster admins.
Self-service provisioning gives developers the independence and velocity needed for building fast-paced cloud-native applications, CI/CD pipelines, and other experimentation. With self-service namespaces, developers no longer need to wait for admins to set up their working environments.
For cluster admins, self-service provisioning brings stability, security, and reduced maintenance costs. Using a self-service platform to create namespaces and virtual clusters all but eliminates the potential for human error, which improves the stability of the underlying Kubernetes cluster. Additionally, cluster admins need only install the self-service platform once and can then manage any cluster through it, reducing the burden on admins who often have to manage large clusters.
#Loft for Self-Service
Developers often require non-siloed access to Kubernetes environments for flexibility and scalability when building high-quality cloud-native services, a solution that Loft can provide. Loft makes it possible and cost effective for your developer teams to experience the full value of Kubernetes beyond cluster management.
Loft supports any Kubernetes cluster environment, whether local Kubernetes setup, cloud setups, or any other, by providing a self-service system that allows developers to create virtual clusters and namespaces seamlessly on-demand.
Loft self-service is easy to use. Once your Kubernetes cluster is running in the cloud or locally, with the proper Kubernetes context set, you can install Loft as an admin so that your developers can create the multiple virtual clusters they require. Loft's mechanism for self-service provisioning is the virtual cluster: a cluster running inside another cluster. When a developer requests a virtual cluster, Loft provisions it inside the shared company-wide Kubernetes cluster.
Under the hood, Loft creates a new namespace and deploys a lightweight Kubernetes distribution, based on K3s, inside that namespace. Whether your existing Kubernetes environment runs on EKS, OpenShift, or even Rancher, and already has cluster-wide resources configured, Loft can connect to it alongside all your other clusters.
Loft supports self-service provisioning through the Loft UI, the Loft CLI, kubectl, and even GitOps.
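As a sketch of the CLI path (the URL and resource names here are illustrative, and exact flags can vary between Loft versions), a developer might log in and create a space or virtual cluster on demand:

```shell
# Log in to your organization's Loft instance (URL is illustrative)
loft login https://loft.example.com

# Create a self-service namespace ("space" in Loft terminology)
loft create space my-space

# Or create a virtual cluster; Loft also switches your kubeconfig context to it
loft create vcluster my-vcluster

# Subsequent kubectl commands now target the newly created environment
kubectl get namespaces
```

No admin ticket is involved at any step, which is the point of self-service provisioning.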
#Maintaining a Frictionless System
Friction is common in the developer and operations space. A simple example is when the dev team requires a new set of tools to be installed, or a test environment to be created, and the only way to have that request met is to file a service ticket and wait a day or two. That wait is friction. Manual processes in services run by a team also introduce friction.
An organization’s best effort at maintaining a frictionless system (or at least reducing friction) is to implement self-service provisioning for everything. Furthermore, an organization will need to support developers through multiple tools, such as Skaffold, that automatically build and ship their applications to Kubernetes clusters, providing a good developer experience.
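As one concrete example of such tooling, a minimal Skaffold configuration (the image name and manifest path below are illustrative) lets a developer rebuild and redeploy on every code change by running `skaffold dev`:

```yaml
apiVersion: skaffold/v2beta29
kind: Config
build:
  artifacts:
    - image: registry.example.com/my-app   # illustrative image name
deploy:
  kubectl:
    manifests:
      - k8s/*.yaml                         # illustrative path to Kubernetes manifests
```

With this in place, the build-push-deploy loop runs automatically against whatever cluster the developer's current kubeconfig context points at, including a self-service virtual cluster.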
Many enterprises are already using CI/CD mechanisms and tools. Friction due to making changes to services and deploying them manually to staging or production environments can be eliminated by shifting the development approach and using CI/CD tools for better development and management.
#Implementing Self-Service Through Vcluster
Loft vcluster is an open source tool that simplifies Kubernetes cluster operations. A vcluster (virtual cluster) is logically a complete Kubernetes cluster that runs inside a namespace of another physical Kubernetes cluster (the host cluster).
Vcluster is the first virtualization concept in Kubernetes that supports the creation of virtual Kubernetes clusters orchestrated in isolated namespaces without any need for admin privileges.
Additionally, Loft virtual clusters run your tenants with entirely separate control planes so that any upgrade can be achieved independently.
Loft self-service provisioning involves on-demand creation of virtual Kubernetes clusters that run inside regular namespaces (lightweight multiple clusters). A self-service vcluster solution fulfills all needs related to flexibility and scalability of development, testing, and production environments, even more so where multiple users are involved.
Using vclusters creates a higher level of isolation, letting teams have more customizable Kubernetes environments without starting a new physical cluster each time. Vclusters feel like "real" clusters to developers and generally provide the same capabilities as regular clusters. You can therefore use vcluster to spin up new Kubernetes clusters inside an existing, operational cluster.
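A minimal vcluster workflow might look like the following (the cluster and namespace names are illustrative, and flags can vary across vcluster versions):

```shell
# Create a virtual cluster inside the "team-a" namespace of the host cluster
vcluster create my-vcluster --namespace team-a

# Connect to it; this switches your kubeconfig context to the virtual cluster
vcluster connect my-vcluster --namespace team-a

# Work inside the virtual cluster as if it were a full, dedicated cluster
kubectl create namespace demo
kubectl get namespaces
```

From the host cluster's perspective, all of this activity stays contained within the `team-a` namespace, which is what makes the isolation cheap.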
Self-service provisioning with vclusters also minimizes the effort required from admins after the initial setup, providing a good admin experience.
#Improving Overall Developer Experience
When working in a self-service provisioning environment, you may need to define boundaries and limits on the resources available to each team. Defining resource quotas for teams and groups helps keep your budget in check.
Beyond boundaries and limits, setting alerts that trigger when teams reach the threshold for their allocated resources also improves the developer experience by making consumption levels transparent.
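In plain Kubernetes terms, such boundaries can be expressed as a ResourceQuota on each team's namespace (the namespace name and numbers below are illustrative):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a          # illustrative team namespace
spec:
  hard:
    requests.cpu: "10"       # total CPU the team's pods may request
    requests.memory: 20Gi    # total memory the team's pods may request
    limits.cpu: "20"         # total CPU limit across the namespace
    limits.memory: 40Gi      # total memory limit across the namespace
    pods: "50"               # cap on the number of pods in the namespace
```

Once the quota is in place, the API server rejects new workloads that would push the namespace past these totals, so no single team can starve the shared cluster.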
Developing a product involves writing logic for new features, troubleshooting, and fixing bugs. If the right tools are not put in place for a developer’s local environment, the developer might as well ignore Kubernetes and only interact with it at the end of the pipeline.
As an admin, you need to integrate tools that will allow developers to work better and easily with Kubernetes in their development workspace. Providing local Kubernetes cluster tools such as minikube doesn’t create as good a developer experience as do tools like Loft. However, a tool like minikube is much better compared to DIY Kubernetes setups.
You should also simplify developers' operations by letting them install tools that automatically build and ship their applications to Kubernetes clusters, without locking them into company-mandated tools.
To get the best out of situations where developers work with Kubernetes for extended periods, an organization needs to implement self-service and a good user experience. Teams should therefore adopt the right tools and platforms, eliminate friction, champion a self-service provisioning mindset, and stay aware of their developers' working environments.
This article explained self-service provisioning and how to achieve good developer experience. The article also discussed why user/developer experience is important and how it can be optimized using self-service vclusters and other techniques.