High-Velocity Engineering with Virtual Kubernetes Clusters

Vishnu Chilamakuru

Kubernetes, an open-source container-orchestration system for automating application deployment, scaling, and management, has matured rapidly in recent years. It has expanded well beyond its original operations-focused use, and it will likely continue to do so.

While this fast growth is impressive, it also means that the ecosystem needs to quickly evolve to solve the challenges of using Kubernetes for other scenarios, such as development or testing.

As your organization grows and you integrate Kubernetes more fully into your daily workflow, your needs may grow more complex. You probably began with a single cluster for everything, but now you need multiple clusters, such as one for testing and one for specific workloads. As the number of clusters increases, so does the work required to address isolation, access control, admin effort, cost efficiency, and the management of additional environments. Virtual Kubernetes clusters, which can be created and disposed of in seconds, are one solution to this issue.

In this post, you will learn what vclusters are and how to use them to enable high-velocity engineering, efficiently addressing key challenges like the following:

  • Creating and disposing of new environments
  • Launching and managing environments with minimal admin effort
  • Utilizing resources in a cost-efficient manner

    Treat Resources like Cattle

    A popular mindset is to treat cloud infrastructure as "cattle, not pets," meaning resources should be treated as replaceable: when one fails, you replace it rather than nurse it back to health. The phrase was coined by Microsoft engineer Bill Baker in 2012 during his presentation on scaling up versus scaling out, and it captures how the treatment of servers has changed over time. Gavin McCance later popularized it when talking about the OpenStack cloud at CERN.

    Pets Model

    Each server is given a name in the pets service model, like zeus, ares, hades, poseidon, and athena. They are unique and lovingly cared for, and when they get sick, you nurse them back to health. You scale them up by making them bigger, and when they are unavailable, everyone notices.

    Examples of pet servers include mainframes, solitary servers, load balancers and firewalls, and database systems.

    Cattle Model

    In the cattle service model, the servers are given identification numbers, like web01, web02, web03, web04, and web05, just as cattle are tagged. Each server is almost identical to the others, and when one gets sick, you replace it with another one. You scale them by creating more of them, and when one is unavailable, no one notices.

    Examples of cattle servers include web server arrays; NoSQL clusters; queuing clusters; search clusters; caching reverse proxy clusters; multimaster data stores, like Cassandra and MongoDB; and big-data cluster solutions.

    Evolution of the Cattle Model

    The cattle service model has evolved from the Iron Age (bare-metal rack-mounted servers) to the Cloud Age (virtualized servers that are programmable through a web interface).

  • Iron Age of computing: there was no concept of hardware virtualization. Robust configuration-management tools, like Puppet or Chef, allowed operations teams to configure systems using automation.
  • First Cloud Age: virtualization was extended to offer Infrastructure as a Service (IaaS) that virtualized the entire infrastructure (networks, storage, memory, and CPU) into programmable resources. Popular platforms offering IaaS are Amazon Web Services, Microsoft Azure, and Google Cloud Platform.
  • Second Cloud Age: automation was built to virtualize aspects of the infrastructure. This allows applications to be segregated into isolated environments without the need to virtualize hardware, which in turn duplicates the operating system per application. Examples of this are Linux Containers and Docker.
  • Kubernetes and similar technologies have now evolved to allocate resources for containers and schedule these containers across a cluster of servers. These tools give rise to immutable production, where disposable containers are configured at deployment.

    Introducing vclusters

    “Virtual clusters are fully working Kubernetes clusters that run on top of other Kubernetes clusters. Compared to ‘real’ clusters, virtual clusters do not have their own node pools. Instead, they are scheduling workloads inside the underlying cluster while using a separate control plane.”—“What are Virtual Kubernetes Clusters?”

    vcluster architecture

    High-Velocity Engineering with vclusters

    In the software development and release cycle, software moves through multiple environments, such as local, dev, test, and preproduction, before it is released to production. These environments should closely mirror production to avoid library and dependency version conflicts.

    With local Kubernetes environments, such as minikube or K3s, developers can create Kubernetes clusters on their local computers. The upside of this approach is that you have complete control over your environment. But this can frequently leave you struggling with management and setup. It also does not resemble cloud-based environments closely enough when there is a dependency on multiple downstream or upstream services.

    A cloud-based environment can address this issue by removing those setup pain points, but it slows down the deployment and testing of multiple versions of your software. For example, suppose your team is working on three new features simultaneously. You can deploy and test only one version at a time, making the process sequential, and launching three parallel environments is resource-intensive and costly.

    Using vcluster, you can quickly launch and delete these environments, thus speeding up the development and testing process.

    How vcluster Works

    Each developer gets an individual virtual cluster with full admin access to use however they please. They can change all available configurations, even the Kubernetes version, independently from other users working on the same physical cluster.

    Since every development or QA environment is running on the same physical cluster, only one or two clusters are needed for all engineers, which significantly reduces the workload on the sysadmins. The isolation helps prevent the underlying cluster from breaking due to developer misconfigurations, and the cluster won’t require additional installations and add-ons.

    Developers and testers can create virtual clusters as needed, as long as they stay within their resource limits, instead of waiting for IT to provide infrastructure access.
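
    One common way to enforce those limits is a standard Kubernetes ResourceQuota on the host namespace that backs a developer's virtual cluster. Since every pod the vcluster schedules lands in that namespace, the quota caps the developer's total usage. A minimal sketch (the namespace name and limit values here are illustrative, not prescriptive):

    # Hypothetical quota for the host namespace backing one developer's vcluster
    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: dev-vcluster-quota
      namespace: host-namespace-1   # host namespace backing the vcluster
    spec:
      hard:
        requests.cpu: "4"           # total CPU requests across all synced pods
        requests.memory: 8Gi
        limits.cpu: "8"
        limits.memory: 16Gi
        pods: "20"                  # cap on the number of pods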

    The virtual cluster consists of core Kubernetes components, like an API server, controller manager, and storage backend (such as etcd, SQLite, or MySQL). To reduce virtual cluster overhead, vcluster builds on K3s, a fully working, certified, and lightweight Kubernetes distribution that compiles the components into a single binary and disables all unneeded features.

    That lightweight configuration makes vcluster fast, and it uses few resources because it’s bundled in a single pod. Because pods are scheduled in the underlying host cluster, there is no performance degradation.

    vcluster splits up large multi-tenant clusters into smaller vclusters to reduce overhead and increase scalability. This dramatically decreases pressure on the underlying Kubernetes cluster since most vcluster API requests and objects will not reach the host cluster.

    Virtual clusters also save on cloud computing costs, as the underlying environment is shared. Additional features, such as automatic sleep mode, make this approach even more cost-efficient since the idle times seen in other cloud-based approaches can be nearly eliminated.

    Quick Start Guide

    vcluster works with any Kubernetes cluster, such as Amazon Elastic Kubernetes Service (EKS), Google Kubernetes Engine (GKE), Azure Kubernetes Service (AKS), or DigitalOcean Kubernetes. To install vcluster, you need the following prerequisites:

  • kubectl: check via kubectl version command
  • helm v3: check via helm version command
  • a working kube-context with access to a Kubernetes cluster: check via kubectl get namespaces command

    1. Download vcluster CLI

    vcluster can be downloaded using one of the following commands based on your operating system:

    Mac (Intel/AMD)

    curl -s -L "https://github.com/loft-sh/vcluster/releases/latest" | sed -nE 's!.*"([^"]*vcluster-darwin-amd64)".*!https://github.com\1!p' | xargs -n 1 curl -L -o vcluster && chmod +x vcluster;
    sudo mv vcluster /usr/local/bin;

    Linux (Intel/AMD)

    curl -s -L "https://github.com/loft-sh/vcluster/releases/latest" | sed -nE 's!.*"([^"]*vcluster-linux-amd64)".*!https://github.com\1!p' | xargs -n 1 curl -L -o vcluster && chmod +x vcluster;
    sudo mv vcluster /usr/local/bin;

    Windows (PowerShell)

    md -Force "$Env:APPDATA\vcluster"; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.SecurityProtocolType]'Tls,Tls11,Tls12';
    Invoke-WebRequest -UseBasicParsing ((Invoke-WebRequest -URI "https://github.com/loft-sh/vcluster/releases/latest" -UseBasicParsing).Content -replace "(?ms).*`"([^`"]*vcluster-windows-amd64.exe)`".*","https://github.com/`$1") -o $Env:APPDATA\vcluster\vcluster.exe;
    $env:Path += ";" + $Env:APPDATA + "\vcluster";
    [Environment]::SetEnvironmentVariable("Path", $env:Path, [System.EnvironmentVariableTarget]::User);

    To install on other operating systems, refer to the installation instructions in the vcluster documentation. Alternatively, you can download the binary for your platform from the GitHub Releases page and add it to your PATH.

    2. Verify Installation

    To confirm that the vcluster CLI is successfully installed, test using this command:

    vcluster --version
    

    3. Create a vcluster

    Create a virtual cluster vcluster-1 in namespace host-namespace-1:

    # By default vcluster will connect via port-forwarding
    vcluster create vcluster-1 -n host-namespace-1 --connect
    
    # OR: Use --expose to create a vcluster with an externally accessible LoadBalancer
    vcluster create vcluster-1 -n host-namespace-1 --connect --expose
    
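    If you created the vcluster without the --connect flag, you can connect to it later with the CLI. At the time of writing, this writes the access credentials to ./kubeconfig.yaml in the current directory; flag behavior may differ across vcluster versions, so check your CLI's help output:

    # Retrieve credentials for a running vcluster (writes ./kubeconfig.yaml)
    vcluster connect vcluster-1 -n host-namespace-1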

    Check the vcluster docs to find out how to deploy a vcluster using Helm or kubectl instead.
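
    As a rough sketch of the Helm route (the chart repository and chart name below match the vcluster docs at the time of writing; verify them against the current documentation before use):

    # Add the Loft chart repository and deploy a vcluster into its own namespace
    helm repo add loft-sh https://charts.loft.sh
    helm repo update
    helm upgrade --install vcluster-1 loft-sh/vcluster \
      --namespace host-namespace-1 --create-namespace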

    4. Use the vcluster

    Run this in a separate terminal:

    # Point kubectl at the kubeconfig file written by vcluster
    export KUBECONFIG=./kubeconfig.yaml
    
    # Run any kubectl, helm, etc. command in your vcluster
    kubectl get namespace
    kubectl get pods -n kube-system
    kubectl create namespace demo-nginx
    kubectl create deployment nginx-deployment -n demo-nginx --image=nginx
    kubectl get pods -n demo-nginx
    
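    Behind the scenes, the pods you create inside the vcluster are synced into the host namespace. In a terminal whose kube-context points at the host cluster, you can see both the vcluster control plane and the synced workloads (the syncer renames synced pods to avoid collisions, so the exact names will vary):

    # Run against the HOST cluster, not the vcluster
    kubectl get pods -n host-namespace-1
    # Expect the vcluster control-plane pod plus synced copies of the pods
    # created inside the vcluster, such as the nginx deployment's pod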

    5. Clean up resources

    vcluster delete vcluster-1 -n host-namespace-1
    

    Conclusion

    Using vclusters can help you tackle challenges like environment setup, configuration, and dependency management when you use Kubernetes. Virtual clusters give developers secure, flexible, and cost-efficient Kubernetes access without consuming excessive resources, thus increasing your organization's engineering velocity.

    To learn more or get started with virtual clusters, check out the vcluster site.

