Ephemeral Kubernetes Environments on CI/CD Systems with vCluster

Damaso Sanoja

An increasing number of GitOps teams are recognizing the need to construct their own automated Kubernetes CI/CD platform. This trend arises from the desire to faithfully replicate the production environment, create customized app testing workflows, gain better control over resources, achieve better security, and manage costs effectively.

In this sense, vCluster is a powerful open source solution developed by Loft that allows organizations to launch virtual Kubernetes clusters with a single command. Such convenience can be useful for several use cases, including achieving true Kubernetes multitenancy isolation or building ephemeral Kubernetes environments for CI/CD purposes. This tutorial focuses on the latter.

This tutorial provides an overview of the challenges involved in building a Kubernetes-based CI/CD platform. Additionally, you'll learn what makes vCluster an ideal tool for Kubernetes CI/CD pipelines. Finally, you'll gain firsthand experience leveraging vCluster for efficient CI/CD in your Kubernetes environment.

Challenges in Building a Kubernetes-Based CI/CD Platform

Building a Kubernetes-based CI/CD platform presents several challenges. The time it takes to tear down and start up environments can substantially disrupt your workflow. Complexity in resource management often arises due to the dynamic nature of Kubernetes, with resource allocation and optimization becoming a tedious task.

Moreover, achieving consistency and security across multiple environments is another hurdle, as each environment may require different dependencies or access control levels. Managing persistent data could also pose a significant challenge due to the ephemeral nature of Kubernetes pods, which can lead to data loss if not handled correctly.

So, how can vCluster help overcome these challenges?

Why Consider vCluster for Your CI/CD Pipeline?

A full introduction to vCluster deserves its own blog post. For now, the best way to assess its suitability for CI/CD pipelines is to contrast its features with the challenges described in the previous section:

  • Deployment speed: Ideally, your team should be able to launch CI/CD pipelines quickly. As you will see shortly, vCluster makes this possible with a simple command.
  • Resource management: Each virtual cluster is fully customizable, either via Helm chart values or by applying Kubernetes manifests on initialization. This allows your team to assign the resources it deems appropriate to each Kubernetes CI/CD pipeline on a case-by-case basis (see the configuration sketch after this list).
  • Consistency across environments: vCluster enables your team to deploy fully isolated Kubernetes environments that live within a namespace. This makes it easy to create environments that are consistent with the production environment or any specialized CI/CD pipeline intended for development, A/B testing, etc.
  • Persistent data: Behind the scenes, vCluster uses K3s by default. This allows GitOps teams to configure virtual clusters that use datastores other than etcd, with or without persistent volumes, simplifying persistent data management.
  • Security and performance: For vCluster, security and performance are paramount. Each virtual cluster runs its own API server and data store, separate from the host cluster, and you can go further by enabling isolated mode, which enforces stricter workload isolation through Pod Security Standards, resource quotas, and network policies.

    The list of vCluster benefits goes on and on. However, nothing beats experiencing these benefits firsthand.
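
    For illustration, here's a rough sketch of what a per-pipeline configuration could look like as Helm values. The key names shown (syncer.resources, vcluster.resources, storage.persistence) are assumptions based on the 0.15.x vcluster (k3s) chart and may differ in other chart versions, so verify them against the chart's own values.yaml:

    # Hedged sketch: per-pipeline Helm values for a virtual cluster.
    # Key names are assumptions based on the 0.15.x vcluster (k3s) chart;
    # verify them against the chart's values.yaml before using.
    syncer:
      resources:
        requests:
          cpu: 100m
          memory: 128Mi
    vcluster:
      resources:
        requests:
          cpu: 200m
          memory: 256Mi
    storage:
      # Ephemeral CI/CD environments often don't need a persistent volume
      persistence: false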

    Getting Started with vCluster

    In this section, you'll install the vcluster CLI, create a virtual cluster within your Kubernetes cluster, and configure vCluster to suit your organization's specific requirements.

    Prerequisites

    You'll need the following for the tutorial:

  • An up-and-running Kubernetes cluster. This tutorial was tested using k3d on an Amazon EC2 t2.medium instance; however, it should also work on any local Kubernetes distribution.
  • Access to the Kubernetes cluster using kubectl from your local machine.
  • Helm, Terraform, and the Argo CD CLI configured on your local machine.

    Setting Up the vCluster CLI

    You can use the vCluster documentation to install the vcluster command-line tool on macOS (Intel or Apple Silicon), Linux (AMD64 or ARM64), or Windows machines.

    For instance, if you have a Mac, you can install vcluster locally using Homebrew as follows:

    brew install vcluster
    

    To check that everything went as expected, run vcluster --version:

    $ vcluster --version
    vcluster version 0.15.7
    

    Spinning Up Your First Virtual Cluster

    You can now start deploying virtual clusters to your Kubernetes cluster.

    You'll notice that the documentation mentions three different ways to create a virtual cluster: the CLI, Helm charts, and kubectl. This provides you with the flexibility to use vCluster in different scenarios.

    If you want to experiment with vCluster, the vcluster CLI is ideal. Just run vcluster create my-vcluster on your machine. The output should be similar to the following:

    info   Creating namespace vcluster-my-vcluster
    info   Detected local kubernetes cluster k3d. Will deploy vcluster with a NodePort & sync real nodes
    info   Chart not embedded: "open charts/vcluster-0.15.7.tgz: file does not exist", pulling from helm repository.
    info   Create vcluster my-vcluster...
    info   execute command: helm upgrade my-vcluster https://charts.loft.sh/charts/vcluster-0.15.7.tgz --kubeconfig /var/folders/rc/0g2r30c52j152ph518g25mc40000gn/T/3979167366 --namespace vcluster-my-vcluster --install --repository-config='' --values /var/folders/rc/0g2r30c52j152ph518g25mc40000gn/T/1848473749
    done √ Successfully created virtual cluster my-vcluster in namespace vcluster-my-vcluster
    info   Waiting for vcluster to come up...
    info   Starting proxy container...
    warn   Error exposing local vcluster, will fallback to port-forwarding: error starting kind proxy:  exec: "docker": executable file not found in $PATH
    done √ Switched active kube context to vcluster_my-vcluster_vcluster-my-vcluster_k3d-loft-tutorial
    warn   Since you are using port-forwarding to connect, you will need to leave this terminal open
    - Use CTRL+C to return to your previous kube context
    Forwarding from 127.0.0.1:10624 -> 8443
    Forwarding from [::1]:10624 -> 8443
    Handling connection for 10624
    

    As you can see, several things happen behind the scenes:

  • If no further arguments are provided, vcluster expects to find kubeconfig credentials in the default location, ~/.kube/config. These credentials allow it to create the namespace vcluster-my-vcluster using kubectl.
  • Next, vcluster checks whether the appropriate Helm chart is available locally, pulls it from the Helm repository if it isn't, and then creates the virtual cluster my-vcluster in the namespace vcluster-my-vcluster using helm.
  • Finally, vcluster exposes my-vcluster. In this case, it falls back to port forwarding, which is also the default method for remote Kubernetes clusters.

    Now, all you have to do is open another terminal, as indicated. If you run kubectl get ns there, you will get output like this:

    $ kubectl get ns  
    NAME              STATUS   AGE
    default           Active   14m
    kube-system       Active   14m
    kube-public       Active   14m
    kube-node-lease   Active   14m
    

    Congratulations! You're now inside a virtual cluster. To verify it, close the terminal where the port-forward is running (this returns kubectl to your host cluster's context) and list the namespaces again:

    $ kubectl get ns
    NAME                   STATUS   AGE
    default                Active   24h
    kube-system            Active   24h
    kube-public            Active   24h
    kube-node-lease        Active   24h
    vcluster-my-vcluster   Active   16m
    

    The namespace vcluster-my-vcluster is where the virtual cluster lives. Alternatively, you can discover active virtual clusters using vcluster list:

    $ vcluster list
    NAME          NAMESPACE              STATUS    CONNECTED   CREATED                         AGE     CONTEXT            
    my-vcluster   vcluster-my-vcluster   Running               2023-09-17 12:26:46 -0400 -04   20m6s   k3d-loft-tutorial
    

    To recap, the vcluster CLI relies on kubectl and helm to deploy virtual clusters. This means you could also deploy a virtual cluster yourself by running the following:

    kubectl create namespace vcluster-my-vcluster
    helm template my-vcluster vcluster --repo https://charts.loft.sh -n vcluster-my-vcluster | kubectl apply -f -
    

    This approach is interesting because it opens the door to deploying virtual clusters programmatically.

    Deploying Virtual Clusters Programmatically

    Deploying virtual clusters using the vcluster CLI is straightforward. However, manually managing virtual clusters for every team member is inefficient. Instead, GitOps teams can leverage Terraform's Kubernetes and Helm providers to deploy virtual clusters programmatically.

    Suppose you need to deploy three virtual clusters. To do so, create a new folder, navigate to it, and create a file called main.tf with the following code:

    # Store Terraform state as a secret in the host Kubernetes cluster
    terraform {
      backend "kubernetes" {
        secret_suffix = "common"
        config_path   = "~/.kube/config"
      }
    }

    # Configure the Helm provider to talk to the host cluster
    provider "helm" {
      kubernetes {
        config_path = "~/.kube/config"
      }
    }

    # Deploy three virtual clusters using the vcluster Helm chart
    resource "helm_release" "multiple-vcluster" {
      count = 3

      name             = "vcluster-${count.index}"
      namespace        = "vcluster-${count.index}"
      repository       = "https://charts.loft.sh"
      chart            = "vcluster"
      create_namespace = true
    }
    

    The first block tells Terraform to store its state as a secret in the host Kubernetes cluster, and the second configures the helm provider using your kubeconfig. The third block declares the multiple-vcluster resource, which deploys three virtual clusters using Helm.

    You'll also need to initialize Terraform using terraform init and apply the changes with terraform apply. Run kubectl get ns when the process finishes, and you will see the namespaces corresponding to each virtual cluster:

    ...
    vcluster-my-vcluster   Active   3h48m
    vcluster-0             Active   2m12s
    vcluster-1             Active   2m12s
    vcluster-2             Active   2m12s
    

    As before, you can connect via port-forward to any virtual cluster using vcluster connect. You could also use other methods to expose the virtual cluster, such as LoadBalancer, NodePort, or ingress, if required.

    For instance, to connect to vcluster-1 using the CLI, run vcluster connect vcluster-1:

    $ vcluster connect vcluster-1
    done √ Switched active kube context to vcluster_vcluster-1_vcluster-1_k3d-loft-tutorial
    warn   Since you are using port-forwarding to connect, you will need to leave this terminal open
    - Use CTRL+C to return to your previous kube context
    - Use `kubectl get namespaces` in another terminal to access the vcluster
    Forwarding from 127.0.0.1:10404 -> 8443
    Forwarding from [::1]:10404 -> 8443
    Handling connection for 10404
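
    If you'd rather not keep a port-forwarding session open, one of the exposure methods mentioned above can be configured through Helm values instead. Here's a minimal sketch, assuming the 0.15.x chart, where the virtual cluster's service type is controlled by a service.type value (confirm the key in your chart version):

    # Hedged sketch: expose the virtual cluster's API server through a
    # LoadBalancer service instead of port forwarding.
    # The service.type key is assumed from the 0.15.x vcluster chart.
    service:
      type: LoadBalancer

    With Terraform, such a file can be passed to the helm_release resource through its values argument, as shown in the next section.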
    

    Thanks to Terraform, you can create and destroy virtual clusters at your convenience by simply changing count = 3 to the desired number of virtual clusters and running terraform apply again. Beyond that, deploying virtual clusters using Terraform and Helm charts provides additional benefits, such as the flexibility to configure vCluster to your requirements using values.yaml.

    Customizing vCluster Settings

    In the previous examples, vCluster was deployed using its default values. However, let's say your team needs to test an application on a different version of Kubernetes. You can use a values.yaml file to tell Helm which Kubernetes version the virtual cluster should run.

    Let's test it. Edit main.tf to look like the following:

    # Store Terraform state as a secret in the host Kubernetes cluster
    terraform {
      backend "kubernetes" {
        secret_suffix = "common"
        config_path   = "~/.kube/config"
      }
    }

    # Configure the Helm provider to talk to the host cluster
    provider "helm" {
      kubernetes {
        config_path = "~/.kube/config"
      }
    }

    # Deploy a virtual cluster that uses custom Helm values
    resource "helm_release" "vcluster-k3s1281" {
      namespace        = "vcluster-k3s1281"
      name             = "vcluster-k3s1281"
      repository       = "https://charts.loft.sh"
      chart            = "vcluster"
      create_namespace = true
      values           = [file("${path.module}/vcluster-k3s1281.yaml")]
    }
    

    Next, create a file called vcluster-k3s1281.yaml in the same directory and paste the following inside:

    vcluster:
      image: rancher/k3s:v1.28.1-k3s1 # Image to use for the virtual cluster
    

    If you run terraform apply now, it will create a new virtual cluster that will use the settings specified in vcluster-k3s1281.yaml. Once you connect to the virtual cluster, you can verify it:

    $ kubectl version
    Client Version: v1.28.2
    Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
    Server Version: v1.28.1+k3s1
    

    This is a simple example, but you can adjust many other parameters. For instance, you can dramatically increase security in multitenant virtual cluster environments by enabling isolated mode or ensure that no virtual cluster consumes too many resources in the host cluster by changing the default ResourceQuota values. To look at all the values that you can adjust, download the vCluster Helm chart using helm pull and check values.yaml.
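
    As a rough illustration, enabling isolated mode and tightening the quota could look like the following values file. The nested keys under isolation are assumptions based on the 0.15.x chart, so confirm them against the values.yaml of the chart you pulled:

    # Hedged sketch: harden a multitenant virtual cluster.
    # The keys under "isolation" are assumptions based on the 0.15.x chart;
    # confirm them against the chart's values.yaml before applying.
    isolation:
      enabled: true          # Pod Security Standards, NetworkPolicy, quotas
      resourceQuota:
        enabled: true
        quota:
          limits.cpu: "8"
          limits.memory: 16Gi
          requests.cpu: "4"
          requests.memory: 8Gi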

    Now that you have a virtual cluster running the Kubernetes version you need, you just have to deploy the app you want to test. The best way to do this is to implement a CI/CD pipeline within the virtual cluster.

    How to Use vCluster to Create CI/CD Pipelines

    Implementing a CI/CD pipeline within the virtual cluster allows you to create an automated workflow with steps that will be executed every time you push a change to the app Git repository.

    vCluster enables organizations to embrace GitOps and DevOps best practices without any extra effort from your team. In fact, you've already seen how vCluster fits seamlessly into your Terraform IaC workflow. This allows organizations to manage their entire infrastructure declaratively and even create CI/CD pipelines to deploy virtual clusters on demand.
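
    To make this concrete, here's a rough sketch of a GitHub Actions workflow that creates a throwaway virtual cluster for each pipeline run and deletes it afterward. It assumes the runner can reach a host cluster through a kubeconfig stored as a repository secret; the workflow name, the HOST_KUBECONFIG secret, and the test command are placeholders:

    # Hedged sketch: one ephemeral virtual cluster per CI run.
    # Assumes a host-cluster kubeconfig is stored in the HOST_KUBECONFIG secret.
    name: ephemeral-vcluster
    on: [push]
    jobs:
      test:
        runs-on: ubuntu-latest
        env:
          KUBECONFIG: ${{ runner.temp }}/kubeconfig
        steps:
          - uses: actions/checkout@v4
          - name: Configure host cluster access
            run: echo "${{ secrets.HOST_KUBECONFIG }}" > "$KUBECONFIG"
          - name: Install the vcluster CLI
            run: |
              curl -L -o vcluster "https://github.com/loft-sh/vcluster/releases/latest/download/vcluster-linux-amd64"
              chmod +x vcluster && sudo mv vcluster /usr/local/bin
          - name: Create an ephemeral virtual cluster
            run: vcluster create ci-${{ github.run_id }} --connect=false
          - name: Run checks inside the virtual cluster
            run: vcluster connect ci-${{ github.run_id }} -- kubectl get ns   # replace with your test suite
          - name: Tear down
            if: always()
            run: vcluster delete ci-${{ github.run_id }}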

    Furthermore, you could use vCluster to create customized ephemeral CI/CD workflows for your developers and address use cases like the one described in the previous section. Argo CD is a great tool for this multitenant scenario.

    Start by deploying Argo CD within the last virtual cluster you created (vcluster-k3s1281):

    kubectl create namespace argocd
    kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
    

    Once all components are installed, use the argocd CLI to retrieve the initial admin password generated during the Argo CD installation. Copy the password, as you'll need it to access the Argo CD UI:

    argocd admin initial-password -n argocd
    

    Next, port-forward argocd-server to access the web UI:

    kubectl port-forward svc/argocd-server -n argocd 8080:443
    

    Argo CD UI login

    Open https://localhost:8080 in your browser (accept the self-signed certificate warning if prompted). Enter admin as the username, and use the password you just obtained. This should take you to the main dashboard:

    Argo CD dashboard

    Click the NEW APP button located at the top left.

    Follow the instructions to create apps via UI to deploy and synchronize the guestbook demo app on your virtual cluster. Basically, you'll need to enter guestbook as the application name, use the default project, leave the sync policy as "Manual," set the repository URL to https://github.com/argoproj/argocd-example-apps.git, set the path to guestbook, and choose https://kubernetes.default.svc as the destination cluster with the default namespace.
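
    If you'd rather keep this step declarative, you can create the same application by applying an Argo CD Application manifest instead of using the UI. A minimal version for the guestbook example looks like this:

    # Argo CD Application for the guestbook example app (manual sync, as in the tutorial)
    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: guestbook
      namespace: argocd
    spec:
      project: default
      source:
        repoURL: https://github.com/argoproj/argocd-example-apps.git
        targetRevision: HEAD
        path: guestbook
      destination:
        server: https://kubernetes.default.svc
        namespace: default

    Save it as guestbook-app.yaml (an illustrative filename) and apply it with kubectl apply -f guestbook-app.yaml while connected to the virtual cluster.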

    Once deployed, you can sync the app by pressing the SYNC button. The result should look similar to the following:

    Guestbook app

    That's all! You just implemented CI/CD within a virtual cluster running the exact Kubernetes version you chose.

    Argo CD dashboard

    From here on, the possibilities are endless. For example, your organization could build an end-to-end CI/CD pipeline that launches virtual clusters with Argo CD or any other dev or testing environment you require, all programmatically.

    Conclusion

    This tutorial demonstrated the flexibility and convenience that vCluster offers when creating ephemeral CI/CD environments in Kubernetes. Loft can significantly amplify these capabilities, empowering GitOps teams and providing enterprise features to elevate the virtual cluster experience. Loft provides high availability (HA), a top-tier UI, CLI, API, and air-gapped and VPC environment support. Furthermore, Loft's multicluster, multicloud, and multiregion support and cross-cluster sync capabilities provide a robust platform for your virtual cluster environments. Loft isn't just a tool but an essential partner that takes your virtual cluster experience to new heights of efficiency and productivity. Embrace Loft and experience a seamless, comprehensive approach to managing your virtual cluster environments.

    If you want to know more about vCluster and Loft, you can check out how to create virtual Kubernetes clusters with vCluster and Helm or explore the benefits of virtual clusters for CI/CD testing.
