GitOps + Kubernetes: Managing Kubernetes Secrets

Lukas Gentele
Hrittik Roy
13 min read

#GitOps + Kubernetes Series

GitOps is a modern approach to software delivery and infrastructure management. It emphasizes version control and collaboration with Git as the source of truth to manage code, configurations, and infrastructure as code (IaC) from a repository. GitOps enables developers to deploy infrastructure and applications in a consistent, automated, and auditable manner.

Teams can use GitOps to automate the entire software delivery pipeline, lowering the risk of human error and ensuring consistency across environments. This is particularly helpful in use cases such as continuous delivery, continuous deployment, and continuous compliance. GitOps also lets teams use Git's history to effortlessly roll back to previous versions or recover from failures.

In this article, you’ll learn how to use Sealed Secrets, a Kubernetes custom resource for managing secrets when using GitOps with Kubernetes, through a demo application deployed with Argo CD, one of the most popular GitOps tools in the CNCF Cloud Native Landscape.

#Why Sealed Secrets?

GitOps relies heavily on Git for storing application logic, infrastructure manifests, and application manifests. However, securely managing secrets can be a challenge, as storing them in Git repositories can pose a security risk if they are not encrypted and managed properly.

You can use third-party vaults, manage secrets manually outside of Git, or use the Sealed Secrets tool from Bitnami, which encrypts your secrets so they can be stored safely in your Git repositories, making it easy to follow GitOps principles for all of your objects.

The kubeseal utility, which is bundled with Sealed Secrets, employs public-key cryptography to encrypt your secrets. This ensures that only authorized parties, such as the sealed secret controller installed in your cluster, can decrypt the secrets. The best part about the utility is that it manages the RSA certificate internally, making the process simpler for you.
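For example, once the CLI and the controller are installed (both are covered later in this tutorial), you can fetch the controller's public certificate and use it to seal secrets even on a machine without direct cluster access. A minimal sketch, using the same file names that appear later in this guide:

kubeseal --fetch-cert --controller-name=sealed-secrets-controller --controller-namespace=kube-system > pub-cert.pem
kubeseal --cert pub-cert.pem --format yaml -f mysecret.yaml > mysealedsecret.yaml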

#Managing Secrets with GitOps

Now that you understand how kubeseal and Sealed Secrets work, it's time to get some hands-on experience. This section describes how to create a basic Python application that reads a secret and how to use Sealed Secrets to encrypt that secret, with the deployment handled by the Argo CD GitOps tool.

Sealed Secrets architecture

If you need to update your secrets, you commit updated sealed secrets to Git, which triggers new deployments that pick up the changes.

#Prerequisites

You’ll need a Kubernetes cluster and a code editor to complete the tutorial. For the Kubernetes cluster, this example uses Azure Kubernetes Service (AKS) to deploy the application, but you have the option to run any cluster you prefer. It is important to ensure that your cluster has at least two nodes, as tools like Argo CD can require significant compute. As for the code editor, any editor will do, but the tutorial uses VS Code.

You’ll also need all the following tools to build the application, so make sure you have them installed:

  • kubectl: A command line tool for managing Kubernetes clusters and resources.
  • Helm: A package manager for Kubernetes that will help you install the kubeseal controller.
  • Docker: The go-to platform for building, shipping, and running applications in containers. Don’t forget to create a Docker Hub account to store your container images.
  • Git: The most popular version control system for tracking changes in source code.
  • GitHub: One of the most popular web-based platforms for hosting and managing Git repositories, with many additional benefits for your GitOps engine to monitor. Create an account if you don’t already have one.
  • Python (>=3.7): A popular high-level programming language used to build the sample application.

#Creating Your Python Application

To understand how secrets work, you first need to create an application that can use the secrets and display them on an endpoint. This tutorial focuses on using Flask, a popular Python web framework.

You can find all the code for the tutorial in this GitHub repository if you want to clone the repository and follow along.

#Importing Packages

To get started, you’ll need to install Flask. You can do this by running the following in a terminal or command prompt:

pip install flask

Once Flask is installed, you can import it into your main.py file, along with the built-in os module, which will be used for accessing environment variables:

# Import Flask module from flask package
from flask import Flask

# Import os module for accessing environment variables
import os

#Creating Your Flask Logic

Create a new instance of a Flask application using the Flask(__name__) constructor, which will be used to define the application’s routes and other settings:

# Create a new instance of the Flask application
app = Flask(__name__)

The following code defines a route for the default home page of the Flask application using the @app.route() decorator:

# Define a route for the default home page of the application
@app.route("/")
def index():
   # Retrieve the value of the COLOR_ENV environment variable
   env_var = os.environ.get("COLOR_ENV")
   # Return a string that includes the value of the COLOR_ENV environment variable
   return f"Hey {env_var}!"

The "/" argument specifies the URL path for the route, and def index() defines a Python function that will handle requests to this route.

Here, the Python function takes the COLOR_ENV environment variable with the help of the os package and then returns the value of the environment variable to the home page.

#Starting the Flask App

With the route ready to serve your users, add the following block of code so that Flask runs on a specified host and port, which you’ll reference later when creating the container port mapping:

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=5000)

You can run your application with the command below:

python main.py
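Locally, COLOR_ENV comes from your shell rather than from a Kubernetes secret, so export it before starting the app if you want to see a real value. A quick check, assuming the default port 5000 is free:

export COLOR_ENV=blue
python main.py
# In a second terminal:
curl http://localhost:5000/
# Hey blue!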

#Dockerizing Your Application

With your application ready and running, the next step is to containerize it for your container orchestrator to use. You can do this by using a Dockerfile and building your Docker image. First, freeze your dependencies using pip freeze > requirements.txt and put them in the same directory as main.py so that the correct dependencies are installed during the image build.
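Note that the Dockerfile below copies a deploy/ directory into the image, so this tutorial assumes main.py and requirements.txt sit inside a folder named deploy/ at the root of the build context. A sketch of how you might arrange that, assuming you start from the folder containing main.py:

mkdir -p deploy
mv main.py deploy/
pip freeze > deploy/requirements.txt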

With that, the Dockerfile for this tutorial looks like the following:

# Use the official Python 3.8 image from the Alpine Linux repository
FROM python:3.8-alpine

# Copy the contents of the "deploy/" directory from the build context to the "/deploy" directory in the container
COPY deploy/ /deploy

# Set the working directory to "/deploy"
WORKDIR /deploy

# Install the Python dependencies listed in the "requirements.txt" file using pip
RUN pip install -r requirements.txt 

# Expose port 5000 for incoming connections
EXPOSE 5000

# Set the default command to run when the container starts
CMD ["python", "main.py"]

The next step is to build and push your Docker image, which you can do using the following commands if you’re authenticated to your Docker Hub account:

docker build -t <your-username-here>/test-tester:latest . && docker push <your-username-here>/test-tester:latest

If successful, you’ll get the following output:

Docker build and push
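Before relying on the image in the cluster, you can optionally run it locally to confirm it serves the environment variable correctly. A quick sanity check, using the same placeholder image name as above:

docker run --rm -e COLOR_ENV=blue -p 5000:5000 <your-username-here>/test-tester:latest
# In a second terminal:
curl http://localhost:5000/
# Hey blue!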

#Installing kubeseal

With your Kubernetes cluster up and running, you’ll need to install the kubeseal utility for sealed secret creation. This tutorial uses Linux on the AMD64 architecture. For other systems and architectures, you can read the official documentation.

Use the following code to start the installation on Linux:

wget https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.20.1/kubeseal-0.20.1-linux-amd64.tar.gz &&
tar -xvzf kubeseal-0.20.1-linux-amd64.tar.gz kubeseal &&
sudo install -m 755 kubeseal /usr/local/bin/kubeseal

To verify the installation, use kubeseal --version:

hrittik@hrittik:~$ kubeseal --version
kubeseal version: 0.20.1

With your command line tool up and running, the next step involves installing the sealed secrets controller on your cluster, which is required for sealing and unsealing your secrets.

#Installing the Sealed Secrets Controller

Your controller is responsible for decrypting your secrets when they’re applied to your cluster. Installing it is an easy process with tools like Helm:

helm repo add sealed-secrets https://bitnami-labs.github.io/sealed-secrets && 
helm install sealed-secrets -n kube-system --set-string fullnameOverride=sealed-secrets-controller sealed-secrets/sealed-secrets

Run the following command to see the status of your deployment:

kubectl get deployment.apps/sealed-secrets-controller -n kube-system

The output should show 1/1 under READY if things are working as expected:

hrittik@hrittik:~$ kubectl get deployment.apps/sealed-secrets-controller -n kube-system
NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
sealed-secrets-controller   1/1     1            1           42h

#Encrypting Secrets with kubeseal

Now that you’ve set up all the necessary tools for creating your sealed secrets, you can create them in two steps.

#Creating a Secret Object

Create a normal Kubernetes secret with the required secret data. For example, let’s say you need to create a secret named “color” with the value “blue” for the key COLOR_ENV. You can create this secret using the following command:

kubectl create secret generic color --dry-run=client --from-literal COLOR_ENV=blue -o yaml > mysecret.yaml

This command creates a Kubernetes secret with the name color and the data COLOR_ENV=blue. The --dry-run=client flag ensures that the secret is not actually created, but the YAML for the secret is output to the console.

The output is then redirected to a file named mysecret.yaml:

apiVersion: v1
data:
  COLOR_ENV: Ymx1ZQ==
kind: Secret
metadata:
  creationTimestamp: null
  name: color

Note: This file is not secure and shouldn’t be committed to Git as the data can be easily decoded with Base64 decoding.
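To see why, remember that the value is only Base64 encoded, not encrypted; anyone with access to the file can decode it instantly:

echo "Ymx1ZQ==" | base64 -d
# blue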

#Creating a SealedSecret Object

Use the following command to have the kubeseal utility encrypt the secret data from the mysecret.yaml file and create a sealed secret in YAML format:

kubeseal --format yaml --controller-name=sealed-secrets-controller --controller-namespace=kube-system -f mysecret.yaml > mysealedsecret.yaml

In the command above, the --format yaml flag specifies that the output should be in YAML format, and the --controller-name and --controller-namespace flags specify the name and namespace of the Kubernetes controller that will manage the sealed secret.

The output of this command is then redirected to a file named mysealedsecret.yaml, which should look something like this:

apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  creationTimestamp: null
  name: color
  namespace: default
spec:
  encryptedData:
    COLOR_ENV: AgAldUW38Ja8d7dGko2wYwMxJn7cLXmq8WRV0JzAXvkpvgmAud2d4hm9+aWz4iaeGITdM35CYABl7EuwTVgIk9A1FvhPMbIdloixSkKQ8OumBKw0G8SjYUTY4hGtWxqDDO7G3eikZFRjwsx8wayplEVk4YnnJEscBQxeCBd156gBruCf923oiYqwdV9oOQ+CoLGv9cfU1Z6vsb7ISIi+5R+P61U+dYNYA4afKYyEQASYupmf8cZ+kBCN+UNud5S0ucKCgr/S+r23EHF1LKtDp7irV/Tkl6teEq/5PUccHhfx6FZFkhDNvZzEFEiYPSlTw/4hOh1xvvpN4TJpBY0tYH2ckmXd02DW0PWPRIIXm6w7JPpvSAvv2yNybuG/QMi/itJWn3TfGgb4DEuei0DXDWLdg6DMWs31HNVqIXHpN/UJNCyHRfOFLf0paBciqd+egFw2hEZB/bHUPWgvtb092JNcYklTPDYTpTLArvxbuCkSdMbBRzLXTP1GxETrXVp9PAXyhbGfXHRX8FP5rMmWctRiiDaxPerKd7uAspnNgzzoTKJVt2rJyfBi1CD1azWloffWJGK36/GCahx6ih4EChQW7KPZrFkm0vMry1EmpkceQRsqmhXRRKrR5a4a5CwbeaUndG2io7sp9DOrFcPJH4M2VhK/VMI9lV5a0LLDN/ae8uiDJjqjty85SN9SWA5Ocx7OWMa8
  template:
    metadata:
      creationTimestamp: null
      name: color
      namespace: default

This sealed secret can now be committed to your source control system and deployed to your Kubernetes cluster without the risk of compromise.
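If you want to confirm the round trip before committing, you can apply the sealed secret directly and check that the controller unseals it into a regular Secret. This is an optional check; in the GitOps flow below, Argo CD applies it for you:

kubectl apply -f mysealedsecret.yaml
kubectl get secret color -o jsonpath='{.data.COLOR_ENV}' | base64 -d
# blue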

#Creating Manifests

With your sealed secret created, you’ll need to create manifests for your application to run.

A Deployment lets you define and control the desired configuration of your application’s pods. The Deployment manifest will resemble the following:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-demo
  template:
    metadata:
      labels:
        app: app-demo
    spec:
      containers:
        - name: app-demo
          image: hrittik/test-tester:latest
          ports:
            - containerPort: 5000
          env:
          - name: COLOR_ENV
            valueFrom:
              secretKeyRef:
                name: color
                key: COLOR_ENV

Take special note of the env: section, as it references the color secret from the previous steps and injects its value into the container:

          env: # Environment variables to set
          - name: COLOR_ENV
            valueFrom:
              secretKeyRef: # Reference to the "color" secret
                name: color
                key: COLOR_ENV

The service manifest will provide a stable endpoint for the pods in your deployment, and this endpoint enables external users or services to access your pods. In this demo, a LoadBalancer is used in the manifest to expose the application because it provides a simple way to obtain an IP endpoint on cloud infrastructure. However, depending on your specific requirements, you can use any other Kubernetes service type to expose your application.

The code for the service manifest is as follows:

apiVersion: v1
kind: Service
metadata:
  name: app-lb-service
spec:
  selector:
    app: app-demo
  ports:
    - port: 5000
      targetPort: 5000
  type: LoadBalancer

Make sure to keep these three manifests in one folder (this demo uses manifests), as this will be required in the upcoming steps. The manifests should also be stored in your source code management (SCM) tool as has been done in this GitHub repository.
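If you're working from a local clone, a minimal sequence to get the manifests into GitHub might look like the following, assuming the repository is already cloned and its default branch is main:

git add manifests/
git commit -m "Add deployment, service, and sealed secret manifests"
git push origin main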

#Setting Up GitOps with Argo CD

Argo CD is a popular open source tool used for continuous delivery and deployment of applications to Kubernetes clusters. You’ll be using it to deploy the secret reader that you created in the previous stage. Installing Argo CD is a simple process. It involves creating a namespace for it and applying the installation manifest from the Argo CD stable repository:

kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

After installing Argo CD, you can expose its web user interface using a load balancer, which is convenient when your cluster runs on a managed cloud provider. This can be done by patching the service with the type LoadBalancer:

kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'

To access the web UI, you need the IP address, which you can get by running the following command:

kubectl get svc -n argocd argocd-server --output jsonpath='{.status.loadBalancer.ingress[0].ip}'

Visiting the resulting IP address should take you to a login screen:

Argo CD portal

The default username for Argo CD is admin, and to obtain the password, you can run the following command:

kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d

Log in to your portal using the credentials above, and you’re ready to deploy your application. 🚀

#Deploying Your Applications

To deploy the application, access your dashboard and click CREATE APPLICATION or NEW APP to start your deployment process:

Argo create application

Choose any name for the Application Name field and set the Project Name to default:

Basic details

To configure the source, specify the GitHub repository that contains your manifests and set the folder holding them (manifests in this demo) as the path; Argo CD will deploy your application from the manifests found in this folder:

Source repository

Once you have configured these details, you can click CREATE in the top left corner to register the application, as shown below:

Repository added
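If you prefer to skip the UI, the same application can be registered declaratively by applying an Argo CD Application resource with kubectl. A minimal sketch, where the application name secret-reader, the repository URL, and the manifests path are placeholders you should adjust to your own setup:

kubectl apply -n argocd -f - <<EOF
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: secret-reader
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/<your-username-here>/<your-repo>.git
    targetRevision: HEAD
    path: manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: default
EOF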

For the next step, start the sync process to apply the manifests to your cluster. You can do this by clicking SYNC and then SYNCHRONIZE:

Synchronize state

After a short wait, the application will be successfully deployed to your cluster, and you can observe all the objects in the cluster:

Cluster deployment complete

To access the website, navigate to the load balancer, which can be found in the network tab:

Network flow
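If you prefer the command line, you can also retrieve the external IP of the service directly, assuming the app-lb-service name from the manifest above:

kubectl get svc app-lb-service --output jsonpath='{.status.loadBalancer.ingress[0].ip}'
curl http://<external-ip>:5000/
# Hey blue!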

Visit the page, and then you can see your application and read your sealed secret:

Endpoint showing secret

#Updating Your Secrets

When you need to update your secrets, there are two steps involved. First, you need to create a new sealed secret with the updated information you want to use in your application. You can follow the same steps as you did for creating a sealed secret with COLOR_ENV=night or something else of your choice.
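The two commands from earlier can also be combined into a single pipeline. A sketch, assuming the sealed secret file lives in the manifests folder of your repository:

kubectl create secret generic color --dry-run=client --from-literal COLOR_ENV=night -o yaml | \
kubeseal --format yaml --controller-name=sealed-secrets-controller --controller-namespace=kube-system > manifests/mysealedsecret.yaml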

Once you have created the new sealed secret, you need to push the changes to the Git repository where your application’s manifests are stored. This will notify Argo CD that there has been a change in the state of the application.

After you have pushed the changes, you can go to the Argo CD dashboard and check the state of your application. You should see that your cluster is out of sync:

Change state

To update the secrets, you need to trigger redeployment. For that, click SYNC and then SYNCHRONIZE with the FORCE and REPLACE flags:

Secret update

These steps trigger Argo CD to apply the changes to the Kubernetes cluster, updating the secret and redeploying dependent resources like your pods.

Finally, access your web page using the new endpoint to view the updated secrets:

Updated manifests

You synced the changes manually in this example, but Argo CD can also be configured to automate the whole process.
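One way to enable this is to add an automated sync policy to the Application resource, which makes Argo CD sync whenever the repository changes. A sketch, assuming the application is named secret-reader:

kubectl patch application secret-reader -n argocd --type merge \
  -p '{"spec":{"syncPolicy":{"automated":{"prune":true,"selfHeal":true}}}}'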

#Conclusion

Sealed secrets offer an effective solution for securing sensitive information in Kubernetes clusters. They ensure that secrets are protected and only accessible to authorized users and your cluster, while maintaining the advantages of Git and GitOps without any security issues. kubeseal and similar tools have made the process of creating and managing secrets much simpler, allowing companies to easily adopt them.

However, managing multiple Kubernetes clusters can be challenging for enterprises, particularly in terms of cost efficiency and scalability.

Loft addresses this challenge by offering a virtualization solution that enables enterprises to create virtual Kubernetes clusters and gain multitenancy, velocity, and self-service for their developers without compromising governance. With Loft, enterprises can manage their virtual clusters easily, reduce infrastructure costs, and ensure that their developers have the tools they need to be productive and efficient.
