GitOps And Loft

If you are already "GitOps-ing" all of the applications deployed into your Kubernetes cluster(s), you may want to do the same with Loft! This short guide gives you a quick rundown of the basics of deploying Loft in a GitOps style. There is nothing inherently "special" or different about Loft compared to other applications, so general GitOps practices should apply nicely to it. Note that this guide refers to ArgoCD throughout, but any GitOps tooling can be used to deploy Loft, and the general approach remains the same!

Basic Deployment

Before continuing, you should take a look at the installation with Helm section of the docs. As you may guess from the title, that section covers the basics of installing Loft with Helm! In general, we recommend using Helm for GitOps deployments as well, so it is worth taking a few minutes to review the Helm installation docs.

The most basic GitOps Loft deployment is to create a Helm-based Application and deploy the Loft chart with any desired values. Below is a simple ArgoCD example of just that:

---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: loft
  namespace: argocd
spec:
  destination:
    name: ""
    namespace: loft
    server: "https://kubernetes.default.svc"
  source:
    path: ""
    repoURL: "https://charts.loft.sh"
    targetRevision: 3.0.0
    chart: loft
    helm:
      parameters:
        # admin user
        - name: admin.create
          value: "true"
        - name: admin.username
          value: admin
        - name: admin.password
          value: password

        # ingress
        - name: ingress.enabled
          value: "true"
        - name: ingress.host
          value: "loft.example"
        - name: ingress.name
          value: "loft-ingress"

        # audit
        - name: audit.enableSideCar
          value: "true"
        - name: config.audit.level
          value: "1"

        # config
        - name: config.loftHost
          value: "https://loft.example"

  project: default
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true

The above ArgoCD Application deploys Loft into the local cluster, in the "loft" namespace. In this example, values are passed to set some basic configuration, including an admin user/password, ingress settings, and basic audit settings.

When deploying the Loft chart as above, the Loft deployment itself will install the Loft agent into the local cluster for you. If you would like to manage the agent deployment yourself, see the Agents section below.

At its most basic, that's it!

ArgoCD and Helm Releases

If you are administering your Loft instance with ArgoCD, you will be unable to update the Loft config in the Loft UI. This is because ArgoCD renders charts with helm template and applies the resulting manifests directly, so the Helm release secret that the UI would update never exists in the cluster. This should not be an issue: if you are managing Loft in a GitOps fashion, you are (hopefully) avoiding manual updates like this anyway, but it is important to note!
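
Since UI edits will not persist, keep the Loft config itself in git alongside the chart values. Below is a minimal values.yaml sketch, assuming the chart's config value is rendered into Loft's configuration (consistent with the config.loftHost and config.audit.level parameters used in the Application above):

config:
  # equivalent to the config.* Helm parameters set in the basic example
  loftHost: "https://loft.example"
  audit:
    level: 1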

Connected Clusters

One of the great benefits of Loft is that you can easily manage resources located in many physical clusters by adding them to Loft and using Loft as your central point of management. When you add a "connected" cluster to Loft, Loft creates a Cluster resource and installs a Loft Agent in the cluster to handle local reconciliation tasks. If you are managing Loft in a GitOps style, you may also wish to manage these connected clusters in a similar fashion (rather than letting Loft install and manage the Agent). This section outlines the basics of how to accomplish this.

Cluster Resources

If you would like to manage Loft and the Loft agents via your GitOps tooling, you likely also want to manage the connected cluster configurations that live inside Loft. These configuration elements tell Loft which remote clusters are connected and how to reach them, both to validate the agent installation and to proxy Kubernetes commands from the central Loft instance to the remote cluster(s).

Each cluster requires two resources: a Cluster object that defines the cluster name and its owner inside of Loft, and an associated Secret that contains the configuration Loft needs to connect to the cluster.

A common strategy for bundling cluster data with your primary Loft Application (as in the ArgoCD Application object) is to write a simple "parent" Helm chart that includes Loft as a dependency. This parent chart can contain anything you'd like, of course, but in this case it is used to include the Cluster objects and their associated Secret objects.

Cluster Secrets

The secret associated with a Cluster object necessarily includes authentication data to access the remote cluster -- take care to ensure that this information is handled appropriately!

A simple Chart.yaml that includes the base Loft chart as a dependency may look similar to this:

apiVersion: v2
name: loft-manager
description: A parent Helm chart for Loft
type: application
version: 1.0.0
dependencies:
  - name: loft
    version: 3.0.0
    repository: https://charts.loft.sh

Values for the dependent Loft chart are passed by nesting them under the dependency name, in this case "loft". For example, if you wanted to set the replicaCount value on the Loft chart, you could do so as follows in a values.yaml file:

loft:
  replicaCount: 3

Who is managing the Loft agent?!

Remember, if you want to manage the Loft agent via your GitOps workflow, make sure you set the DISABLE_AGENT environment variable to "true" for your Loft deployment!
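
For the parent-chart approach above, a hedged values.yaml sketch of that setting, assuming the Loft chart exposes container environment variables under an env value (the same convention the env.SOMEVAR parameter uses in the agent example below):

loft:
  env:
    # tell Loft not to install and manage its own agent in connected clusters
    DISABLE_AGENT: "true"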

The "parent" Helm chart can now include any additional resources that you may want to deploy with your Loft instance -- in this case, the chart should include both the Cluster and Secret resources for any connected clusters. You can accomplish this by having a simple template that iterates over an array of clusters that users can provide via values, something like the following:

{{ range .Values.clusters }}
---
apiVersion: management.loft.sh/v1
kind: Cluster
metadata:
  name: {{ .name }}
  annotations:
    argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true
spec:
  access:
    - subresources:
        - '*'
      users:
        - admin
      verbs:
        - '*'
  config:
    secretName: loft-cluster-config-{{ .name }}
    secretNamespace: loft
  displayName: {{ .name }}
  owner:
    user: admin
{{ end }}

{{ range .Values.clusters }}
---
apiVersion: v1
kind: Secret
metadata:
  name: loft-cluster-config-{{ .name }}
  namespace: loft
type: Opaque
data:
  config: {{ .config | b64enc }}
{{ end }}

With the above templates, users provide an array of maps, each containing a "name" and a "config" field. For example:

clusters:
  - name: my-connected-cluster
    config: |
      apiVersion: v1
      clusters:
        - cluster:
            certificate-authority-data: YOUR-CA-DATA-HERE
            server: https://1.2.3.4:6443
          name: my-connected-cluster
      contexts:
        - context:
            cluster: my-connected-cluster
            namespace: kube-system
            user: my-connected-cluster-user
          name: my-connected-cluster-context
      current-context: my-connected-cluster-context
      kind: Config
      preferences: {}
      users:
        - name: my-connected-cluster-user
          user:
            token: YOUR-TOKEN-HERE

Missing CRDs?!

You may have noticed a strange annotation on the Cluster resource above -- "argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true". This annotation tells ArgoCD to skip the dry run/check for the resource. It is needed on initial deployments because Loft itself deploys the Cluster CRD into the cluster; without the annotation, the Application would fail to sync because the Cluster CRD does not yet exist when ArgoCD performs its dry run!

Agents

If you have disabled agent installation on your Loft deployment, as outlined in the Helm installation guide, you will need to ensure that you install the Loft agent into each connected cluster. Without it, Loft will not be able to do anything in the connected cluster!

The most obvious difference when managing the agents, as compared to the Loft manager, is that the agents of course need to be installed in the connected ("remote", as in non-local to the Loft manager) cluster(s). Assuming you have added the relevant cluster(s) as available clusters in your ArgoCD configuration, you can simply create another ArgoCD Application to manage each agent. The following is a basic example of this:

---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: loft-agent-my-other-cluster
  namespace: argocd
spec:
  destination:
    name: ""
    namespace: loft
    server: "https://my-other-cluster:6443"
  source:
    path: ""
    repoURL: "https://charts.loft.sh"
    targetRevision: 3.0.0
    chart: loft-agent
    helm:
      parameters:
        - name: env.SOMEVAR
          value: myvalue

  project: default
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true

Once again, note that if you want to manage the agent deployments via ArgoCD or your GitOps tooling of choice, you will want to ensure that the DISABLE_AGENT environment variable is set to "true" for your Loft deployment!
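
If you are deploying Loft with the plain ArgoCD Application from the basic deployment section (rather than a parent chart), the same setting can be expressed as a Helm parameter. A sketch, again assuming the chart maps env.* values to container environment variables:

helm:
  parameters:
    # disable the built-in agent rollout so your GitOps tooling owns the agents
    - name: env.DISABLE_AGENT
      value: "true"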

Everything Else

Love managing Loft itself with your GitOps tooling? Want to manage the resources in Loft with that same tooling? Great news! You can do just that! One of the core tenets of Loft is that all Loft resources are just "normal" Kubernetes resources. This means you can manage any Loft object the same way you'd manage any other Kubernetes resource in your GitOps workflow.

To generate the appropriate manifests for each Loft resource type, you may want to check out the API documentation, where you can find example manifests and argument references for nearly all Loft resource types. Alternatively, you can use the Loft UI to build out your required objects and simply snag the YAML output from the build pane to use as your manifest content.

Here is a small example of creating a Team, and a Project that the team is a member of. You could manage teams and projects like this in your GitOps platform, then allow project users to create resources in Loft as they wish. This "GitOps-ifies" the core bits while still allowing teams to control their own environments in Loft manually if they prefer. The following manifests could be added to a custom Helm chart, or simply used as plain manifests in a repo connected to ArgoCD. This is a contrived example, of course, but the main point is that all Loft resources are just "normal" Kubernetes (custom) resources that can be managed with your GitOps tooling, or any other Kubernetes-centric tooling!

---
apiVersion: management.loft.sh/v1
kind: Team
metadata:
  name: acme-team
spec:
  displayName: acme-team
  owner:
    user: admin
  access:
    - verbs:
        - "*"
      subresources:
        - "*"
      users:
        - admin
    - name: loft-access
      verbs:
        - get
        - bind
      subresources:
        - clusters
      teams:
        - acme-team

---
apiVersion: management.loft.sh/v1
kind: Project
metadata:
  name: acme-team-project
spec:
  displayName: acme-team-project
  owner:
    user: admin
  quotas: {}
  allowedClusters:
    - name: "*"
  allowedTemplates:
    - kind: VirtualClusterTemplate
      group: storage.loft.sh
      name: "*"
    - kind: SpaceTemplate
      group: storage.loft.sh
      name: "*"
  members:
    - kind: Team
      group: storage.loft.sh
      name: acme-team
      clusterRole: loft-management-project-admin
  access:
    - name: loft-admin-access
      verbs:
        - get
        - update
        - patch
        - delete
      subresources:
        - "*"
      users:
        - admin
      teams:
        - acme-team
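
If you opt for plain manifests in a git repository rather than a Helm chart, a minimal ArgoCD Application sketch to sync them could look like the following; the repoURL and path here are hypothetical placeholders for wherever you store the Team and Project manifests above:

---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: loft-resources
  namespace: argocd
spec:
  destination:
    name: ""
    namespace: loft
    server: "https://kubernetes.default.svc"
  source:
    # hypothetical repo and path containing the manifests above
    repoURL: "https://github.com/example/loft-resources"
    targetRevision: main
    path: manifests
  project: default
  syncPolicy:
    automated:
      prune: true
      selfHeal: true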