Loft supports syncing secrets from a HashiCorp Vault instance into both project secrets and global secrets.
This feature lets you use secrets from Vault in your projects, virtual clusters, and the global Loft tenant without copying them manually or relying on third-party tools.
This guide walks you through setting up global and project-based Vault integrations with Loft, syncing Vault secrets to global and project secrets, and storing the Vault token in a Kubernetes secret so that Loft can access it.
The namespace of this secret must be either the namespace of the Loft installation, if the Vault token will be used for global secrets syncing, or the namespace of the project in which you want to use the Vault token.
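As a minimal sketch, such a secret could look like the following; the secret name `vault-token`, the key `token`, and the namespace shown are placeholders rather than values prescribed by Loft:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: vault-token        # placeholder name
  namespace: loft          # Loft install namespace for global secrets syncing,
                           # or the namespace of your project for a project integration
type: Opaque
stringData:
  token: hvs.XXXXXXXXXXXX  # your Vault token
```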
Name must be unique within a namespace. It is required when creating resources, although
some resources may allow a client to request the generation of an appropriate name
automatically. Name is primarily intended for creation idempotence and configuration
definition.
Cannot be updated.
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names#names
GenerateName is an optional prefix, used by the server, to generate a unique
name ONLY IF the Name field has not been provided.
If this field is used, the name returned to the client will be different
than the name passed. This value will also be combined with a unique suffix.
The provided value has the same validation rules as the Name field,
and may be truncated by the length of the suffix required to make the value
unique on the server.
If this field is specified and the generated name exists, the server will return a 409.
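For illustration only (the prefix is arbitrary), server-side name generation is requested like this; the server might then persist the object under a name such as vault-token-x7k2q:

```yaml
metadata:
  generateName: vault-token-   # server appends a unique suffix, e.g. vault-token-x7k2q
```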
Namespace defines the space within which each name must be unique. An empty namespace is
equivalent to the "default" namespace, but "default" is the canonical representation.
Not all objects are required to be scoped to a namespace - the value of this field for
those objects will be empty.
UID is the unique in time and space value for this object. It is typically generated by
the server on successful creation of a resource and is not allowed to change on PUT
operations.
ResourceVersion is an opaque value that represents the internal version of this object and can
be used by clients to determine when objects have changed. May be used for optimistic
concurrency, change detection, and the watch operation on a resource or set of resources.
Clients must treat these values as opaque and pass them unmodified back to the server.
They may only be valid for a particular resource or set of resources.
CreationTimestamp is a timestamp representing the server time when this object was
created. It is not guaranteed to be set in happens-before order across separate operations.
Clients may not set this value. It is represented in RFC3339 form and is in UTC.
DeletionTimestamp is the RFC 3339 date and time at which this resource will be deleted. This
field is set by the server when a graceful deletion is requested by the user, and is not
directly settable by a client. The resource is expected to be deleted (no longer visible
from resource lists, and not reachable by name) after the time in this field, once the
finalizers list is empty. As long as the finalizers list contains items, deletion is blocked.
Once the deletionTimestamp is set, this value may not be unset or be set further into the
future, although it may be shortened or the resource may be deleted prior to this time.
For example, a user may request that a pod is deleted in 30 seconds. The Kubelet will react
by sending a graceful termination signal to the containers in the pod. After that 30 seconds,
the Kubelet will send a hard termination signal (SIGKILL) to the container and after cleanup,
remove the pod from the API. In the presence of network partitions, this object may still
exist after this timestamp, until an administrator or automated process can determine the
resource is fully terminated.
If not set, graceful deletion of the object has not been requested.
DeletionGracePeriodSeconds is the number of seconds allowed for this object to gracefully terminate
before it will be removed from the system. Only set when deletionTimestamp is also set.
May only be shortened.
Read-only.
OwnerReferences is a list of objects this object depends on. If ALL objects in the list have
been deleted, this object will be garbage collected. If this object is managed by a controller,
then an entry in this list will point to this controller, with the controller field set to true.
There cannot be more than one managing controller.
BlockOwnerDeletion: if true, AND if the owner has the "foregroundDeletion" finalizer, then
the owner cannot be deleted from the key-value store until this
reference is removed.
See https://kubernetes.io/docs/concepts/architecture/garbage-collection/#foreground-deletion
for how the garbage collector interacts with this field and enforces the foreground deletion.
Defaults to false.
To set this field, a user needs "delete" permission on the owner;
otherwise 422 (Unprocessable Entity) will be returned.
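As a sketch (every name and UID below is a placeholder), an owner reference on a dependent object looks roughly like this:

```yaml
metadata:
  ownerReferences:
    - apiVersion: apps/v1
      kind: Deployment                            # placeholder owner kind
      name: my-owner                              # placeholder owner name
      uid: d9607e19-f88f-11e6-a518-42010a800195   # placeholder UID of the owner
      controller: true                            # at most one entry may be the managing controller
      blockOwnerDeletion: true                    # blocks owner deletion while its foregroundDeletion finalizer is set
```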
Finalizers must be empty before the object is deleted from the registry. Each entry
is an identifier for the responsible component that will remove the entry
from the list. If the deletionTimestamp of the object is non-nil, entries
in this list can only be removed.
Finalizers may be processed and removed in any order. Order is NOT enforced
because it introduces significant risk of stuck finalizers.
Finalizers is a shared field; any actor with permission can reorder it.
If the finalizer list is processed in order, then this can lead to a situation
in which the component responsible for the first finalizer in the list is
waiting for a signal (field value, external system, or other) produced by a
component responsible for a finalizer later in the list, resulting in a deadlock.
Without enforced ordering finalizers are free to order amongst themselves and
are not vulnerable to ordering changes in the list.
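For illustration (the finalizer name is a placeholder, not one Loft defines), finalizers are plain strings in metadata:

```yaml
metadata:
  finalizers:
    - example.com/cleanup-external-resources   # removed by its controller once cleanup has finished
```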
ManagedFields maps workflow-id and version to the set of fields
that are managed by that workflow. This is mostly for internal
housekeeping, and users typically shouldn't need to set or
understand this field. A workflow can be the user's name, a
controller's name, or the name of a specific apply path like
"ci-cd". The set of fields is always in the version that the
workflow used when modifying the object.
APIVersion defines the version of this resource that this field set
applies to. The format is "group/version" just like the top-level
APIVersion field. It is necessary to track the version of a field
set because it cannot be automatically converted.
Time is the timestamp of when the ManagedFields entry was added. The
timestamp will also be updated if a field is added, the manager
changes any of the owned fields value or removes a field. The
timestamp does not update when a field is removed from the entry
because another manager took it over.
Subresource is the name of the subresource used to update that object, or
empty string if the object was updated through the main resource. The
value of this field is used to distinguish between managers, even if they
share the same name. For example, a status update will be distinct from a
regular update using the same manager name.
Note that the APIVersion field is not related to the Subresource field and
it always corresponds to the version of the main resource.
If true, all users within the project will be allowed to create a new instance without a template.
By default, only admins are allowed to create a new instance without a template.
Enabled indicates if the ArgoCD Integration is enabled for the project. This knob only
enables the syncing of virtual clusters; it does not enable SSO integration or project
creation (see the subsequent spec sections).
VirtualClusterInstance defines the name of the virtual cluster instance that ArgoCD is
deployed into. If provided, Cluster will be ignored and Loft will assume that ArgoCD is
running in the specified virtual cluster.
Enabled indicates if the ArgoCD SSO Integration is enabled for this project. Enabling this
will cause Loft to configure SSO authentication via Loft in ArgoCD. If Projects are not
enabled, all users associated with this Project will be assigned either the 'read-only'
(default) role, or the roles set under the AssignedRoles field.
Host defines the ArgoCD host address that will be used for OIDC authentication between Loft
and ArgoCD. If not specified, OIDC integration will be skipped, but vclusters/spaces will
still be synced to ArgoCD.
AssignedRoles is a list of roles to assign to users who authenticate via Loft. By default
this will be the read-only role. If any roles are provided, this will override the default
setting.
Project defines project related values for the ArgoCD Integration. Enabling Project
integration will cause Loft to generate and manage an ArgoCD appProject that corresponds to
the Loft Project.
Enabled indicates if the ArgoCD Project Integration is enabled for this project. Enabling
this will cause Loft to create an appProject in ArgoCD that is associated with the Loft
Project. When Project integration is enabled Loft will override the default assigned role
set in the SSO integration spec.
Roles is a list of roles that should be attached to the ArgoCD project. If roles are provided,
no Loft default roles will be set. If no roles are provided and SSO is enabled, Loft will
configure sane default values.
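Putting these fields together, a Project manifest with the ArgoCD integration enabled might look roughly like the sketch below. It assumes the settings live under spec.argoCD with the field names described above; the project name, virtual cluster name, host, and role are placeholders, so check the configuration reference of your Loft version for the exact paths:

```yaml
apiVersion: management.loft.sh/v1
kind: Project
metadata:
  name: my-project                            # placeholder project name
spec:
  argoCD:
    enabled: true                             # sync virtual clusters of this project to ArgoCD
    virtualClusterInstance: argocd-vcluster   # placeholder; the virtual cluster ArgoCD runs in
    sso:
      enabled: true                           # configure SSO authentication via Loft in ArgoCD
      host: https://argocd.example.com        # placeholder ArgoCD host used for OIDC
      assignedRoles:
        - role:admin                          # overrides the default read-only role
    project:
      enabled: true                           # let Loft manage a matching ArgoCD appProject
```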
Enabled indicates if the Vault Integration is enabled for the project. This knob only
enables the syncing of secrets to or from Vault; it does not set up Kubernetes authentication
methods or Kubernetes secrets engines for vclusters.
SyncInterval defines the interval at which to sync secrets from Vault.
Defaults to 1m.
See https://pkg.go.dev/time#ParseDuration for supported formats.
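As a small sketch of these two knobs, assuming they live under spec.vault of the Project resource (connection details such as the Vault address and the token secret are configured through additional fields not covered in this snippet):

```yaml
spec:
  vault:
    enabled: true        # enable secret syncing for this project
    syncInterval: 30s    # any Go duration format is accepted; defaults to 1m
```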
RoleMapping indicates an optional role mapping from a Rancher project role to a Rancher cluster role.
Map to an empty role to exclude users and groups with that role from being synced.
RoleMapping indicates an optional role mapping from a Rancher role to a Loft role.
Map to an empty role to exclude users and groups with that role from being synced.
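Purely as an assumption about the shape of this field (a map from source role to target role; the role names and the exact location within the Rancher integration spec are not taken from this guide), such a mapping could look like:

```yaml
roleMapping:
  project-owner: admin      # map this Rancher role to a target role
  project-member: ""        # empty target role: exclude these users and groups from syncing
```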
User is the quota status for each user / team. An example status
could look like this:

  status:
    quotas:
      user:
        limit:
          pods: "10"
          spaces: "5"
        users:
          admin:
            used:
              spaces: "3" # <- calculated in our apiserver
              pods: "8"   # <- the sum calculated from clusters
        clusters:
          cluster-1: # <- populated by agent from cluster-1
            users:
              admin:
                pods: "3"
          cluster-2:
            users:
              admin:
                pods: "5"