Connect a Cluster
A new cluster can be connected to Loft through the UI, the Loft CLI, or Helm.

To connect a cluster through the UI:
- Go to the Clusters view using the menu on the left.
- Click on the button at the very right.
- In the drawer that appears from the right, give your cluster a name in the Display Name field. Optionally, enter a name for the underlying Kubernetes resource in the Kubernetes Name (ID) field, or leave it empty to have one auto-generated for you. Then click on the button.
- In this section, copy and execute the displayed Loft CLI command or Helm command.
- Wait until Loft installs the agent in your connected cluster.
- Once the installation succeeds, you can create a space or virtual cluster in the newly connected cluster using the displayed Loft CLI command (see the sketch after this list).
- Click on the button to go to the Clusters view.
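The exact command is shown in the UI and varies per installation; as a rough sketch only, creating a space or virtual cluster in the connected cluster with the Loft CLI might look like the following (my-space, my-vcluster, and my-cluster are placeholder names, and flags can differ between CLI versions):

loft create space my-space --cluster my-cluster
loft create vcluster my-vcluster --cluster my-cluster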
When connecting a cluster with the Loft CLI, you need to provide a name for the cluster. This name is used for the Cluster resource that gets created in Kubernetes. Optionally, you can use the --display-name flag to specify the name shown for this cluster in the Clusters view in Loft.
loft connect cluster cluster-name --display-name [display-name]
The above command automatically installs the Loft Agent in the cluster of your current kube context. If you want to connect a cluster that is not your current context, use the --context flag to specify the context of the cluster you want to connect.
loft connect cluster cluster-name --display-name [display-name] --context [context]
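To confirm that the cluster shows up afterwards, you can list the clusters known to Loft; this assumes your CLI version provides the list clusters subcommand:

loft list clusters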
If you prefer to use Helm, you need to make sure that you have a valid Access Key.
You can either reuse the CLI Access Key or create a new one by following the Access Key guide.
To reuse the CLI Access Key:
export ACCESS_KEY=$(loft token | jq -r .status.token)
Alternatively, create a new Access Key by following the Access Key guide and export it:
export ACCESS_KEY="[Newly Created Access Key]"
After that, you need to provide a name for the cluster.
export CLUSTER_NAME="cluster-name"
Next, you need to create the Cluster custom resource in the platform by applying the following manifest:
cat <<EOF | kubectl apply -f -
apiVersion: management.loft.sh/v1
kind: Cluster
metadata:
  name: $CLUSTER_NAME
spec:
  displayName: $CLUSTER_NAME
  networkPeer: true
EOF
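Optionally, assuming your current kube context can reach the management API (the same context used for the kubectl apply above), you can verify that the resource was created:

kubectl get clusters.management.loft.sh "$CLUSTER_NAME"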
Then, you need to set the Platform Host using the following code:
export PLATFORM_HOST="YOUR_LOFT_HOST"
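Note that the commands below prepend https:// themselves, so the host should be set without a scheme; for example (loft.example.com is a placeholder):

export PLATFORM_HOST="loft.example.com"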
To get the Platform version, you should run the following command:
export PLATFORM_VERSION=$(curl -s "https://$PLATFORM_HOST/version" | jq -r '.version | .[1:]')
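The jq filter .version | .[1:] drops the first character of the reported version, presumably a leading v (e.g. v3.4.0 becomes 3.4.0), so that the value can be passed to Helm's --version flag. You can print it to confirm:

echo "$PLATFORM_VERSION"   # expected output is a plain version such as 3.4.0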
To get the Cluster Connect token, you should run the following command:
export CLUSTER_ACCESS_KEY=$(curl -s "https://$PLATFORM_HOST/kubernetes/management/apis/management.loft.sh/v1/clusters/$CLUSTER_NAME/accesskey" -H "Authorization: bearer $ACCESS_KEY")
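The response is a JSON object; its accessKey, caCert, and insecure fields are consumed by the Helm command below. You can inspect what was returned with:

echo "$CLUSTER_ACCESS_KEY" | jq .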
Finally, you can connect the cluster using the following command:
helm upgrade loft loft --install \
  --repo https://charts.loft.sh/ \
  --version $PLATFORM_VERSION \
  --namespace loft \
  --create-namespace \
  --set agentOnly=true \
  --set url=https://$PLATFORM_HOST \
  --set token=$(echo $CLUSTER_ACCESS_KEY | jq -r .accessKey) \
  --set additionalCA=$(echo $CLUSTER_ACCESS_KEY | jq -r .caCert) \
  --set insecure=$(echo $CLUSTER_ACCESS_KEY | jq -r .insecure)
This command automatically installs the Platform Agent in the cluster of your current kube context. If you want to connect a cluster that is not your current context, append the --kube-context flag to specify the context of the cluster you want to connect.
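After the Helm release is installed, a quick way to confirm that the agent pods are starting in the connected cluster is to check the loft namespace there ([context] is the kube context of the connected cluster):

kubectl --context [context] get pods --namespace loft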
Starting with version 3.4, the connection between the Loft agent and the control plane can be egress-only, initiated from the agent to the control plane.
On an upgrade from a previous version to version 3.4, existing connected clusters managed by the control plane will get automatically upgraded and transformed into egress-only agents. Following the agent upgrade, firewall/security group rules allowing direct access to the Kubernetes API server can be removed.
When connecting a new cluster, the user creates a new cluster resource and obtains a pre-shared key (PSK) that is then used to bootstrap the agent. The agent uses this PSK to reach the control plane, authenticate itself, and establish a secure, user-space, WireGuard-based tunnel.
If the agent cannot establish a direct WireGuard-based connection, it will use the control plane as a Designated Encrypted Relay for Packets (DERP). The control plane relay plays a role comparable to TURN servers in the ICE standard, but uses HTTPS streams (or WebSockets) and WireGuard keys instead.
If you encounter issues while configuring agent values or deploying the agent manually, take a look at the Cluster troubleshooting guide.