Quickstart Guide
Goal: Install Loft to Kubernetes, evaluate the core features, and decide if Loft is what your team needs
Estimated time: 5 minutes
Requirements:
- any local or remote Kubernetes cluster (minikube, kind, Docker Desktop, GKE, EKS, AKS, Rancher, ...). Allocate at least 4 GB of memory for single-node clusters such as Docker Desktop and minikube.
- `kubectl` (check via `kubectl version`)
- `helm` v3 (check with `helm version`)
- a kube-context with admin access to this Kubernetes cluster (check with `kubectl auth can-i create clusterrole -A`)
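You can run the prerequisite checks above in one go. This sketch uses only the commands already listed in the requirements:

```shell
# Verify kubectl is installed and can reach the cluster
kubectl version

# Verify Helm v3 is installed
helm version

# Verify the current kube-context has cluster-admin-level access
# (prints "yes" if you are allowed to create ClusterRoles)
kubectl auth can-i create clusterrole -A
```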
1. Download Loft CLI
Use one of the following commands to download the Loft CLI binary from GitHub:
- Mac Terminal
- Linux Bash
- Windows Powershell
curl -L -o loft "https://github.com/loft-sh/loft/releases/latest/download/loft-darwin-amd64" && sudo install -c -m 0755 loft /usr/local/bin
curl -L -o loft "https://github.com/loft-sh/loft/releases/latest/download/loft-linux-amd64" && sudo install -c -m 0755 loft /usr/local/bin
md -Force "$Env:APPDATA\loft"; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.SecurityProtocolType]'Tls,Tls11,Tls12';
Invoke-WebRequest -URI "https://github.com/loft-sh/loft/releases/latest/download/loft-windows-amd64.exe" -o $Env:APPDATA\loft\loft.exe;
$env:Path += ";" + $Env:APPDATA + "\loft";
[Environment]::SetEnvironmentVariable("Path", $env:Path, [System.EnvironmentVariableTarget]::User);
Line 4 of this install script adds the install directory `%APPDATA%\loft` to the `$PATH` environment variable. This is only effective for the current PowerShell session, i.e. when opening a new terminal window, `loft` may not be found.
Make sure to add the folder `%APPDATA%\loft` to the `PATH` environment variable after installing Loft CLI via PowerShell. Afterwards, a reboot might be necessary.
To confirm that Loft CLI is successfully installed, reboot and test via:
loft --version
Alternatively, you can simply download the binary for your platform from the GitHub Releases page and add this binary to your PATH.
2. Deploy Loft + Login
Ensure that you are in the correct kube-context and then run this command:
loft start
When Loft CLI asks `How do you want to access loft?`, answer with `via port-forwarding`. We will see later how you can connect a domain and configure SSL.
The output of `loft start` will look like this:
[info] Welcome to the Loft installation.
[info] This installer will guide you through the installation.
? Seems like your cluster is running locally (Docker Desktop, minikube, etc). Is that correct?
> Yes
? How do you want to access Loft?
> via port-forwarding (only you can access Loft)
? Enter an email address for your admin user: admin@example.com
[info] Executing command: helm install loft -n loft ...
[done] √ Successfully deployed Loft to your kubernetes cluster!
[done] √ Loft pod has successfully started
########################## LOGIN ############################
Username: admin
Password: XXXX-XXXX-XXXX-XXXX <-- COPY THIS, YOU WILL NEED IT IN THE NEXT STEP
Login via UI: https://localhost:9898
Login via CLI: loft login --insecure https://localhost:9898
!!! You must accept the untrusted certificate in your browser !!!
Follow this guide to add a valid certificate: https://loft.sh/docs/administration/ssl
#################################################################
Loft was successfully installed and can now be reached at: https://localhost:9898
Thanks for using loft!
You may re-run `loft start` at any time; it will automatically detect that Loft has already been deployed and just start port forwarding again.
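If you want to double-check the deployment with standard tooling, plain Helm and kubectl commands work as well. The namespace `loft` is the one used by the `helm install loft -n loft ...` command shown in the output above:

```shell
# List the Helm release that loft start created
helm list -n loft

# Check that the Loft pod has started successfully
kubectl get pods -n loft
```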
After Loft has been deployed and port forwarding has been started, open a second terminal window and run the following command to log in to your Loft instance:
loft login --insecure https://localhost:9898
The CLI will then open the browser, so you can log in with the admin password shown in the output of `loft start` (see step 2 above).
After you have logged in via the UI, you will also be logged in via the CLI because you started the login process via `loft login`.
3. Main Features
3.1 Spaces
Spaces in Loft are the equivalent of regular Kubernetes namespaces with the difference that users can create spaces without having to be cluster admin.
- Create Spaces
- Delete Spaces
- List Spaces
- Use Spaces
- Inspect Spaces
While users will be able to create spaces via the UI, it is much easier to create them via the CLI:
loft create space my-microservice-1
Because this command adds a kube-context to the local kube-config file for this space, the user will now be able to immediately use this namespace with any tool that is working with the current kube-context.
Here are just a few example commands of what a user could do with their space:
kubectl get pods # use regular kubectl (no --namespace flag needed)
helm install ... # install helm charts to the namespace
skaffold run # deploy something with skaffold
devspace dev # start development with DevSpace
Users will also be able to delete spaces but only the ones that they own:
loft delete space my-microservice-1
Although non-admin users will typically not be able to list all namespaces in a cluster, they can list spaces: spaces are "virtual" objects, and listing them shows only those namespaces the user has RBAC permission to create objects in.
loft list spaces
Users can have multiple spaces and switch between them. Try it:
loft create space my-microservice-2 # we are not working in this new namespace now
loft use space my-microservice-1 # switch back to the kube-context of the first namespace
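Because `loft create space` and `loft use space` only manipulate your local kube-config file, you can inspect the result with plain kubectl:

```shell
# Show all contexts; the entries added by Loft point at your spaces
kubectl config get-contexts

# Show which context (and therefore which space) is currently active
kubectl config current-context
```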
While users can use `kubectl` or any localhost UI (e.g. Lens) to explore what is running in their spaces, Loft also lets users take a quick look inside their spaces using the Loft UI.
3.2 Virtual Clusters
What are virtual clusters?
- Fully functional Kubernetes clusters (run with the certified Kubernetes distribution k3s)
- Run inside a single namespace of another Kubernetes cluster
- Very lightweight and start within a few seconds (much faster than regular EKS/GKE clusters)
- Much cheaper than "real" Kubernetes clusters (no extra cost for API server + sleep mode, see below)
How to work with virtual clusters?
Using virtual clusters is very similar to using spaces:
- Create Virtual Clusters
- Delete Virtual Clusters
- List Virtual Clusters
- Use Virtual Clusters
- Inspect Virtual Clusters
While users will be able to create Virtual Clusters via the UI, it is much easier to create them via the CLI:
loft create vcluster my-dev-stack-1
Because this command adds a kube-context to the local kube-config file for this virtual cluster, the user will now be able to use the virtual cluster with any tool that is working with the current kube-context.
Here are just a few example commands of what a user could do with this virtual cluster:
kubectl get namespaces # list namespaces of the virtual cluster with kubectl
kubectl create namespace backend # create a namespace `backend`
kubectl get pods -n kube-system # list pods in a system namespace of the virtual cluster
Especially looking at the last example command above, you can see that the user really is admin of this virtual cluster. Although the user only has very restricted access to a single namespace of the host cluster, the user is admin of the virtual cluster that runs inside this namespace of the host cluster.
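To convince yourself of this, you can run cluster-scoped operations inside the virtual cluster that would be denied in a plain space. The resource names below are just examples:

```shell
# Cluster-scoped permission check: inside the vcluster this prints "yes"
# even though the same check fails against the host cluster
kubectl auth can-i create clusterrole -A

# Create a cluster-wide RBAC object (example name) - impossible for a
# restricted namespace user, but allowed for the vcluster admin
kubectl create clusterrole example-viewer --verb=get,list --resource=pods
```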
Users will also be able to delete virtual clusters but only the ones that they own:
loft delete vcluster my-dev-stack-1
Users can also create multiple virtual clusters (e.g. for different microservices that they are working on). To get a list of all their virtual clusters, they can run:
loft list vclusters
Users can have multiple virtual clusters and switch between them. Try it:
loft create vcluster my-dev-stack-2 # we are not working in this new virtual cluster now
loft use vcluster my-dev-stack-1 # switch back to the kube-context of the first virtual cluster
While users can use `kubectl` or any localhost UI (e.g. Lens) to explore what is running in their virtual clusters, Loft also lets users take a quick look inside their virtual clusters using the Loft UI.
All virtual cluster inspection views in the Loft UI essentially use kubectl's `--all-namespaces` flag because virtual clusters tend to have far fewer namespaces and resources than regular clusters, so it is very fast to show all resources across all namespaces.
Why virtual clusters?
Sometimes, your engineers may need more than just namespaces, e.g. when:
- they need to deploy Helm charts that configure RBAC
- they deploy applications to multiple namespaces (e.g. database is supposed to run in a different namespace)
- they want to install something that needs to add Custom Resource Definitions
In such cases, users can create virtual clusters instead of spaces/namespaces. Learn more about virtual Kubernetes clusters.
3.3 Sleep Mode + Auto-Delete
Loft automatically tracks idle namespaces and lets you configure if they should either be put to sleep or be automatically deleted after a certain period of inactivity.
- Sleep
- Wake-up
- Using UI
- Automatic
- Automatic For All Spaces
Run this command to manually put a space to sleep:
loft sleep [SPACE_NAME] # optional flag: --cluster=[CLUSTER_NAME]
The default RBAC rules set up by Loft will not permit non-admin users to edit spaces/namespaces.
You can always wake up a space by running a kubectl command:
# IMPORTANT: Make sure you are in the context of this space (using the correct namespace)
kubectl get po
Instead of running a kubectl command, you can also run this command to manually wake up a space:
loft wakeup [SPACE_NAME] # optional flag: --cluster=[CLUSTER_NAME]
The default RBAC rules set up by Loft will not permit non-admin users to edit spaces/namespaces.
While triggering sleep mode manually is great, you can also configure that a certain namespace goes to sleep automatically after a certain period of inactivity.
While automating sleep mode for single spaces is great, you can also automate things even further and configure that all namespaces of a certain user or team go to sleep automatically after a certain period of inactivity. Additionally, you can configure that spaces will be deleted after a certain period of inactivity.
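As a sketch of what configuring automatic sleep mode can look like: admins can annotate a space's namespace. The annotation name `sleepmode.loft.sh/sleep-after` and its unit (seconds of inactivity) are assumptions based on the Loft docs at the time of writing, so verify them against the sleep mode documentation for your Loft version:

```shell
# Assumed annotation (verify against your Loft version's docs):
# put the space to sleep after 3600 seconds (1 hour) of inactivity.
# Must be run by an admin; default RBAC rules do not let regular
# users edit spaces/namespaces.
kubectl annotate namespace my-microservice-1 \
  sleepmode.loft.sh/sleep-after=3600
```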
To learn more about how sleep mode and auto-delete work, take a look at the docs page about sleep mode for namespaces and virtual clusters.
4. Give Your Team Access
The features you have seen in this quickstart guide are great, but the idea of Loft is not that you as an admin are going to use them yourself. Instead, the idea is to let your engineers use these features to get self-service access to Kubernetes. With Loft, you can add a pool of clusters and then give engineers access to these clusters.
In Loft, access to a cluster is defined by a cluster account. A cluster account is an object created in each connected cluster that references a user or team and defines which access and restrictions apply. Every one of these clusters can run in a different availability zone or even in a different cloud platform etc. So, you can connect, for example, EKS clusters from AWS and GKE clusters from Google Cloud and then give people access depending on what they need to work with.
Loft provides a lot of powerful team features that will make it easy for you to set up a Kubernetes self-service platform for your team:
- Integration with Authentication Providers (LDAP, SAML, GitHub, GitLab, Microsoft, Google, ...)
- Multi-Cluster Access Management & Secure Multi-Tenancy
- Automatic Tenant and Namespace Isolation via Security Templates
- Namespace-Based Kube-Context Management (as seen in the examples above)
To set up Loft for your team on top of a public or private cloud Kubernetes platform such as GKE, EKS, AKS, RKE, or DOKS, follow the Full Setup Guide.
If you have questions about Loft or need help setting it up for your use case, schedule a call with one of our Kubernetes experts to get a full demo of Loft and to discuss your use case.
5. Cleanup
If you installed Loft to a test cluster and you are now ready to deploy it to a different cluster for your team, use this guide to fully remove everything and uninstall Loft from your test cluster with just a few commands.