We are excited to announce that Loft v3 has shipped. Loft v3 brings self-service virtual clusters at scale and deeper GitOps integrations, including new projects, an Argo CD integration, an overhauled UI, and many other improvements. Let’s take a look at some of the updates and new features in this release.
# Projects
Projects have been introduced as a way to manage access to virtual clusters, spaces, templates, and project secrets. With projects, you can define user and team access to each resource type. Projects are the highest-level organizational unit in Loft. In their simplest form, projects can be thought of as collections of environments (spaces and virtual clusters); however, they also play an important role in enforcing role-based access and quotas within Loft.
Users and teams are added to a project as members. Members are assigned roles, which are granted within each of the project’s allowed clusters. Only users and teams that are members of the project (and, of course, global admins of the entire Loft instance) have access to the project’s environments and resources.
Roles can be used to define permissions within a project for members. By default, there are three new roles provided: Project Admin, Project User, and Project Viewer. New roles can be created, or you can use the provided roles and edit them as needed.
Projects better organize resources and reduce the time spent managing users, teams, and permissions. With the updated API reference and the ability to view the YAML behind wizard-driven configurations, managing projects and the resources within them can be automated.
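To make this concrete, here is a rough sketch of what a Project manifest could look like. The field names below are illustrative assumptions based on the pattern described above, not the authoritative schema; see the API reference for the exact spec:

```yaml
# Illustrative sketch only — field names are assumptions; consult the Loft API reference.
apiVersion: management.loft.sh/v1
kind: Project
metadata:
  name: backend
spec:
  displayName: Backend Team
  # Users and teams become members, each bound to a project role.
  members:
    - kind: Team
      name: backend-devs
      clusterRole: loft-management-project-user   # "Project User" (assumed identifier)
    - kind: User
      name: alice
      clusterRole: loft-management-project-admin  # "Project Admin" (assumed identifier)
  # Clusters this project's spaces and virtual clusters may be created in.
  allowedClusters:
    - name: dev-cluster
```

Because this is a plain Kubernetes resource, it can live in Git and be applied by your GitOps tooling like anything else.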
# Argo CD Integration
Connecting virtual clusters to Argo CD has never been easier. With Loft v3, you can add your Argo CD instance to a project, enable Argo Login through Loft SSO, and add virtual clusters as available cluster targets in Argo CD with a single click.
The Argo CD integration can be enabled on a per-project basis. Select where Argo CD is installed, update the namespace where it is running, and then save changes. That’s all it takes to enable the Argo CD integration.
Once the integration is enabled for a project, virtual clusters can be selectively imported into Argo CD. Importing a virtual cluster registers it as a valid destination within Argo CD, so you can deploy applications to it.
Want to make login easier for Argo CD? Using Loft as an SSO provider for Argo CD is as simple as enabling it within the project settings. Once enabled, set the Argo CD URL and select the “Assigned Roles” for your project members.
After enabling the Argo CD integration, the “Open Argo” link will appear in the UI at the bottom of the project menu.
When you click the “Open Argo” link, it opens Argo CD in a new tab. If Loft-based SSO is enabled, there are now two options for logging in: “Log in via Loft” or the default Argo CD username-and-password sign-in form.
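Under the hood, this is just more project configuration. As a hedged sketch (the key names here are assumptions, not the exact Loft v3 schema — check the API reference), the integration might be declared on the Project like this:

```yaml
# Illustrative sketch — key names are assumptions; verify against the API reference.
apiVersion: management.loft.sh/v1
kind: Project
metadata:
  name: backend
spec:
  argoCD:
    enabled: true
    cluster: dev-cluster      # the connected cluster where Argo CD runs
    namespace: argocd         # the namespace Argo CD is installed in
    sso:
      enabled: true
      host: https://argocd.example.com   # your Argo CD URL
      assignedRoles:
        - role:admin          # Argo CD RBAC role granted to project members (assumed)
```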
# GitOps Friendly Deployment
If you are already using GitOps for everything, then you are in luck: the Loft control plane can be managed and deployed using GitOps. Interested in managing the resources within Loft with the same tooling? You can do that too. One of the core tenets of Loft is that all Loft resources are just “normal” Kubernetes resources. This means you can manage any Loft object the same way you’d manage any other Kubernetes resource in your GitOps workflow.
In Loft v2, a few pieces made managing Loft fully via GitOps harder than our users would have liked:
- License keys had to be applied manually via the UI rather than via GitOps. In Loft v3, we fixed this without forcing you to add license keys directly as Helm values, which would have been a security concern.
- Deploying Loft from scratch via GitOps was relatively easy, but deploying all Loft-managed objects such as virtual clusters and spaces wasn’t trivial in v2, because you had to compile all the resources yourself and also talk to multiple Kubernetes clusters. With v3, we have made creating Loft manifests easier by providing examples in our API reference docs, and we introduced a central Kubernetes API under the hood that you can use to manage any resource in any connected cluster. Additionally, our UI now has a YAML editor embedded, so you can see changes to the YAML side-by-side as you edit resources in Loft. This makes constructing manifests easy, even without applying anything to the cluster. More about this below in our UI revamp!
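For example, the Loft control plane itself can be deployed from its Helm chart through a standard Argo CD Application. The manifest below is a minimal sketch; pin the chart version you have actually tested:

```yaml
# Deploys the Loft control plane via Argo CD from the public Helm chart.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: loft
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://charts.loft.sh   # Loft's Helm repository
    chart: loft
    targetRevision: 3.0.0             # pin a tested chart version
  destination:
    server: https://kubernetes.default.svc
    namespace: loft
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
```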
# Version Control and Syncing Templates
Templates for virtual clusters, spaces, and apps can now be versioned. This lets you publish a new version while retaining the information for existing instances created from previous template versions. Use templates to create different app versions, then upgrade running applications to the new template version.
Let’s take a look at how this feature is implemented for virtual clusters. Virtual clusters that have been deployed with a template will show a warning that a newer template version is available. Virtual clusters can be upgraded to the new template version or kept on their current version.
Provide options to your users with parameters. Parameters add controlled flexibility for the configuration options certain applications need. Users are prompted to select an option when deploying from a virtual cluster template; the available values are defined in the template and restricted to the provided options.
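Putting versioning and parameters together, a template manifest might look roughly like the following. The field names are a sketch of the pattern described above, not the authoritative schema — consult the API reference:

```yaml
# Illustrative sketch — field names are assumptions; check the Loft API reference.
apiVersion: management.loft.sh/v1
kind: VirtualClusterTemplate
metadata:
  name: dev-vcluster
spec:
  displayName: Dev Virtual Cluster
  versions:
    - version: 1.0.0
      template:
        helmRelease:
          values: |
            sync:
              ingresses:
                enabled: true
      # Options presented to the user at deploy time; values are restricted
      # to the list defined here.
      parameters:
        - variable: k8sVersion
          label: Kubernetes Version
          options: ["1.24", "1.25", "1.26"]
          defaultValue: "1.25"
```

Publishing a new entry under `versions` leaves existing instances on their current version until they are explicitly upgraded.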
# UI Revamp
A new navigation menu has been added to the UI. With this update, a few menu items have been moved to make them more accessible and to group related features together.
Navigation is now broken out into Projects, Templates, and Clusters. Some features have moved to the bottom of the navigation menu: User Management, Global Secrets, Audit Logs, and Administration.
# Resource Management with YAML
Create and edit resources within the Loft UI using the built-in YAML Editor. Now resources, such as virtual cluster configurations, will show the YAML configuration while you are creating or editing. This can be useful if you want to construct a Kubernetes manifest for a Loft managed resource (e.g. VirtualClusterTemplate) to create the resource outside of the UI or use the configuration to create resources with automation, e.g. to manage virtual cluster templates via GitOps.
# Custom Branding
Custom branding has been added as a configurable option within the Loft config. Configurable settings include a Terms of Service, a custom logo, and the primary, sidebar, and accent colors. Branding can now be more in line with your company’s brand and other tooling.
# Other Minor Updates
# Multiple SSO Providers
Multiple SSO providers can now be used. This update also allows for multiple providers of the same type. Now you can configure multiple SSO providers to better support users who may have different providers based on teams or geographical regions.
# Improved Quota Control
Quotas now provide expanded options, allowing for better control within projects. Configure quotas at the project level to restrict the resources available within the project. Take it a step further and configure quotas per user or team to further split up the resource pool.
Quotas can be configured across multiple clusters. With this update you can expose multiple clusters to a team, within a project, while still enforcing quotas across all of them.
We have introduced the ability to track active and inactive virtual clusters. Use sleep mode to reduce the active cluster count, allowing for more total clusters. For example, set a quota of 10 for virtualclusterinstances.active while virtualclusterinstances is set to 20, allowing 20 total virtual clusters while restricting running ones to 10.
The ability to set quotas for Kubernetes resources (https://kubernetes.io/docs/concepts/policy/resource-quotas/) has also been added. Each project can configure quotas based on CPU, Memory, PersistentVolumeClaims, and more.
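Assuming a quota block on the Project (the key names below are sketched from the example above — verify them against the API reference), the setup described could look like:

```yaml
# Illustrative sketch — quota keys are assumptions based on the example above.
spec:
  quotas:
    project:
      virtualclusterinstances: "20"          # at most 20 virtual clusters total
      virtualclusterinstances.active: "10"   # at most 10 awake at any time
      requests.cpu: "40"                     # Kubernetes resource quotas also apply
      requests.memory: 80Gi
    user:
      virtualclusterinstances: "5"           # per-user slice of the project pool
```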
# Central HostPath Mapper
While it is possible to install a common logging solution such as Fluentd or the ELK stack on the host cluster and then use it inside the virtual cluster, there are use cases where you would want to install it directly in the virtual cluster. These solutions mount the node’s host paths directly to access pod and container logs, and they rely on the filenames being in a certain format in order to map the logs to the appropriate pods. Inside a vcluster, however, this would not work, as the vcluster rewrites pod names to a vcluster-specific format when it syncs the pods down to the host cluster.
The hostpath mapper was developed to support this: it resolves the correct virtual pod and container names to their physical counterparts via symbolic links. It can be enabled per vcluster and is installed as a DaemonSet. This works well for a smaller cluster with fewer nodes and virtual clusters. However, as the number of nodes and virtual clusters grows, the number of pods required just for the hostpath mapper grows with it, since there is one DaemonSet, and therefore one pod per node, for every vcluster.
With the Central Hostpath Mapper, this is optimized: a single Central Hostpath Mapper DaemonSet installed on the host cluster manages the hostpath remapping for all the vclusters created on that cluster.
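As a sketch (treat the key name as an assumption and check the vcluster docs for your version), the per-vcluster mapper is toggled through the virtual cluster’s Helm values:

```yaml
# vcluster Helm values — enables the per-vcluster hostpath mapper DaemonSet.
# Key name is an assumption; the central variant is enabled on the Loft side instead.
hostpathMapper:
  enabled: true
```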
# Complete API Reference
Looking to automate more of your configuration or the creation of different resources in Loft? An updated and expanded API reference has been added to our docs. Automate the creation of new projects, virtual clusters, spaces, templates, apps, and more.
With all of these great new features, there has never been a better time to try Loft. If you want to get started, head over to GitHub or check out our documentation. Need help? Join us on Slack.