Making Self-Service Clusters Ready for DevOps Adoption

Jason English

Kubernetes Platform Series

  • Part 1: Kubernetes Self-service
  • Part 2: Kubernetes Multi-tenancy
  • Part 3: Kubernetes Cost Optimization with Virtual Clusters
History is littered with cautionary tales of software delivery tools that were technically ahead of their time, yet ultimately unsuccessful because of a lack of end-user adoption.

    In the past, the success of developer tooling vendors depended upon the rise and fall of the major competitive platforms around them. An upstart vendor could still break through to grab market share from dominant players in a space by delivering a superior user experience (or UX) and partnering with a leader, until such time as they were acquired.

A great UX generally includes an intuitive UI design based on human factors, which is especially important in consumer-facing applications. Human factors still matter in software development tooling; however, the UX focus there is on whether the tools readily deliver value to the organization by empowering developers to efficiently deliver better software.

Kubernetes (or k8s) arose from the open source foundations of Linux, containerization, and a project contributed by Google. A global community of contributors turned the enterprise space inside out by abstracting away the details of deploying and managing infrastructure as code.

    Finally, development and operations teams could freely download non-proprietary tooling and orchestrate highly scalable cloud native software architecture. So what was holding early K8s adopters back from widespread use in their DevOps lifecycles?

    The challenge: empowering developers

    A core tenet of the DevOps movement is self-service automation. Key stakeholders should be fully empowered to collaborate freely with access to the tools and resources they need.

Instead of provisioning through the approval process of an IT administrative control board, DevOps encourages the establishment of an agile platform team (in smaller companies, this may be one platform manager). The platform team should provide developers with a self-service stack of approved on-demand tooling and environments, without requiring an exhaustive procurement process or ITIL review cycles.

    At first glance, Kubernetes, with its declarative abstraction of infrastructure, seems like a perfect fit for orchestrating these environments. But much like an early sci-fi spaceship where wires are left hanging behind the lights of control panels, many specifics of integration, data movement, networking and security were intentionally left up to the open source community to build out, rather than locking in design assumptions in these key areas.
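To make that declarative model concrete, here is a minimal sketch of the kind of manifest a developer hands to a cluster. All names, namespaces and images below are hypothetical placeholders; the point is that the file declares a desired state that Kubernetes continuously reconciles against.

```yaml
# Minimal illustrative Deployment: declares the desired state (three replicas
# of a containerized service) and lets Kubernetes reconcile the cluster to
# match it. All names and images are hypothetical placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-api
  namespace: team-a-dev
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example-api
  template:
    metadata:
      labels:
        app: example-api
    spec:
      containers:
        - name: api
          image: registry.example.com/example-api:1.0.0
          ports:
            - containerPort: 8080
```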

Because the creation and configuration of Kubernetes clusters comes with a unique set of difficulties, the platform team may try to reduce rework by offering a one-size-fits-all approach. That approach may fall short for some development teams, while exceeding the needs of others through excess allocation and cloud cost.

    You can easily tell if an organization’s DevOps initiative is off track if it simply shifts the provisioning bottleneck from IT to a platform team that is backlogged and struggling to deploy k8s clusters for the right people at the correct specifications.

    Handing over the keys

The ability to sidestep the limitations of physical networks and IP addressing is the secret weapon of Kubernetes. With configuration defined as code, teams can request namespaces and clusters that truly fit the semantics and dimensions of the application.
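As a rough sketch of what "namespaces as code" can look like in practice, a team's namespace and its ownership metadata can live in version control alongside the application. The names and labels below are illustrative assumptions, not a prescribed convention.

```yaml
# Illustrative Namespace definition: the namespace and its ownership labels
# are declared in version-controlled configuration rather than requested
# through a ticket. Names and label values are hypothetical.
apiVersion: v1
kind: Namespace
metadata:
  name: payments-dev
  labels:
    team: payments
    environment: dev
    cost-center: "1234"
```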

    The inherent flexibility of k8s produces an additional set of concerns around role-based access controls (RBAC) that must be solved in order to scale without undue risk.

In today’s cloudy and distributed ecosystem, engineering organizations are composed differently from the siloed Dev and Ops teams of traditional IT. Various teams may need access to certain clusters, or pods within them, as part of their development or operational duties on specific projects.

    Even with automated provisioning, a request would by default generate a cluster with one ‘front door’ key for an administrator, who may share this key among project team members. Permissioned individuals can step on each other’s work in the environment, inadvertently break the cluster, or even allow their credentials to get exposed to the outside world.

To accelerate delivery without risk, least-privilege rights should be built into the provisioning system by policy and should leverage the company’s single sign-on (SSO) backend for resource access across the entire domain, rather than being doled out manually by an admin.

In a self-service solution, multiple people can get their own keys with access to specific clusters and pods, or grant them to other team members. These permissions can lean on the organization’s authorization tools of choice for access control, without requiring admins to write custom policies to prevent inadvertent conflicts.
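A minimal sketch of what least-privilege, SSO-backed access can look like in plain Kubernetes RBAC, assuming the cluster's OIDC integration asserts a group named payments-devs (the group, namespace and role choice are illustrative, not a recommendation for any particular product):

```yaml
# Illustrative RoleBinding: members of the SSO/OIDC-asserted group
# "payments-devs" receive edit-level rights in their own namespace only,
# rather than sharing one cluster-wide admin credential. Names are hypothetical.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: payments-devs-edit
  namespace: payments-dev
subjects:
  - kind: Group
    name: payments-devs              # group claim asserted by the SSO/OIDC provider
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit                         # built-in role; grants no cluster-scoped rights
  apiGroup: rbac.authorization.k8s.io
```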

    A self-service Kubernetes storefront

    We already know the cost of not getting self-service right. Unsatisfied developers will sneak around procurement to provision their own rogue clusters, creating costly cloud sprawl and lots of lost and forgotten systems with possible vulnerabilities.

    As consumers, we’re acclimated to using e-commerce websites and app stores on our personal devices. At work, we can use a credit card to buy apps, plugins and tooling from marketplaces provided by a SaaS vendor or public cloud.

    The storefront model offers a good paradigm for self-service cluster provisioning. One vendor, Loft Labs, offers a Kubernetes control plane built upon the open source DevSpace tool for standing up stacks. An intuitive interface allows domain-level administrators to navigate automated deployments and track usage.

    Kubernetes self-service clusters with Loft

    More importantly, developers can use their own filtered view of Loft Labs as a storefront for provisioning all available and approved K8s cluster images into new or existing namespaces. Or they can make the provisioning requests via a CLI and drill down into each cluster’s details with the kubectl prompt.
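For the CLI-minded, drilling into a provisioned environment follows the familiar kubectl workflow. The namespace and pod names below are hypothetical placeholders:

```shell
# Inspect a single pod in a self-service namespace with standard kubectl.
# Namespace and pod names are hypothetical placeholders.
kubectl get pods -n payments-dev
kubectl describe pod example-api-6d4c7b9f8d-x2k4p -n payments-dev
kubectl logs example-api-6d4c7b9f8d-x2k4p -n payments-dev --tail=100
```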

    The system provides guardrails for developers to provision Kubernetes clusters and namespaces in the mode they prefer, without consuming excess resources or making configuration mistakes.
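One common form such guardrails can take (a sketch, with arbitrary example limits) is a per-namespace ResourceQuota that caps what any single self-service environment can consume:

```yaml
# Illustrative ResourceQuota: caps aggregate CPU, memory and pod count in a
# self-service namespace so a mis-sized request cannot exhaust the cluster.
# The limits shown are arbitrary examples.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-guardrails
  namespace: payments-dev
spec:
  hard:
    requests.cpu: "8"
    requests.memory: 16Gi
    limits.cpu: "16"
    limits.memory: 32Gi
    pods: "50"
```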

    The Intellyx Take

    Quite a few vendors are already offering comprehensive ‘Kubernetes-as-a-Service’ management platforms that gloss over much of the complexity of provisioning and access to clusters, when what is really needed is transparency and portability.

Engineers will avoid waiting on procurement boards, and they hate writing repetitive commands, whether that means launching 100 pods at a time for autoscaling or bringing them down when they are no longer required. But they still want to address a single pod directly with kubectl, look at that pod’s logs and analyze what is going on.

    The platform team’s holy grail is to provide a self-service Kubernetes storefront that works with the company’s authorization regimes to entitle the right users and allow project management, tracking and auditing, while giving experienced engineers the engineering interfaces they need.

    Next up in this series, we’ll be covering the challenges of multi-tenancy and cost control!


    © 2022, Intellyx, LLC. Intellyx is solely responsible for the content of this article. At the time of writing, Loft Labs is an Intellyx customer. Image sources: Maps, Unsplash. Screenshot, Loft Labs.

