Without automation, launching a new product or improving an existing infrastructure is often an uphill task. Luckily, there are many automation tools available to make your project lifecycle more efficient.
This article explores two popular automation tools, Terraform and Kubernetes. You'll learn about each tool, the similarities and differences between them, and how together they make a powerful combination.
Terraform is an open source tool built and managed by HashiCorp. Terraform allows you to use code to define, provision, and maintain infrastructure, a framework known as infrastructure as code. Terraform enables you to develop, automate, and version any infrastructure—regardless of where it’s deployed—safely and efficiently.
You can plan and spin up hundreds of resources, including compute, networking, and storage nodes, through declarative configuration files in Terraform.
Terraform is the go-to tool when you need to repeatedly provision resources for deploying large systems. Terraform allows you to create a high-level infrastructure description that it then uses to generate and execute plans to match any desired end state, all from a single workflow. The tool can also quickly reprovision infrastructure configuration changes when required.
Kubernetes, sometimes abbreviated as K8s, is an industry-leading, open source container orchestration tool for running and maintaining production services and workloads. Originally developed at Google, Kubernetes was open sourced in 2014 and later donated to the Cloud Native Computing Foundation. Kubernetes manages containerized applications by keeping track of their states.
Kubernetes provides fine-grained control over application and container lifecycles, especially in dynamic environments. Tasks such as pod tracking, availability, and deployment are simplified by the use of controllers, which are control loops that monitor your Kubernetes cluster and make changes as needed to maintain the expected state.
Kubernetes introduces speed and visibility to the process of building, testing, and releasing/deploying your applications, and is the most popular and widely supported container orchestration tool. It keeps your container workloads running and scaled, and monitors performance to ensure that your application is highly available.
Kubernetes is also incredibly extensible. Developers can use Kubernetes APIs to integrate other platforms and apps, (such as VMware vRealize Operations, GitLab, and many others) to enhance performance, add capabilities, improve monitoring, and more. Kubernetes supports a wide range of workloads and application frameworks, allowing organizations to build and deploy stateless, stateful, and other data-related workloads with ease.
#Working with Terraform
With Terraform, you can define low-level resources like networks and servers, as well as high-level resources such as DNS and SaaS, in human-readable configuration files written in the HashiCorp Configuration Language (HCL). Any infrastructure created from these configuration files can also be versioned, reused, and shared.
For example, suppose you are using AWS as your cloud provider, and you would like to create multiple Elastic Compute Cloud (EC2) instances of a given type in a specific region. All you need to do is write a single configuration file that specifies the type and number of instances to be created.
To provision infrastructure with Terraform, you need to understand the scope of your project before writing any configuration file. Terraform then uses the configuration file and connects with your provider’s APIs to spin up the desired instances.
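As a sketch of the scenario above, a configuration file like the following would provision multiple EC2 instances; the region, AMI ID, and instance type shown here are placeholder values chosen for illustration, not ones prescribed by this article:

```hcl
# Hypothetical example: provision three EC2 instances of one type
# in a single region. The AMI ID and region are placeholders.
provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "web" {
  count         = 3                       # number of instances to create
  ami           = "ami-0abcdef1234567890" # placeholder AMI ID
  instance_type = "t3.micro"              # instance type for every server
}
```

Running `terraform apply` against a file like this asks the AWS provider to create all three instances in one workflow.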
Terraform follows a specific workflow while creating, updating, or destroying resources.
- Write: In this stage, you define any resource you wish to create using code, just as you would write any other project. A resource can be as simple as a virtual machine within a virtual private cloud. Below is an example of initializing a Terraform working directory during the write stage.
```shell
# Create repository
$ git init test-infra && cd test-infra
Initialized empty Git repository in /.../test-infra/.git/

# Write initial config
$ vim test.tf

# Initialize Terraform
$ terraform init
Initializing provider plugins...
# ...
Terraform has been successfully initialized!
```
- Plan: In this stage, Terraform generates an execution plan using the `terraform plan` command. The plan shows the resources that will be created, updated, or destroyed.
- Apply: When the `terraform apply` command is run, Terraform executes the actions described in the execution plan to create a reproducible infrastructure.
#Common Use Cases
You can use Terraform to perform many tasks, but these are some of the most common.
#Tracking Infrastructure State
Terraform keeps track of your deployed resources using a state file, which serves as a point of reference when changes are to be made in the resources. Using a state file, Terraform can determine what changes to your infrastructure are necessary to match any desired end state.
#Multi-Cloud Deployment
Terraform is a cloud-agnostic platform. You can use a configuration file written in HCL to manage infrastructure resources hosted in multiple cloud providers. It also handles cross-cloud dependencies.
Deploying resources across multiple cloud environments introduces robustness and fault tolerance.
#Managing Multi-Tier Applications
Multi-tier applications separate functionality into tiers, with each tier executing its own logic. Each tier is defined as a collection of resources, and Terraform automatically takes care of the dependencies required between each tier. To ensure consistency between the tiers, dependencies and plug-ins are needed, which can be challenging to install manually.
Terraform provisioning ensures that these dependencies are installed and implemented correctly, every time. For instance, Terraform confirms the availability of the database tier before provisioning the web servers and load balancer.
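One way to express that ordering is Terraform's `depends_on` meta-argument. The resource names and attribute values below are hypothetical placeholders, not configuration from this article:

```hcl
# Hypothetical two-tier layout: web servers wait for the database tier.
resource "aws_db_instance" "db" {
  allocated_storage = 10
  engine            = "mysql"
  instance_class    = "db.t3.micro"
  username          = "admin"
  password          = "change-me" # placeholder; use a secrets store in practice
}

resource "aws_instance" "web" {
  count         = 2
  ami           = "ami-0abcdef1234567890" # placeholder AMI ID
  instance_type = "t3.micro"

  # Explicit ordering: provision the database tier before the web tier.
  depends_on = [aws_db_instance.db]
}
```

Terraform also infers most dependencies implicitly when one resource references another's attributes; `depends_on` is for cases where no such reference exists.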
#Software-Defined Networking
Terraform can interact with Software-Defined Networks (SDNs) to automatically configure the network according to the needs of its applications. This lets you move from a ticket-based workflow to an automated one, reducing deployment times.
#Working with Kubernetes
Because Kubernetes orchestrates containerized applications, it offers a lot of flexibility and portability. As a developer, you can choose the operating system, container runtime, CI/CD tools, and cloud platform, among other things. It also gives you the choice of self-hosting or using a managed hosting platform.
With Kubernetes, you can provision a highly resilient infrastructure without any downtime thanks to features like automatic rollback, self-healing of containers, secrets management, platform independence, and load balancing.
Though Kubernetes is a complex system, it can be broken down into several basic components.
- Cluster: A group of worker machines (nodes) that run containerized applications.
- Node: A virtual or physical machine that can run one or more pods.
- Pod: The smallest unit of computing that can be managed by Kubernetes. A pod runs a single instance of a containerized application.
- Controller: A control loop that watches the state of a cluster or any Kubernetes resource and makes changes as needed to reach the desired state.
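As a minimal illustration of these components, the following Pod manifest runs a single container; the pod name, container name, and image are placeholders chosen for this example:

```yaml
# Minimal illustrative Pod manifest: one container serving HTTP.
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  labels:
    app: web
spec:
  containers:
    - name: web
      image: nginx:1.25   # placeholder container image
      ports:
        - containerPort: 80
```

Applying this with `kubectl apply -f pod.yaml` would schedule the pod onto a node in the cluster, where a controller then keeps watch over its state.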
#Common Use Cases
You can use Kubernetes for many common tasks. Some of the most frequent use cases are discussed in this section.
#Orchestrating Containers Across Several Hosts
Because it’s platform independent, Kubernetes can host your container apps across multiple physical machines or cloud instances. It also watches for failures and reschedules workloads to ensure high availability.
#Managing Compute Resources
It’s often less expensive to run a single dedicated Kubernetes cluster than it is to run several separate servers. Additionally, it’s easier to manage such a cluster than to manage different hosts running multiple servers.
#Running CI/CD Platforms
Kubernetes has become a major part of CI/CD methodologies, and is the preferred option for running CI/CD platforms such as Jenkins, Spinnaker, and Drone. As long as the CI/CD platforms can be packaged in a container, Kubernetes can run them.
#Storage Orchestration
Kubernetes supports dynamic storage volume provisioning. You can attach storage systems from any platform of your choice, including local storage, cloud storage, and network storage.
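For instance, a PersistentVolumeClaim like the following requests storage without naming a specific backend, letting the cluster provision a matching volume dynamically; the claim name and size are illustrative:

```yaml
# Illustrative claim: request 5Gi of storage from the cluster's
# default storage class, whatever backend that happens to be.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```

A pod can then mount this claim as a volume without knowing whether the storage is local, cloud, or network-backed.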
#Service Discovery and Load Balancing
Kubernetes exposes a container cluster service to other platforms through a cluster’s DNS endpoint. If network traffic to a cluster is high, Kubernetes can load balance and distribute the traffic to keep the deployed instances stable and available.
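A minimal Service manifest of this kind might look like the following; the service name, ports, and selector label are assumptions made for illustration:

```yaml
# Illustrative Service: expose pods labeled app=web and load
# balance incoming traffic across them.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web          # routes traffic to pods carrying this label
  ports:
    - port: 80        # port the service listens on
      targetPort: 8080 # port the pods' containers listen on
  type: LoadBalancer   # request an external load balancer where supported
```

Inside the cluster, other workloads can reach this service by its DNS name rather than by individual pod addresses.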
#Differences and Similarities of Terraform and Kubernetes
In this section, you’re going to learn about some of the similarities and differences between these two tools. It’s important to note that this isn’t an attempt to determine which tool is “better,” since they serve very different purposes. Kubernetes is a container tool, while Terraform is an infrastructure tool. The difference in application and implementation creates the opportunity for developers to leverage both tools, resulting in a highly resilient system.
#Leveraging both Terraform and Kubernetes
Provisioning Kubernetes clusters is a complex task, especially when multiple cloud providers are involved. A highly cloud-independent tool like Terraform comes in handy in such situations.
Although you could use `kubectl` for full lifecycle management of your Kubernetes resources as described in YAML files, orchestration with Terraform offers some benefits.
Through a declarative tool such as Terraform, application configurations, deployment pipelines, and even scaling options in Kubernetes are all made easy, regardless of the platforms involved. Since Kubernetes is also highly declarative, Terraform and Kubernetes really complement each other.
Terraform can be used to manage Kubernetes infrastructure, helping you to orchestrate your applications and run them at scale. This alleviates some of the challenges of running Kubernetes, including problems like detecting configuration drift, that is, planned or unplanned changes.
For example, if you change a resource outside of its YAML file, such as deleting it manually, Kubernetes will not warn you. To address this, you can integrate Terraform to help manage your Kubernetes resources. The Terraform workflow, specifically in the plan and apply stages, is able to detect configuration changes and gives you a chance to review the changes before applying them to your next orchestration, something `kubectl` isn't able to do.
Defining a Kubernetes resource through Terraform requires a provider to manage the Kubernetes APIs and execute resource configurations. A common approach is to use the Terraform Kubernetes provider. With Terraform, you can orchestrate clusters on many popular services, including Amazon Web Services, Azure, and Google Cloud.
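As a sketch, the following configuration uses the Terraform Kubernetes provider to manage a namespace; the kubeconfig path and namespace name are assumptions for this example:

```hcl
# Sketch: manage a Kubernetes namespace through the Terraform
# Kubernetes provider. The kubeconfig path is a common default,
# not a value prescribed by this article.
provider "kubernetes" {
  config_path = "~/.kube/config"
}

resource "kubernetes_namespace" "example" {
  metadata {
    name = "demo" # placeholder namespace name
  }
}
```

Once this resource is in Terraform state, `terraform plan` will flag any drift between the namespace's actual state and the configuration.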
Though they’re used for very different tasks, there are some major similarities between the two tools. The following chart is a simplified overview of these similarities.
| Tool | License | Cloud Support | Configuration Style | Scalability | CI/CD |
|------|---------|---------------|---------------------|-------------|-------|
| Terraform | Open source | Supports multiple cloud providers | Declarative commands (HCL) | Highly scalable | Supports CI/CD pipelines |
| Kubernetes | Open source | Supports multiple cloud providers | Declarative commands (YAML or JSON) | Highly scalable | Supports CI/CD pipelines |
Neither Terraform nor Kubernetes needs additional agents installed in order to function correctly. Platforms like Google Cloud, AWS, and Azure take care of installing and managing any agent software on the resources you provision.
Terraform templates are declarative HCL files that define infrastructure as a desired end state. Likewise, Kubernetes templates are written as declarative YAML files, and the end product is a declarative deployment object.
These declarative templates make your Terraform and Kubernetes codebases small, reusable, and easy to understand, as they capture the full state of an infrastructure workload while abstracting logic and the sequence of events.
Both tools integrate seamlessly with version control systems such as GitHub, providing a single viewpoint to see the status and impact of any change made in the configurations.
You can run Kubernetes and Terraform on on-premises bare metal, OpenStack, or even public clouds. Both tools leverage APIs from multiple environments, avoiding vendor lock-in issues.
Terraform and Kubernetes store state and cluster data about your infrastructure from the configurations you define. You can tie resources to your configurations from the state data and create, update, or push changes to manage your infrastructure.
Terraform stores a representation of your infrastructure objects, as well as their interdependencies and bindings, in a state file named [terraform.tfstate](https://www.terraform.io/language/state). Kubernetes, on the other hand, uses `kubectl` and the Kubernetes control plane to actively manage every object's state, matching it to the preconfigured desired end state.
Kubernetes has a number of built-in tools, such as `kubectl`, which support different ways of managing Kubernetes cluster objects. However, it can be challenging for organizations to manage these objects because of the level of expertise Kubernetes requires. Managed Kubernetes options, such as third-party tools like Kubestack, can help meet this challenge.
While it’s often recommended to use Kubernetes-native tools to manage cluster objects through API calls, management of the same objects through Terraform tools can offer advantages. Some developers prefer using Terraform providers because of the single provisioning language and unified workflow that Terraform provides. This allows users to manage cluster infrastructure and services from a single point.
On the other hand, some developers feel that providers add an unnecessary layer of complexity. Even though providers integrate Kubernetes resources into Terraform, maintaining the Kubernetes resources in Terraform configuration language can be tricky due to the need to translate the schema of each Kubernetes resource to a Terraform schema. If your team has the time and technical knowledge needed for Kubernetes cluster management, using native Kubernetes tools might be the right choice for you. If you’re more interested in having your Kubernetes clusters managed with less overhead, you might do better to consider using a Terraform provider for Kubernetes.
For all their similarities, the tools also have some major differences. The chart below is a simplified summary of these differences.
| Tool | Configuration Language | Workflow | Type | Terminal Tool |
|------|------------------------|----------|------|---------------|
| Terraform | HCL | Simple | Infrastructure as code tool | Terraform CLI |
| Kubernetes | YAML or JSON | Complex | Container orchestration tool | kubectl |
Terraform focuses on codifying resources that your applications rely on, while Kubernetes is used for automating container tasks.
Terraform defines declarative objects through the HashiCorp Configuration Language, or HCL. With an HCL file, you can create resources that will run across multiple cloud platforms. Kubernetes, on the other hand, defines declarative objects as YAML or JSON files that describe how Kubernetes objects should be defined and managed. YAML is preferred to JSON for writing configuration files because it is more user-friendly, but the two can be used interchangeably for this purpose.
The Terraform CLI is the command line interface to Terraform, and supports subcommands and switches such as `terraform plan` and `terraform apply`. Terraform uses the CLI to run commands that execute and manage declarative configurations to create any desired resources.
Kubernetes has its own command-line tool for manipulating container clusters and Kubernetes resources: `kubectl`. You can use `kubectl` for resource creation tasks such as assigning memory and CPU to containers, creating nodes, and deploying applications.
In this article, you have learned about two tools, Terraform and Kubernetes, and examined the similarities and differences between the tools. You’ve seen that Terraform uses a single framework to automate resources declaratively in various cloud platforms, while Kubernetes steps in to orchestrate resource management, deployments, and load balancing for your container environments. You’ve also learned how these two automation tools can work harmoniously, allowing you to use Terraform for creating and deploying Kubernetes resources.
Once you have these two tools coordinated in a project, you’ll be better able to automate infrastructure, application deployments, monitoring, and more.