Kubernetes Deployments: A Definitive Guide

Daniel Olaogun

When you dive into the world of Kubernetes, you'll quickly encounter "deployments." At their core, Kubernetes deployments offer a way to manage and maintain your application's desired state. Think of it as setting a rule, such as "I always want three instances of my app running." Deployments ensure that the rule is upheld, even when unexpected hitches occur.

Consider a scenario where you have an application running on three servers. If one server fails, you'd need to manually restart the application on a different server. That's time-consuming and inefficient. Kubernetes deployments automate this process. When a failure is detected, the system promptly starts your application on another server. It's an automated safety net, ensuring consistent application availability.

By the end of this article, you'll understand:

  • The role and significance of Kubernetes deployments
  • Different deployment strategies and when to use them
  • Best practices and common challenges with Kubernetes deployments
  • The relationship between deployments and other Kubernetes resources
  • Effective methods to scale and oversee your deployments

    What Are Kubernetes Deployments?

    A Kubernetes deployment acts as a manager for your application pods. It ensures that the pods run the way you've specified. When you set up a deployment, you're giving Kubernetes a model of how you want your app to look. This model includes details like which container image to use, how many pod replicas should be running, and other configuration specifics.

    Deployments are proactive. If a node or a pod goes down, the deployment notices and corrects the discrepancy by creating a new pod, ensuring that the application remains operational and matches the defined state.

    The Relationship between Deployments, Pods, and ReplicaSets

    To visualize the connection between deployments, pods, and ReplicaSets, consider deployments as the top-level managers. Under them are ReplicaSets, responsible for maintaining the correct number of pod replicas. If a pod fails, the ReplicaSet ensures another one is spun up. The pods are the actual workers running your application instances.

    [Diagram: rough architecture of Kubernetes deployments]

    Simply put, deployments look after ReplicaSets, and ReplicaSets oversee pods.

    Understanding the Concept of Declarative Configuration in Deployments

    Kubernetes favors a "tell me what you want, and I'll handle it" approach. You specify your desired outcome, and Kubernetes works out the steps to achieve that. This approach is known as "declarative configuration."

    For deployments, this might look something like:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: app-deployment
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: sample-app
      template:
        metadata:
          labels:
            app: sample-app
        spec:
          containers:
          - name: sample-container
            image: sample-app-image:v1
    

    Here, you're stating your desire: "I want three replicas of a pod running my app's image." Kubernetes takes this declaration and works behind the scenes to make it happen. You don't need to manage the individual steps; the system does it for you.
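
    In practice, you hand this manifest to the cluster with kubectl and let the deployment controller reconcile it. The following is a minimal sketch, assuming the manifest above is saved as app-deployment.yaml:

    kubectl apply -f app-deployment.yaml     # create or update the deployment
    kubectl get deployment app-deployment    # compare desired and ready replica counts
    kubectl get pods -l app=sample-app       # list the pods the deployment manages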

    Types of Kubernetes Deployment Strategies

    The following sections cover the four different types of Kubernetes deployment strategies:

  • Recreate strategy
  • Rolling update strategy
  • Blue/green deployment strategy
  • Canary deployment strategy

    Recreate Strategy

    The recreate strategy is straightforward. When you want to deploy a new version of your application, Kubernetes terminates the existing pods and creates new ones from the updated container image. During this period, your application will experience downtime.

    Imagine you have an e-commerce website. If you use the recreate strategy to update your website to a new version, your website will go down temporarily. When the new version is up and running, users can access the updated site.

    The following code is an example of how you'd implement this strategy:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: ecommerce-website
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: ecommerce-website
      strategy:
        type: Recreate
      template:
        metadata:
          labels:
            app: ecommerce-website
        spec:
          containers:
          - name: ecommerce-container
            image: ecommerce-image:v2
    

    Rolling Update Strategy

    The rolling update strategy minimizes downtime. Instead of taking down all instances of your application, Kubernetes gradually replaces old pods with new ones. At any given time, both old and new versions of your app might be running simultaneously.

    Consider the same e-commerce website. With a rolling update, as users browse products, some might see the old version of a product page, while others see the new version. The transition is smooth, and there's no noticeable downtime.

    The following is an example of this strategy:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: ecommerce-website
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: ecommerce-website
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 1
          maxSurge: 1
      template:
        metadata:
          labels:
            app: ecommerce-website
        spec:
          containers:
          - name: ecommerce-container
            image: ecommerce-image:v2
    

    In the rolling update, the fields maxUnavailable and maxSurge are crucial for understanding how the process works.

    maxUnavailable sets the maximum number of unavailable pods during an update. For example, setting it to 1 means only one pod can be down at a time during the update.

    maxSurge specifies the maximum number of extra pods that can be created during the update over the desired number of pods. If set to 1, Kubernetes can create one additional pod during the update.

    These settings provide precise control over application availability during updates, balancing speed and stability as needed. They mitigate deployment risks. For even more control, strategies like blue/green deployments can be used.
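
    With three replicas, maxSurge: 1, and maxUnavailable: 1, a rollout never runs more than four pods in total and never drops below two available pods. You can follow or pause a rollout from the command line; for example:

    kubectl rollout status deployment/ecommerce-website   # watch the rollout until it completes
    kubectl rollout pause deployment/ecommerce-website    # halt the rollout for investigation
    kubectl rollout resume deployment/ecommerce-website   # continue where it left off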

    Blue/Green Deployment Strategy

    The blue/green deployment strategy involves running two separate environments: "blue" for the current version of the application and "green" for the new version. Initially, all user traffic is directed to the "blue" environment. After testing the "green" environment thoroughly, traffic is switched, directing users to the new version without any downtime.

    Suppose you have a music streaming service. The "blue" environment runs the version users are currently enjoying. You develop a new feature—perhaps an AI-based music recommendation system—and deploy it to the "green" environment. After testing it extensively, you switch user traffic to "green," instantly giving everyone access to the new feature.

    The following code is an example of this strategy:

    # Blue deployment (current version)
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: music-service-blue
    spec:
      replicas: 3
      selector:
        matchLabels:
          version: blue
      template:
        metadata:
          labels:
            version: blue
        spec:
          containers:
          - name: music-service-container
            image: music-service:v1
    ---
    # Blue service (current version)
    apiVersion: v1
    kind: Service
    metadata:
      name: music-service-blue
    spec:
      selector:
        version: blue
      ports:
      - port: 80          # illustrative service port
        targetPort: 8080  # illustrative container port
    ---
    # Green deployment (new version)
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: music-service-green
    spec:
      replicas: 3
      selector:
        matchLabels:
          version: green
      template:
        metadata:
          labels:
            version: green
        spec:
          containers:
          - name: music-service-container
            image: music-service:v2
    ---
    # Green service (new version with no initial traffic)
    apiVersion: v1
    kind: Service
    metadata:
      name: music-service-green
    spec:
      selector:
        version: green
      ports:
      - port: 80          # illustrative service port
        targetPort: 8080  # illustrative container port
    

    When you're ready to switch traffic:

    # Update the main service selector to route traffic to green
    apiVersion: v1
    kind: Service
    metadata:
      name: music-service-main
    spec:
      selector:
        version: green
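
    Instead of reapplying the Service manifest, you can also flip the selector imperatively. The following is a minimal sketch, assuming a music-service-main Service that fronts user traffic and currently selects version: blue:

    kubectl patch service music-service-main \
      -p '{"spec":{"selector":{"version":"green"}}}'   # point the main service at the green pods
    kubectl get endpoints music-service-main           # confirm the endpoints now belong to the green pods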
    

    Canary Deployment Strategy

    Canary deployments, named after the "canary in a coal mine" concept, involve initially releasing a new software version to a subset of users before a full-scale rollout. In contrast to blue/green deployments, where the entire environment is swapped out at once, canary deployments take a more gradual approach. You first release the new version to a small group of users (the "canaries") and then gradually scale up based on feedback and performance data. This incremental release strategy offers fine-grained control, enabling continuous assessment of how the new release impacts system metrics and user behavior.

    Imagine you have a social media platform. You've just developed a new, experimental feature. Instead of rolling it out to all users, you release it to just 5 percent (the "canaries"). If they engage positively and no major issues arise, you increase the rollout percentage until all users have the new feature.

    The following code sample uses weighted routing with the NGINX ingress controller (a service mesh like Istio can achieve similar traffic splitting):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: social-media-ingress
      annotations:
        nginx.ingress.kubernetes.io/canary: "true"
        nginx.ingress.kubernetes.io/canary-weight: "5"
    spec:
      rules:
      - host: socialmedia.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: social-media-canary-service
                port:
                  number: 80
    

    The weighted routing in this sample controls the percentage of user traffic directed to the new, "canary" version of your application. This granularity is crucial for canary deployments because it lets you test the new version on a subset of your user base before rolling it out to everyone. Ingress controllers offer a convenient way to implement this kind of traffic splitting.
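
    As confidence grows, you widen the rollout by raising the canary weight on the same Ingress. The following is a minimal sketch, assuming the NGINX ingress controller annotations from the example above:

    kubectl annotate ingress social-media-ingress \
      nginx.ingress.kubernetes.io/canary-weight="25" --overwrite   # route roughly 25 percent of traffic to the canary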

    Cons of Canary and Blue/Green vs. Rolling Updates

    Both canary and blue/green deployments require more complex configurations compared to rolling updates. For instance, with canary deployments, you might need to integrate with an ingress controller or a service mesh to manage weighted routing. Similarly, blue/green deployments often require additional operational steps to switch traffic between environments.

    Regarding downtime, rolling updates minimize it by keeping a set number of pods available throughout the update. Blue/green deployments can achieve zero downtime at the moment of the switch, but users may still hit problems if the "green" environment has not been adequately tested. Canary deployments expose only a portion of users to a faulty release, though the experience for that subset can still suffer.

    Best Practices for Working with Kubernetes Deployments

    Deploying applications in Kubernetes can seem daunting at first, but following best practices can streamline the process and mitigate potential challenges.

    Leverage Declarative Configuration

    Always define your application's desired state using declarative configuration. By doing so, you allow Kubernetes to take care of the heavy lifting, ensuring your application meets the defined criteria without you micromanaging the process.

    For example, instead of manually scaling the number of replicas, define the desired replicas in your deployment configuration:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-application
    spec:
      replicas: 5
      ...
    

    Implement Health Checks

    A crucial aspect of maintaining application availability is detecting and resolving issues quickly. Health checks let Kubernetes know the status of your pods and take necessary actions if they're unresponsive.

    For example, liveness and readiness probes can be used to check the health of your pods:

    apiVersion: v1
    kind: Pod
    metadata:
      name: my-app
    spec:
      containers:
      - name: my-app-container
        image: my-app:v1
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
        readinessProbe:
          httpGet:
            path: /readiness
            port: 8080
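
    If a liveness probe fails, the kubelet restarts the container; if a readiness probe fails, the pod is temporarily removed from Service endpoints until the probe passes again. Probe failures show up in the pod's events, which you can inspect with:

    kubectl describe pod my-app   # the Events section lists liveness and readiness probe failures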
    

    Utilize Labels and Annotations

    Labels and annotations are key-value pairs that you can attach to Kubernetes objects. While labels help to identify and select objects, annotations store additional metadata.

    For example, you can use labels to group related pods:

    apiVersion: v1
    kind: Pod
    metadata:
      name: backend-pod
      labels:
        app: my-app
        tier: backend
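
    Because selectors operate on labels, you can slice workloads by label from the command line; for example:

    kubectl get pods -l app=my-app,tier=backend    # select only the backend pods of my-app
    kubectl get pods -l app=my-app --show-labels   # list every pod of the app along with its labels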
    

    Manage Resource Usage

    Define resource requests and limits for your containers to ensure efficient utilization and to prevent any one container from consuming all available resources.

    The following example sets CPU and memory requests and limits for a container:

    apiVersion: v1
    kind: Pod
    metadata:
      name: resource-pod
    spec:
      containers:
      - name: my-container
        image: my-image:v1
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"
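
    Here, 250m means a quarter of a CPU core and 64Mi is 64 mebibytes; requests guide scheduling decisions, while limits cap what the container may consume. If the metrics-server add-on is installed in your cluster, you can compare actual usage against these values:

    kubectl top pod resource-pod   # show current CPU and memory usage (requires metrics-server)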
    

    Build a Rollback Plan

    Mistakes happen. It's important to have a plan to quickly revert a deployment if something goes wrong. Kubernetes facilitates easy rollbacks to a previous deployment version.

    You can roll back a problematic deployment with the following:

    kubectl rollout undo deployment/my-deployment
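
    Kubernetes records a revision history for each deployment, so you can also inspect past revisions and roll back to a specific one:

    kubectl rollout history deployment/my-deployment                # list recorded revisions
    kubectl rollout undo deployment/my-deployment --to-revision=2   # roll back to a chosen revision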
    

    Understanding Scaling Kubernetes Deployments

    In the world of applications, demands are often dynamic. Sometimes, you may encounter a surge in users or requests, and at other times, the demands may recede. Kubernetes offers robust scaling mechanisms to address these variable demands without compromising performance or resource efficiency.

    Horizontal vs. Vertical Scaling

    Horizontal scaling refers to adding or reducing the number of pods in a deployment. It's like adding more workers to handle an increased workload. In Kubernetes, you achieve horizontal scaling through the replicas field in the deployment configuration.

    You'd use the following code if you wanted to scale out your application to five replicas:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app-deployment
    spec:
      replicas: 5
      ...
    

    You can also use a kubectl command:

    kubectl scale deployment my-app-deployment --replicas=5
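
    Replica counts can also be adjusted automatically. The following is a minimal sketch of a HorizontalPodAutoscaler that targets the deployment above and scales on CPU utilization; it assumes the metrics-server add-on is available in your cluster:

    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: my-app-hpa
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: my-app-deployment
      minReplicas: 3
      maxReplicas: 10
      metrics:
      - type: Resource
        resource:
          name: cpu
          target:
            type: Utilization
            averageUtilization: 70   # add replicas when average CPU utilization stays above 70 percent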
    

    Vertical scaling involves increasing or decreasing the resources (CPU, memory) of an existing pod. It's akin to giving your single worker more tools or capabilities to handle a job more efficiently. While Kubernetes supports vertical scaling, it's more disruptive than horizontal scaling because it requires restarting the pod.

    The following increases the CPU and memory for a pod:

    apiVersion: v1
    kind: Pod
    metadata:
      name: my-vertically-scaled-pod
    spec:
      containers:
      - name: my-container
        image: my-image:v1
        resources:
          requests:
            memory: "256Mi"
            cpu: "1"
          limits:
            memory: "512Mi"
            cpu: "2"
    

    Deployments in Scaled Environments

    When a deployment is horizontally scaled, Kubernetes ensures that the specified number of pod replicas are always running. If a pod fails, Kubernetes automatically creates a new one to maintain the desired count. This mechanism enhances fault tolerance and accommodates growing user traffic.

    Vertical scaling might be more suitable for resource-intensive applications (such as databases). However, monitoring these applications closely is essential to ensure they don't max out their allocated resources, leading to potential failures or degraded performance.

    Pros and Cons of Multicluster and Multicloud Deployments

    In the push for more resilient and adaptable infrastructures, businesses are increasingly looking at multicluster and multicloud solutions. Both approaches have their merits and drawbacks. Let's look into them.

    Multicluster Deployments

    In a multicluster deployment, you distribute your Kubernetes workloads across more than one cluster. This approach enhances isolation and allows for the geographical distribution of services, thus reducing latency. However, it does introduce challenges, such as increased complexity and potential configuration drift.

    Benefits of Multicluster Deployments

    Some of the benefits of multicluster deployments include:

  • Isolation: If one cluster fails, it doesn't impact the functioning of other clusters. This is beneficial for mission-critical applications.
  • Geographical distribution: You can place clusters nearer to your users, reducing latency and improving response.
  • Environment segregation: Separate clusters for development, staging, and production ensure environment integrity.

    Challenges of Multicluster Deployments

    Some of the challenges of multicluster deployments include:

  • Increased complexity: More clusters mean more things for you to oversee and maintain.
  • Cost: You might find costs climbing with more infrastructure to pay for.
  • Configuration drift: You'll need to ensure settings are synchronized across clusters, which can be demanding.

    Multicloud Deployments

    Multicloud deployments involve distributing your Kubernetes workloads across multiple cloud providers. This approach offers the advantage of avoiding vendor lock-in and leveraging unique services from different providers. However, this flexibility comes at the cost of increased complexity and potential data transfer costs.

    Benefits of Multicloud Deployments

    Here are some of the benefits of multicloud deployments:

  • No vendor lock-in: Distributing workloads across multiple cloud providers gives more flexibility and reduces dependency on a single vendor.
  • Optimized service utilization: You can cherry-pick specialized services from different providers, leveraging the strengths of each platform to get the best fit for your tasks.
  • Redundancy: Even if one cloud provider faces an outage, your application can remain accessible via another provider.

    Challenges of Multicloud Deployments

    Here are some of the challenges of multicloud deployments:

  • Complexity: Managing services across different cloud platforms requires expertise in each platform's nuances.
  • Data transfer costs: Moving data between cloud providers can incur significant costs.
  • Security concerns: Ensuring consistent security policies across cloud providers can be challenging.

    Strategies for Effective Deployments

    To navigate the complexities and fully leverage the benefits of multicluster and multicloud deployments, consider adopting the following strategies:

  • Unified management: Use tools like Rancher or Google Anthos to manage multicluster and multicloud deployments from a unified interface.
  • Consistent configuration: Adopting tools like Terraform can help you ensure consistent infrastructure provisioning across clusters and clouds.
  • Monitoring and observability: Implement comprehensive monitoring to catch discrepancies and issues. Tools like Prometheus and Grafana can be integrated across multiple clusters and clouds.
  • Network policies: Ensure that communication across clusters and clouds is secure. Implementing network policies and solutions like Istio can help.

    GitOps and Kubernetes Deployments

    In the modern DevOps landscape, there's an increasing shift towards GitOps—a methodology where Git repositories serve as the singular source of truth for defining the desired state of applications and infrastructure.

    GitOps leverages Git's versioning and collaboration capabilities to drive infrastructure and application deployments, creating a unified source of truth for both code and infrastructure configurations. All changes are made through Git commits, with automated systems ensuring that the actual resource state matches the desired state in the Git repository. This approach treats infrastructure as code, enabling automation, code reviews, versioning, and CI/CD practices for operations workflows. GitOps fosters collaboration between development and operations teams through pull requests, leading to quicker feedback and compliance with industry regulations.

    The elegance of GitOps lies in its simplicity, as it replaces specialized tools with Git as a universal tool, reducing the learning curve and facilitating onboarding for team members familiar with Git workflows. It transforms operations into a code-based, collaborative process, streamlining infrastructure management and deployment practices while enhancing transparency and auditability.

    How Kubernetes Facilitates GitOps

    With its declarative approach to configurations, Kubernetes works seamlessly with GitOps principles. Here's how:

  • Declarative manifests: Kubernetes manifests, written in YAML or JSON, clearly define the desired state of resources. These manifests can be stored in Git repositories.
  • Watchers: There are tools like Argo CD and Flux that watch for changes in your Git repository. When they detect a change, they automatically update the Kubernetes cluster to match the desired state.
  • Rollbacks: If an update fails or causes issues, you can quickly revert the changes in Git, and the watchers will roll back your cluster to its previous stable state.

    Suppose you have a deployment in Kubernetes, and its manifest is stored in a Git repository. If you want to update the image version, you simply modify the manifest in your Git repository. Once the change is merged, Argo CD or Flux detects this change and updates the deployment in your Kubernetes cluster:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-application        # abbreviated manifest; labels and selector omitted
    spec:
      template:
        spec:
          containers:
          - name: app-container
            image: my-application:v2  # Update the image version here
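
    The watcher itself is usually configured declaratively as well. The following is a minimal sketch of an Argo CD Application that keeps a Git path in sync with the cluster; the repository URL and path are placeholders:

    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: my-application
      namespace: argocd
    spec:
      project: default
      source:
        repoURL: https://github.com/example/app-manifests.git   # placeholder repository
        targetRevision: main
        path: deploy                                            # placeholder path to the manifests
      destination:
        server: https://kubernetes.default.svc
        namespace: default
      syncPolicy:
        automated:
          prune: true      # remove resources that were deleted from Git
          selfHeal: true   # revert manual changes made outside Git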
    

    Benefits of Combining Kubernetes with GitOps

    Combining Kubernetes with GitOps brings several benefits, including:

  • Consistency: With configurations versioned in Git, you have a consistent record of all changes and can easily track or revert them.
  • Collaboration: Developers and operators can collaborate using Git's standard tools, such as pull requests, to review and discuss changes.
  • Security: Changes to infrastructure or applications pass through Git's access controls, ensuring unauthorized changes are flagged or prevented.
  • Automated sync: Tools like Argo CD ensure that your cluster's state always aligns with the state defined in your Git repository, automating synchronization and reducing manual interventions.

    Common Challenges with Kubernetes Deployments

    Even with its multitude of features and capabilities, Kubernetes is not without its challenges. This section highlights some of the typical issues DevOps engineers face and provides strategies to navigate these challenges effectively.

    Complexity

    Kubernetes comes with a steep learning curve, with many components and concepts to understand. This can be daunting, especially for newcomers.

    Mitigation:

  • Training and workshops: Invest in comprehensive training sessions for your team.
  • Documentation: Ensure that your Kubernetes implementations are well-documented.
  • Use managed services: Platforms like Google Kubernetes Engine (GKE), Azure Kubernetes Service (AKS), and Amazon Elastic Kubernetes Service (Amazon EKS) manage some of the complexities for you.

    Networking Issues

    Networking in Kubernetes, encompassing services, ingresses, and network policies, can sometimes be intricate due to a variety of factors such as complex service-to-service communication, the need for controlled access to services, and the challenges of traffic routing and load balancing. These complexities often require a nuanced understanding of Kubernetes' networking model and may involve additional tools or custom configurations to achieve desired behaviors.

    Mitigation:

  • Standardize networking solutions: Adopt standardized networking solutions like Calico or Cilium.
  • Monitoring and logging: Implement robust logging and monitoring to proactively identify and address networking issues.

    Security Concerns

    Security misconfigurations or overlooking best practices can leave your deployments vulnerable.

    Mitigation:

  • RBAC: Implement role-based access control (RBAC) to ensure only authorized users can perform specific operations.
  • Security scanning: Regularly scan your container images for vulnerabilities.
  • Network policies: Define network policies that determine which pods can communicate with each other.
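
    For example, the following network policy sketch allows the backend pods from the earlier labels example to accept traffic only from pods labeled app: my-app and tier: frontend; the frontend label is a placeholder, and enforcement requires a CNI plugin that supports network policies, such as Calico or Cilium:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: backend-allow-frontend
    spec:
      podSelector:
        matchLabels:
          app: my-app
          tier: backend
      policyTypes:
      - Ingress
      ingress:
      - from:
        - podSelector:
            matchLabels:
              app: my-app
              tier: frontend   # placeholder label for the pods allowed to connect
        ports:
        - protocol: TCP
          port: 8080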

    Storage Management

    Persistent storage in Kubernetes can be tricky, especially when dealing with stateful applications.

    Mitigation:

  • Stateless architecture: Whenever possible, design your applications to be stateless.
  • Use Persistent Volume Claims (PVCs): For stateful applications, use PVCs in conjunction with storage classes to ensure data persistence.
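
    The following is a minimal PVC sketch; the storage class name is a placeholder and depends on what your cluster provides:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: app-data
    spec:
      accessModes:
      - ReadWriteOnce
      storageClassName: standard   # placeholder; use a storage class available in your cluster
      resources:
        requests:
          storage: 10Gi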

    Configuration Drift

    When managing multiple environments, there's a risk of configurations drifting apart, leading to inconsistencies.

    Mitigation:

  • GitOps: As discussed earlier, adopting a GitOps approach ensures your configurations are version-controlled and consistent.
  • Configuration management tools: Tools like Helm can help manage and version your Kubernetes configurations.

    Conclusion

    In this guide, you learned about Kubernetes deployment fundamentals that enable control over application scaling and stability. You explored deployment strategies like recreate, rolling update, blue/green, and canary, each with its own use cases and best practices, including declarative configurations and health checks. You also tackled scaling challenges, both vertical and horizontal, along with insights into multicluster and multicloud deployments. Additionally, GitOps integration for efficient and transparent deployments was covered.

    Staying updated and engaged with Kubernetes' evolving landscape and community is crucial for mastering its deployments and ensuring success in container orchestration.
