Kubernetes v1.31: What’s New with Elli?

Saiyam Pathak
Hrittik Roy
7 Minute Read

Kubernetes 1.31 has been released, bringing with it significant contributions. This large project continually adds new features during each release cycle, with new alpha additions reflecting the ongoing innovation and optimization within Kubernetes.

Kubernetes v1.31: Elli (Source: k8s/website)

The 1.31 release team has introduced “Elli,” a cute and joyful dog, as the theme for this release. This post covers the things you should know about it and how you can run it on your machines!

The main highlight of 1.31 is Kubernetes’s transition to being cloud-neutral through the Kubernetes externalization process [KEP-2395]. This process removes all cloud provider-specific code from the k8s.io/kubernetes repository, eliminating in-tree integrations with cloud providers and moving them to their separate provider repos.

The Goal? To make Kubernetes more cloud-agnostic and lightweight. With that, let’s dive into other changes:

Features: Beta -> Stable

The Beta to Stable graduations cover features that are now generally available in version 1.31. These features have been thoroughly tested and are ready for production use cases. Here are a few you should consider using:

AppArmor Support

AppArmor support is now GA. AppArmor is a Linux kernel security module that helps you reduce your attack surface by restricting what a program can do via profiles. These profiles can be attached to your applications or containers through the securityContext in your configuration, like below:

securityContext:
  appArmorProfile:
    type: <profile_type>
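
For instance, here is a minimal sketch of a Pod that runs under the container runtime's default AppArmor profile (the pod and container names are hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: apparmor-demo            # hypothetical name
spec:
  securityContext:
    appArmorProfile:
      type: RuntimeDefault       # use the container runtime's default AppArmor profile
  containers:
  - name: app
    image: busybox:1.36
    command: ["sh", "-c", "sleep 3600"]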

Read more in the official blog and the KEP-24 proposal.

Persistent Volume Last Phase Transition Time

The status field lastPhaseTransitionTime for PersistentVolumes is generally available as of version 1.31, after being introduced as an alpha feature in Kubernetes v1.28. This field contains a timestamp indicating the volume’s most recent phase transition (Pending, Bound, or Released).

This feature assists in monitoring and troubleshooting issues more effectively, as you can now sort PVs by this field, as shown below.
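
For example, you can list PVs ordered by when they last changed phase (a simple check, no extra setup assumed):

kubectl get pv --sort-by=.status.lastPhaseTransitionTime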

Read more here in the blog and KEP-3762.

Improved Ingress Connectivity Reliability for Kube-proxy

With KEP-3836 implemented, ingress connectivity to Services will be more resilient. Kube-proxy now handles ingress connectivity for endpoints on terminating nodes and nodes with unhealthy kube-proxies, focusing on services with externalTrafficPolicy (eTP) set to Cluster and reducing the chance of dropped traffic during node termination.

This behavior is enabled by default when you upgrade your cluster. You can read more about it here.

Retriable and Non-Retriable Pod Failures for Jobs

With version 1.31 and KEP-3329, you can control retries based on exit codes and/or pod deletion reasons within a Job. The .spec.podFailurePolicy field enables your cluster to handle Pod failures according to container exit codes and Pod conditions.

apiVersion: batch/v1
kind: Job
metadata:
  name: job-pod-failure-policy-ignore
spec:
  completions: 4
  parallelism: 2
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: docker.io/library/bash:5
        command: ["bash"]
        args:
        - -c
        - echo "Hello world! I'm going to exit with 0 (success)." && sleep 90 && exit 0
  backoffLimit: 0
  podFailurePolicy:
    rules:
    - action: Ignore
      onPodConditions:
      - type: DisruptionTarget

This can be used with a Pod backoff failure policy to better control the handling of container- or Pod-level failures within a Job. Read more here.
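As a complementary sketch, here is a podFailurePolicy rule that fails the whole Job immediately when the (hypothetical) main container exits with code 42, instead of retrying:

  podFailurePolicy:
    rules:
    - action: FailJob
      onExitCodes:
        containerName: main      # optional; restricts the rule to this container
        operator: In
        values: [42]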

Features: Alpha -> Beta

These features have been promoted for broader public testing, so they can be exercised at scale before they either mature to GA or are removed. The four most useful ones are:

nftables Backend for Kube-proxy

The nftables backend for kube-proxy, introduced in KEP-3866, is now in beta. It can provide improved performance and scalability over iptables, which struggles as the number of Services grows in large clusters.

You can switch kube-proxy to nftables mode with the following KubeProxyConfiguration:

apiVersion: kubeproxy.config.k8s.io/v1beta1
kind: KubeProxyConfiguration
mode: "nftables"

Keep in mind that this is still in beta, and some network plugins may not work with it or may behave differently.
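
If you use kind for local testing, here is a hedged sketch of a cluster config that enables nftables mode (assuming a recent kind release that supports it and ships a 1.31 node image):

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  kubeProxyMode: "nftables"

Create the cluster with `kind create cluster --config <file>` and the virtual nodes will run kube-proxy in nftables mode.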

Multiple Service CIDRs

IP exhaustion often occurs in large or long-lived clusters due to the number of Services within them. With KEP-1880, it is possible to dynamically modify Service CIDR ranges. This is achieved through new allocator logic built on two new API objects, ServiceCIDR and IPAddress, which together form an API for managing Kubernetes Service IP ranges.
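
A minimal sketch of adding a secondary Service range with the new API (the object name and CIDR here are hypothetical; the API group and version are those of the 1.31 beta):

apiVersion: networking.k8s.io/v1beta1
kind: ServiceCIDR
metadata:
  name: extra-service-cidr     # hypothetical name
spec:
  cidrs:
  - 10.96.100.0/24             # hypothetical additional range for Service ClusterIPs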

To test it out, you can walk through this Gist.

Traffic Distribution for Services

trafficDistribution, a new Service field now in beta, allows you to specify how you want traffic routed. Currently, only the PreferClose value is available, which, if specified, routes traffic to endpoints that are topologically close to the client (for example, in the same zone). If not specified, there is no preference and traffic is distributed normally.
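
For example, a Service that opts into this behavior (the service and app names are hypothetical):

apiVersion: v1
kind: Service
metadata:
  name: my-service             # hypothetical name
spec:
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
  trafficDistribution: PreferClose   # prefer endpoints close to the client, e.g. same zone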

Learn more in the KEP-4444 proposal.

Reclaim Policy for PersistentVolumes

HonorPVReclaimPolicy is now in beta. It honors the reclaim policy through a deletion protection finalizer, so you can ensure that PersistentVolumes with a Delete reclaim policy are removed only after the backing storage is deleted.
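
You can see the protection by inspecting a bound PV's metadata; for a CSI-provisioned volume you should find finalizers along these lines (a hedged illustration, as the exact finalizers depend on your provisioner):

kubectl get pv <pv-name> -o yaml

metadata:
  finalizers:
  - kubernetes.io/pv-protection
  - external-provisioner.volume.kubernetes.io/finalizer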

Learn more in the KEP-2644 proposal.

Running 1.31 with vCluster 🎉

With so many features, it’s easy to feel overwhelmed. The best way to test Kubernetes 1.31 is to simply create a vCluster. Yes, you heard it right.

You can use your existing cluster and create a vCluster to try out the latest Kubernetes version and its features.

Let’s do it practically with a simple demo! 🥼

I already have an existing Kubernetes 1.29 cluster, and I will create a virtual cluster running Kubernetes 1.31 using the vcluster CLI (install it here!). Let’s create the config file for the 1.31 cluster.


controlPlane:
  distro:
    k8s:
      version: v1.31.0 

Save this as demo.yaml and create the vCluster using the config file:

vcluster create demo2 -f demo.yaml 

If everything goes as planned, you can expect to see the following results:

14:36:04 info Using vCluster driver 'helm' to create your virtual clusters, which means the vCluster CLI is running helm commands directly
14:36:04 info If you prefer to use the vCluster platform API instead, use the flag '--driver platform' or run 'vcluster use driver platform' to change the default
14:36:07 info Creating namespace vcluster-demo2
14:36:09 info Create vcluster demo2...
14:36:09 info execute command: helm upgrade demo2 /var/folders/5w/grc9pwq910s_cglf5wc132yw0000gn/T/vcluster-0.21.0-alpha.4.tgz-869091025 --create-namespace --kubeconfig /var/folders/5w/grc9pwq910s_cglf5wc132yw0000gn/T/2861306854 --namespace vcluster-demo2 --install --repository-config='' --values /var/folders/5w/grc9pwq910s_cglf5wc132yw0000gn/T/2832265945 --values demo.yaml
14:36:22 done Successfully created virtual cluster demo2 in namespace vcluster-demo2
14:36:23 info Waiting for vcluster to come up...
14:36:39 warn vcluster is waiting, because vcluster pod demo2-0 has status: Init:2/3
14:37:02 done vCluster is up and running
14:37:02 info Starting background proxy container...
14:38:02 warn Error exposing local vcluster, will fallback to port-forwarding: test connection: context deadline exceeded retrieve default namespace: client rate limiter Wait returned an error: context deadline exceeded
14:38:02 done Switched active kube context to vcluster_demo2_vcluster-demo2_do-nyc3-vcluster-demo
14:38:02 warn Since you are using port-forwarding to connect, you will need to leave this terminal open
- Use CTRL+C to return to your previous kube context
- Use `kubectl get namespaces` in another terminal to access the vcluster
Forwarding from 127.0.0.1:12877 -> 8443
Forwarding from [::1]:12877 -> 8443

On the host cluster, you should be able to see version 1.31 for the newly created vCluster.

kubectl get pods -n vcluster-demo2
NAME                                           READY   STATUS    RESTARTS   AGE
coredns-ddc98489-klrr8-x-kube-system-x-demo2   1/1     Running   0          2m59s
demo2-0                                        1/1     Running   0          3m35s
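
Since the CLI has already switched your kube context to the new virtual cluster, you can also confirm the version from inside it (a quick check, assuming you are still on the vcluster_demo2 context):

kubectl version

The Server Version reported should be v1.31.0.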

Describe the vcluster pod if you want to learn more about the internals:

kubectl describe pod demo2-0 -n vcluster-demo2

Output:

  Normal   Pulling                 4m                     kubelet                  Pulling image "registry.k8s.io/kube-controller-manager:v1.31.0"
  Normal   Pulled                  3m57s                  kubelet                  Successfully pulled image "registry.k8s.io/kube-controller-manager:v1.31.0" in 3.018s (3.018s including waiting). Image size: 26240868 bytes.
  Normal   Created                 3m57s                  kubelet                  Created container kube-controller-manager
  Normal   Started                 3m57s                  kubelet                  Started container kube-controller-manager
  Normal   Pulling                 3m54s                  kubelet                  Pulling image "registry.k8s.io/kube-apiserver:v1.31.0"
  Normal   Pulled                  3m51s                  kubelet                  Successfully pulled image "registry.k8s.io/kube-apiserver:v1.31.0" in 2.357s (2.357s including waiting). Image size: 28063421 bytes.


With that, you should have a running 1.31 cluster to test all the awesome features in the release without upgrading your entire cluster. That’s one of the powers of vCluster. If you want to learn about the other powers, here’s a quick video!

Have more questions or got stuck? Join our Loft Community Slack to talk to the maintainers and other vCluster users who would love to meet you! 👋
