Modern applications expose a lot of functionality, and routing traffic to all of it can be difficult to track and manage. Ingress controllers help handle this complexity.
Ingress controllers implement ingress, a Kubernetes object that provides routing rules for external access to services in a cluster. They constantly evaluate all the rules defined in a cluster, manage all redirections, and determine where to direct traffic based on the rules defined in the ingress resource.
However, ingress controllers are not part of Kubernetes itself and must be installed separately. Many implementations of ingress controllers exist, but this article focuses on NGINX. Keep reading to learn about the NGINX Ingress Controller and ten useful configuration options you can add to make your application more dynamic.
Key Takeaways
- NGINX Ingress Controller manages routing rules in Kubernetes, handling traffic redirection and SSL configurations to improve app performance.
- You can use several configurations in your Kubernetes Ingress to make your app more dynamic.
- Annotations and ConfigMaps offer flexibility in managing both global and specific Ingress rules, with annotations taking precedence for custom configurations.
Evaluating Configuration Options
Several configuration options are available for Kubernetes Ingress resources via annotations or ConfigMaps. After extensive research, we have compiled ten of the most popular and reliable options. We have also curated a list with other options, which you can view here.
Ingress Controller Configuration Categories
The NGINX Ingress Controller supports additional configuration options that you can use to make your applications more dynamic. There are two ways to apply them:
- Annotations: This lets you create a specific configuration for an ingress rule.
- ConfigMap: This lets you set global configurations for the NGINX Ingress Controller.
Note: Annotations take precedence over a ConfigMap.
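For illustration, a ConfigMap with a couple of global options might look like the sketch below. The name and namespace are assumptions based on a default installation of the community controller; your deployment may use different ones.
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # assumed name from a default installation
  namespace: ingress-nginx
data:
  proxy-connect-timeout: "30"      # global default, overridable per ingress via annotations
  use-forwarded-headers: "true"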
Useful Configurations
Here are some configuration options you can use in your application.
WWW Redirects
There are a variety of scenarios where you may want to redirect from www.domain.com to domain.com or vice versa. To enable this feature, add the following annotation:
nginx.ingress.kubernetes.io/from-to-www-redirect: "true"
The resource would look like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    nginx.ingress.kubernetes.io/from-to-www-redirect: "true"
spec:
  rules:
  - host: domain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service-www-redirect
            port:
              number: 4000
When using a www-redirect, note the following:
- If a new ingress is later created with a host equal to the one above, the annotation will be omitted.
- For HTTPS-to-HTTPS redirects, you must define the SSL certificate in a Secret and add a TLS section to the ingress, as sketched below.
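For example, the TLS section for the ingress above might look like this sketch; the Secret name is a placeholder, and the certificate it holds must be valid for both hostnames:
spec:
  tls:
  - hosts:
    - domain.com
    - www.domain.com
    secretName: domain-com-tls   # placeholder Secret containing a certificate covering both hosts
  rules:
  - host: domain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service-www-redirect
            port:
              number: 4000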
Read more about www-redirects here.
SSL Redirects
SSL redirects are useful for redirecting traffic from HTTP to HTTPS. When TLS is configured in the ingress definition, the NGINX controller redirects (301) HTTP traffic to HTTPS by default. You can disable this behavior by adding the following:
nginx.ingress.kubernetes.io/ssl-redirect: "false"
On the other hand, when SSL is off-loaded elsewhere, you can enforce a redirect to HTTPS even when no TLS certificate is available. This can be done by using the following:
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
You can also preserve the trailing slash in the URI during these redirects with the preserve-trailing-slash annotation. Here's an example:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/preserve-trailing-slash: "true"
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        pathType: Prefix
        backend:
          service:
            name: test
            port:
              number: 80
Timeout Settings
There are also several timeout settings you can configure in the ingress resource using annotations:
- proxy-connect-timeout: defines the timeout for establishing a connection with a proxied server. The default value is 60 seconds, and the timeout typically cannot exceed 75 seconds. Check here for more information.
- proxy-send-timeout: sets a timeout for transmitting a request to the proxied server. The timeout is set only between two successive write operations, not for the transmission of the whole request. According to the NGINX ingress documentation, if the proxied server does not receive anything within this time, the connection is closed.
For example, you can add timeout settings to your ingress resource, like the following:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cafe-ingress-with-annotations
  annotations:
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "30"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "20"
spec:
  rules:
  - host: cafe.example.com
    http:
      paths:
      - path: /tea
        pathType: Prefix
        backend:
          service:
            name: tea-svc
            port:
              number: 80
      - path: /coffee
        pathType: Prefix
        backend:
          service:
            name: coffee-svc
            port:
              number: 80
Read more about the various timeout settings available here.
CORS
You can also enable cross-origin resource sharing (CORS) in an ingress rule. This allows you to control the methods, headers, and origins that are allowed to make requests to your cluster. Several related options can be set once CORS is enabled on the ingress resource, such as the allowed origins and the exposed headers. To activate CORS, add the enable-cors annotation to the ingress. Here's an example:
nginx.ingress.kubernetes.io/enable-cors: "true"
There are other annotations you can use to control the CORS behavior:
nginx.ingress.kubernetes.io/cors-allow-methods # controls the accepted methods
nginx.ingress.kubernetes.io/cors-allow-headers # controls the allowed headers
Here’s an example:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-methods: "PUT, GET, POST, OPTIONS"
    nginx.ingress.kubernetes.io/cors-allow-headers: "X-Forwarded-For, X-app123-XPTO"
    nginx.ingress.kubernetes.io/cors-expose-headers: "*, X-CustomResponseHeader"
    nginx.ingress.kubernetes.io/cors-max-age: "600"
    nginx.ingress.kubernetes.io/cors-allow-credentials: "false"
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        pathType: Prefix
        backend:
          service:
            name: test
            port:
              number: 80
You can read more about how to control the CORS functionality here.
Rate Limiting
Rate-limiting annotations are useful for defining limits on connections and transmission rates. They can be very helpful for mitigating DDoS attacks.
- nginx.ingress.kubernetes.io/limit-connections: defines the number of concurrent connections allowed from a single IP address.
- nginx.ingress.kubernetes.io/limit-rps: "rps" stands for requests per second; it defines the number of requests accepted from a given IP each second.
Here’s an example:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/limit-rps: "5"
    nginx.ingress.kubernetes.io/limit-rpm: "300"
    nginx.ingress.kubernetes.io/limit-connections: "10"
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        pathType: Prefix
        backend:
          service:
            name: test
            port:
              number: 80
Note: when you specify both annotations in a single ingress resource, limit-rps takes precedence. Read more about rate-limiting ingress resources here.
Custom Max Body Size
This rule can be used to set the maximum size of the body in a request. If the body exceeds the maximum size set, NGINX will return a 413 error to the client.
The body size can be configured by using the following:
nginx.ingress.kubernetes.io/proxy-body-size: 8m
Here’s an example:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: 8m
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        pathType: Prefix
        backend:
          service:
            name: test
            port:
              number: 80
Whitelist Source Range
You can configure a Kubernetes Ingress resource to allow requests only from specific IP addresses or CIDR ranges. This feature can prevent unknown or unauthorized requests from reaching your cluster.
To specify a whitelist source range, use the annotation below:
nginx.ingress.kubernetes.io/whitelist-source-range
Here’s an example:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/whitelist-source-range: "10.0.0.0/24,172.10.0.1"
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        pathType: Prefix
        backend:
          service:
            name: test
            port:
              number: 80
Note: you can run into an issue where a whitelisted IP still can't access the resource. In that case, you may need to set externalTrafficPolicy to Local in the service definition that exposes the ingress controller, so the client source IP is preserved, as in the sketch below. Review this answer on Stack Overflow for more information.
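For reference, a minimal sketch of what that might look like on the controller's Service follows. The name, namespace, selector, and ports are assumptions from a default installation and depend on how your controller was deployed:
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller   # assumed name from a default installation
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local     # preserves the client source IP so whitelisting can work
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443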
Default Backend
The default backend is used for handling an unknown request or a request that is not mapped to any path or host in the ingress resource.
If you request a path that is not mapped, the controller returns an HTTP 404 (page not found) error by default. To change this behavior, create a service that serves your custom 404 pages and error messages and reference it as the default backend.
To configure this rule, add an annotation:
nginx.ingress.kubernetes.io/default-backend: <svc name>
Here’s an example:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/default-backend: <svc name>
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        pathType: Prefix
        backend:
          service:
            name: test
            port:
              number: 80
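The annotation references a Service in the same namespace as the ingress. A minimal sketch of such a backend follows; the name, selector, and ports are placeholders for whatever serves your custom error pages:
apiVersion: v1
kind: Service
metadata:
  name: custom-error-pages        # hypothetical name; use it in the default-backend annotation
spec:
  selector:
    app: custom-error-pages       # assumed label on the pods serving the error pages
  ports:
  - port: 80
    targetPort: 8080              # assumed container port of the error-page server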
Enable Access Log
Access logs allow you to view information about client requests. NGINX writes the logs to a file once a request has been processed. They are enabled by default in NGINX, but you may want to disable them for a given ingress. To do this, use this annotation:
nginx.ingress.kubernetes.io/enable-access-log: "false"
Here’s an example:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/enable-access-log: "false"
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        pathType: Prefix
        backend:
          service:
            name: test
            port:
              number: 80
Backend Protocol
You can use the backend protocol to specify how NGINX should communicate with the backend service. Valid values include HTTP, HTTPS, GRPC, GRPCS, AJP, and FCGI. By default, NGINX uses HTTP.
Here’s an example:
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
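Applied to an ingress resource, this might look like the following sketch; the service name and port are placeholders, and the backend service is assumed to actually serve HTTPS on that port:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        pathType: Prefix
        backend:
          service:
            name: test            # placeholder service expected to speak HTTPS
            port:
              number: 443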
Final Words
The NGINX ingress controller makes it easy to configure routing rules and build a more dynamic application for handling client requests and responses. The number of configurations and settings that can be applied to an ingress resource makes NGINX a great choice of ingress controller.
In this article, you have learned what an ingress is, what the role of an ingress controller is, and how you can configure your ingress rules to be more dynamic. Learn more about configuring ingress resources here.
Next Step: vCluster on EKS with NGINX Ingress
Frequently Asked Questions
What are the differences between NGINX Ingress and other Kubernetes Ingress controllers?
There aren't many differences between them; your app's demands and goals determine which Kubernetes ingress controller you should use.
For example, because NGINX is the most popular web server, more engineers have experience with it. The controller is also maintained by the Kubernetes project and is well documented and supported. It is robust, scalable, and has a comprehensive feature set.
However, its extensive capabilities can make it heavier than necessary for simpler use cases. In these situations, options like Traefik or HAProxy can be a better fit.
What are the best practices for using NGINX Ingress?
- Rate limiting.
- Using the controller to distribute non-HTTP requests.
- Auto-scaling the controller with a Horizontal Pod Autoscaler (HPA); see the sketch after this list.
- Setting resource limits and the correct number of controller pods.
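As a rough sketch of the auto-scaling point, an HPA targeting the controller Deployment might look like the following. The Deployment name and namespace are assumptions from a default installation, and the thresholds are arbitrary examples:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: ingress-nginx-controller   # assumed Deployment name
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70       # example threshold; tune for your workload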
What are the benefits of using NGINX Ingress?
- Faster app performance. NGINX's traffic routing increases the overall app speed, boosting the user experience.
- It is a cost-effective, robust load balancer.
- It offers advanced scalability, allowing apps to handle many simultaneous requests.
- Implements upgrades without downtime.
What are the security considerations for NGINX Ingress?
NGINX's design makes it resilient to attacks in several ways. For example, the controller runs as a non-root user by default, which limits what a compromised process can change on disk. But you can always improve the resilience of the controller.
One way to do this is to make the container's filesystem read-only. This reduces the attack surface by preventing changes to libraries and binaries. You can make this change through the Helm chart used to install the controller, as in the sketch below.
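As a hedged sketch, this is the kind of securityContext you ultimately want on the controller container; the exact Helm values keys that produce it vary by chart version, so treat this as the target configuration rather than a definitive chart snippet:
# Desired container securityContext for the controller pod
securityContext:
  readOnlyRootFilesystem: true      # blocks writes to libraries and binaries in the image
  runAsNonRoot: true
  allowPrivilegeEscalation: false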
What caching options are available for NGINX Ingress?
- Basic Caching
- Cache Splitting
- Advanced Caching