How to Set Up an Ingress Controller
In today's cloud-native and microservices-driven architectures, managing external access to services within a Kubernetes cluster is both critical and complex. This is where an Ingress Controller comes into play. An Ingress Controller acts as a gateway that routes incoming HTTP and HTTPS traffic to the appropriate services inside your Kubernetes cluster based on rules you define. Unlike a simple Service of type LoadBalancer, which exposes a single service externally, an Ingress Controller enables you to manage multiple services under a single IP address using hostnames and path-based routing, making it indispensable for modern application deployments.
Setting up an Ingress Controller correctly ensures your applications are accessible, secure, scalable, and performant. Whether you're deploying a web application, API gateway, or multi-tenant SaaS platform, mastering Ingress Controller configuration is a foundational skill for DevOps engineers, site reliability engineers (SREs), and Kubernetes administrators. This guide provides a comprehensive, step-by-step walkthrough to deploy, configure, and optimize an Ingress Controller in production-grade environments.
Step-by-Step Guide
Prerequisites
Before you begin setting up an Ingress Controller, ensure your environment meets the following requirements:
- A running Kubernetes cluster (v1.19 or later recommended)
- kubectl installed and configured to communicate with your cluster
- Cluster administrator or sufficient RBAC permissions to create Ingress resources and deploy controllers
- A domain name (optional but recommended for production use)
- Access to a DNS provider to manage DNS records
For cloud-based clusters (e.g., EKS, GKE, AKS), ensure the underlying infrastructure supports external load balancers. For on-premises clusters, you may need to configure MetalLB or a similar solution to provide an external IP.
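If you do need MetalLB, a minimal Layer 2 setup can be sketched as follows. This assumes MetalLB v0.13+ (CRD-based configuration) is already installed; the address range is an example placeholder, so substitute a free range on your network:

```shell
# Hedged sketch: IPAddressPool defines which IPs MetalLB may hand out to
# LoadBalancer services; L2Advertisement announces them on the local network.
cat <<EOF | kubectl apply -f -
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.1.240-192.168.1.250   # example range; adjust to your LAN
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - default-pool
EOF
```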
Step 1: Choose an Ingress Controller
There are multiple Ingress Controller implementations available, each with distinct features, performance characteristics, and integration capabilities. The most widely used include:
- NGINX Ingress Controller: Open-source, highly configurable, and widely adopted. Uses NGINX as the reverse proxy.
- Contour: Built on Envoy, designed for Kubernetes-native workflows and dynamic configuration.
- HAProxy Ingress Controller: High-performance, enterprise-grade, ideal for high-traffic applications.
- Traefik: Modern, auto-discovering, and developer-friendly, with a built-in dashboard and Let's Encrypt support.
- AWS ALB Ingress Controller: Specifically designed for Amazon EKS; integrates natively with Application Load Balancers.
- Google Cloud Ingress: Native integration with Google Cloud Load Balancing for GKE clusters.
For this guide, we will use the NGINX Ingress Controller due to its broad compatibility, extensive documentation, and community support. However, the principles outlined here apply to most controllers with minor syntax differences.
Step 2: Deploy the NGINX Ingress Controller
The NGINX Ingress Controller can be installed via Helm or YAML manifests. We recommend using Helm for easier upgrades and configuration management, but we'll show both methods.
Option A: Install Using Helm
First, add the ingress-nginx Helm repository. Note that this guide uses the Kubernetes community ingress-nginx controller (whose annotations and manifests appear throughout this article), not F5's separate nginx-ingress chart:

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update

Then install the controller:

helm install my-nginx-ingress ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --create-namespace \
  --set controller.service.type=LoadBalancer
This command:
- Creates a namespace called ingress-nginx
- Deploys the controller with a LoadBalancer service type (ideal for cloud providers)
- Names the release my-nginx-ingress
Option B: Install Using YAML Manifests
If Helm is not available, use the official manifest:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.10.1/deploy/static/provider/cloud/deploy.yaml
This deploys the controller at a pinned version (v1.10.1 as of writing). Check the project's release notes to verify compatibility with your Kubernetes cluster version.
After installation, monitor the rollout:
kubectl get pods -n ingress-nginx
kubectl get services -n ingress-nginx
You should see the ingress-nginx-controller service with an external IP assigned (in cloud environments). If you're on-premises and using MetalLB, ensure it's configured to assign IPs to LoadBalancer services.
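To capture the controller's external address for use in later steps, you can query the service directly. This is a sketch: the service name below matches the YAML-manifest install, so adjust it if your Helm release names the service differently:

```shell
# Cloud LoadBalancers usually populate .ip; some providers (e.g. AWS ELB)
# populate .hostname instead, so check both fields if this prints nothing.
EXTERNAL_IP=$(kubectl get svc ingress-nginx-controller -n ingress-nginx \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo "${EXTERNAL_IP:-<pending>}"
```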
Step 3: Verify the Ingress Controller
Once the controller is running, test its functionality. Create a simple test service and Ingress resource.
First, create a deployment for a test application:
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-app
  labels:
    app: test-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: test-app
  template:
    metadata:
      labels:
        app: test-app
    spec:
      containers:
      - name: app
        image: nginx:alpine
        ports:
        - containerPort: 80
EOF
Then expose it via a ClusterIP Service:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: test-app-service
spec:
  selector:
    app: test-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: ClusterIP
EOF
Now create the Ingress resource to route traffic to this service:
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
  - host: test.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: test-app-service
            port:
              number: 80
EOF
Important: The ingressClassName: nginx field ensures the correct controller handles this resource. If you're using an older Kubernetes version (pre-1.18, before the IngressClass resource existed), use the annotation kubernetes.io/ingress.class: nginx instead.
Update your local /etc/hosts file to map test.example.com to the external IP of the Ingress Controller:
YOUR_EXTERNAL_IP test.example.com
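As an alternative to editing /etc/hosts, curl can pin the hostname to the controller's address for a one-off test (replace YOUR_EXTERNAL_IP with the actual address from the previous step):

```shell
# --resolve injects a host:port -> IP mapping for this request only, so the
# Host header matches the Ingress rule without touching DNS or /etc/hosts.
curl --resolve test.example.com:80:YOUR_EXTERNAL_IP http://test.example.com/
```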
Now access http://test.example.com in your browser. You should see the default NGINX welcome page. If not, check logs:
kubectl logs -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx
kubectl get ingress -o wide
Step 4: Configure TLS/SSL with Let's Encrypt
Production applications require HTTPS. We'll use Cert-Manager to automate TLS certificate issuance via Let's Encrypt.
First, install Cert-Manager:
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.14.4/cert-manager.yaml
Wait for all Cert-Manager pods to be ready:
kubectl get pods -n cert-manager
Next, create a ClusterIssuer for Let's Encrypt (production endpoint):
cat <<EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - http01:
        ingress:
          class: nginx
EOF
Now update your Ingress to request a certificate:
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress-secure
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - test.example.com
    secretName: test-tls-secret
  rules:
  - host: test.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: test-app-service
            port:
              number: 80
EOF
Cert-Manager will automatically detect the annotation, request a certificate, and store it in the secret test-tls-secret. Wait a few minutes, then verify:
kubectl get certificate -A
kubectl get secret test-tls-secret -o yaml
Once the certificate is issued, access https://test.example.com; you should now see a secure connection with a valid TLS certificate.
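To inspect the certificate the controller is actually serving, openssl can print its validity window and issuer. This assumes DNS for test.example.com now resolves to the controller's external address:

```shell
# Fetches the certificate presented on port 443 (SNI set via -servername)
# and prints its notBefore/notAfter dates plus the issuing CA.
echo | openssl s_client -connect test.example.com:443 \
  -servername test.example.com 2>/dev/null | openssl x509 -noout -dates -issuer
```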
Step 5: Configure Advanced Routing Rules
Ingress Controllers support sophisticated routing beyond basic path matching. Here are common advanced configurations:
Path-Based Routing
Route different paths to different services:
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 80
      - path: /web
        pathType: Prefix
        backend:
          service:
            name: web-service
            port:
              number: 80
      - path: /
        pathType: Prefix
        backend:
          service:
            name: homepage-service
            port:
              number: 80
Host-Based Routing
Route different domains to different services:
spec:
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 80
  - host: www.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-service
            port:
              number: 80
Header-Based Routing (NGINX Specific)
Use annotations to route based on HTTP headers:
annotations:
  nginx.ingress.kubernetes.io/rewrite-target: /
  nginx.ingress.kubernetes.io/enable-rewrite-log: "true"
  nginx.ingress.kubernetes.io/configuration-snippet: |
    if ($http_x_version = "v2") {
      set $target_service "v2-service";
    }
Then use a custom service name in your backend logic or leverage the nginx.ingress.kubernetes.io/upstream-vhost annotation for header-based routing.
Step 6: Configure Rate Limiting and Security
Protect your applications with built-in security features:
Rate Limiting
annotations:
  nginx.ingress.kubernetes.io/limit-rps: "10"
  nginx.ingress.kubernetes.io/limit-whitelist: "192.168.1.0/24,10.0.0.0/8"
This limits requests to 10 per second per client IP and whitelists trusted networks.
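A quick way to see the limiter in action is to fire a burst of requests and count the status codes; once the per-second budget is exhausted, ingress-nginx rejects excess requests (with 503 by default). The host below is the example host from the earlier steps:

```shell
# Sends 30 rapid requests and tallies the HTTP status codes returned;
# past the configured rate you should start seeing 503s alongside 200s.
for i in $(seq 1 30); do
  curl -s -o /dev/null -w "%{http_code}\n" http://test.example.com/
done | sort | uniq -c
```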
IP Allow/Deny
annotations:
  nginx.ingress.kubernetes.io/whitelist-source-range: "192.168.1.0/24,10.0.0.0/8"
  nginx.ingress.kubernetes.io/denylist-source-range: "192.168.1.100"
Basic Authentication
Create a secret with credentials:
htpasswd -c auth admin
kubectl create secret generic basic-auth --from-file=auth
Apply to Ingress:
annotations:
  nginx.ingress.kubernetes.io/auth-type: basic
  nginx.ingress.kubernetes.io/auth-secret: basic-auth
  nginx.ingress.kubernetes.io/auth-realm: "Authentication Required"
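You can verify the protection with curl: an unauthenticated request should be rejected with 401, while valid credentials succeed (admin and its password are whatever you chose when running htpasswd):

```shell
# Without credentials the controller should answer 401 Unauthorized...
curl -s -o /dev/null -w "%{http_code}\n" http://test.example.com/
# ...and with valid credentials the request reaches the backend.
curl -s -o /dev/null -w "%{http_code}\n" -u admin:yourpassword http://test.example.com/
```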
Step 7: Monitor and Log Ingress Traffic
Enable detailed logging for troubleshooting and auditing. Note that log-format-upstream is a global controller setting: it belongs in the controller's ConfigMap (or Helm values), not in per-Ingress annotations:

controller:
  config:
    log-format-upstream: '{"time": "$time_iso8601", "remote_addr": "$remote_addr", "request_method": "$request_method", "request_uri": "$request_uri", "status": "$status", "body_bytes_sent": "$body_bytes_sent", "http_referer": "$http_referer", "http_user_agent": "$http_user_agent"}'
View logs:
kubectl logs -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx | grep -i "access"
For production environments, integrate with centralized logging systems like Loki, Fluentd, or Elasticsearch.
Best Practices
Use IngressClass for Multi-Controller Environments
If your cluster hosts multiple Ingress Controllers (e.g., NGINX and Traefik), always specify ingressClassName in your Ingress resources. This prevents ambiguity and ensures traffic is routed by the intended controller.
Never Use Default IngressClass Without Validation
Some clusters automatically set a default IngressClass. Verify it's the one you intend to use:
kubectl get ingressclasses
kubectl get ingressclass nginx -o yaml
Set a default only if you're certain:
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx
  annotations:
    ingressclass.kubernetes.io/is-default-class: "true"
spec:
  controller: k8s.io/ingress-nginx
Use Namespaces Strategically
Deploy Ingress Controllers in dedicated namespaces (e.g., ingress-nginx) to isolate permissions and resources. Avoid deploying them in default or application namespaces.
Apply Resource Limits and Requests
Prevent resource starvation by defining CPU and memory limits in the controller deployment:
resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    cpu: 500m
    memory: 256Mi
Enable Health Checks and Readiness Probes
Ensure the controller only receives traffic when ready. Most Helm charts enable this by default, but verify:
livenessProbe:
  httpGet:
    path: /healthz
    port: 10254
  initialDelaySeconds: 10
  timeoutSeconds: 5
readinessProbe:
  httpGet:
    path: /healthz
    port: 10254
  initialDelaySeconds: 10
  timeoutSeconds: 5
Implement Canary Deployments
Use annotations to route a percentage of traffic to a new version:
annotations:
  nginx.ingress.kubernetes.io/canary: "true"
  nginx.ingress.kubernetes.io/canary-weight: "10"
This sends 10% of traffic to the canary service while 90% goes to the stable version.
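The canary annotations go on a second Ingress for the same host, alongside the stable one. A minimal sketch, assuming a hypothetical canary-service already exists for the new version:

```shell
# Hedged sketch: ingress-nginx matches this Ingress against the stable one
# for the same host/path and diverts canary-weight percent of traffic to it.
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-canary
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "10"
spec:
  ingressClassName: nginx
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: canary-service   # hypothetical backend for the new version
            port:
              number: 80
EOF
```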
Regularly Rotate TLS Certificates
Let's Encrypt certificates expire after 90 days. Cert-Manager automates renewal, but monitor issuance events:
kubectl get certificates --all-namespaces
kubectl describe certificate -n your-namespace your-cert-name
Secure the Ingress Controller Itself
Restrict access to the controller's metrics and admin endpoints:
- Disable the NGINX status page in production unless needed
- Use NetworkPolicies to restrict traffic to the controller pod
- Enable mutual TLS (mTLS) for internal communication if required
Use Helm Values for Configuration Over Annotations
While annotations are convenient, they're per-Ingress. For global settings (e.g., timeouts, buffer sizes), use Helm values or the controller ConfigMap:
controller:
  config:
    proxy-read-timeout: "600"
    proxy-send-timeout: "600"
    proxy-body-size: "100m"
    keep-alive: "75"
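To apply such settings, save them to a values file and upgrade the release. The release name my-nginx-ingress matches the one used earlier in this guide, and values.yaml is an arbitrary file name; adjust the chart reference to whichever chart you installed from:

```shell
# The controller.config map is rendered into the controller's ConfigMap,
# so these become global NGINX settings rather than per-Ingress annotations.
helm upgrade my-nginx-ingress ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  -f values.yaml
```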
Tools and Resources
Official Documentation
- Kubernetes Ingress Documentation
- NGINX Ingress Controller Docs
- Cert-Manager Documentation
- Traefik Documentation
Monitoring Tools
- Prometheus + Grafana: Collect NGINX metrics (e.g., request rate, latency, errors) via the /metrics endpoint
- Loki: Log aggregation for Ingress access logs
- Kiali: Service mesh visualization if using Istio alongside Ingress
Validation and Testing Tools
- curl: Test endpoints and headers
- httping: Measure latency and availability
- kubectx: Switch between clusters quickly
- Telepresence: Debug Ingress rules locally
CI/CD Integration
Integrate Ingress deployment into your GitOps workflow using Argo CD or Flux. Define Ingress resources as YAML in your Git repository, and let the operator reconcile them automatically. This ensures version control, auditability, and rollback capability.
Real Examples
Example 1: Multi-Tenant SaaS Platform
A SaaS application serves customers under subdomains: customer1.yourapp.com, customer2.yourapp.com. Each customer has a dedicated backend service.
Ingress configuration:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: saas-ingress
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - customer1.yourapp.com
    - customer2.yourapp.com
    secretName: saas-tls
  rules:
  - host: customer1.yourapp.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: customer1-service
            port:
              number: 80
  - host: customer2.yourapp.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: customer2-service
            port:
              number: 80
Each customer's service is dynamically created via a CI/CD pipeline. DNS records are auto-provisioned using external-dns with a provider like Cloudflare or Route 53.
Example 2: API Gateway with Versioning
An API has two versions: v1 and v2. Traffic is split based on path:
spec:
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /v1
        pathType: Prefix
        backend:
          service:
            name: api-v1-service
            port:
              number: 80
      - path: /v2
        pathType: Prefix
        backend:
          service:
            name: api-v2-service
            port:
              number: 80
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api-docs-service
            port:
              number: 80
Additional annotations enforce rate limiting per API key via custom headers and JWT validation using Open Policy Agent (OPA) or Auth0 integration.
Example 3: Internal vs External Services
Some services are only accessible internally (e.g., monitoring dashboards). Use separate Ingress resources with different ingress classes:
- ingressClassName: nginx for public-facing services
- ingressClassName: internal-nginx for internal services, bound to a ClusterIP or private LoadBalancer
Apply NetworkPolicies to restrict access to internal services so they accept traffic only from the ingress controller's namespace.
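A hedged NetworkPolicy sketch that admits traffic only from pods in the ingress-nginx namespace. The internal-apps namespace is a hypothetical placeholder, and the selector relies on the standard kubernetes.io/metadata.name label that Kubernetes 1.21+ sets on every namespace:

```shell
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-ingress
  namespace: internal-apps        # hypothetical namespace for internal services
spec:
  podSelector: {}                 # applies to every pod in this namespace
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: ingress-nginx
EOF
```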
FAQs
What is the difference between Ingress and Ingress Controller?
Ingress is a Kubernetes resource (YAML manifest) that defines routing rules. The Ingress Controller is the actual software (e.g., NGINX, Traefik) that reads those rules and configures a reverse proxy to implement them.
Do I need an Ingress Controller if I use a LoadBalancer Service?
You don't need an Ingress Controller for a single service. However, if you have multiple services and want to expose them under one IP using hostnames or paths, an Ingress Controller is essential. A LoadBalancer Service exposes only one service per IP.
Can I run multiple Ingress Controllers in the same cluster?
Yes. Use different ingressClassName values and assign each Ingress resource to the correct controller. This is common in multi-team environments where each team uses a preferred controller.
Why is my Ingress not working even though the controller is running?
Common causes:
- Missing or incorrect ingressClassName
- Service not exposing the correct port, or a selector mismatch
- Missing or invalid DNS record
- Firewall or network policy blocking traffic
- Incorrect pathType (e.g., using Exact instead of Prefix)
Check logs, describe the Ingress resource, and validate service endpoints with kubectl get endpoints.
How do I upgrade the Ingress Controller?
If using Helm:
helm repo update
helm upgrade my-nginx-ingress ingress-nginx/ingress-nginx --namespace ingress-nginx
If using YAML:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.10.1/deploy/static/provider/cloud/deploy.yaml
Always test upgrades in a staging environment first.
Is Ingress suitable for TCP/UDP services?
Standard Ingress only handles HTTP/HTTPS. For TCP/UDP, use an Ingress Controller that supports it (e.g., NGINX with tcp-services-configmap, or HAProxy). Define TCP/UDP services in a ConfigMap and reference them in the controller's configuration.
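With ingress-nginx, the mapping lives in a ConfigMap whose keys are external ports and whose values are namespace/service:port targets. A sketch, assuming a hypothetical PostgreSQL service named postgres in the default namespace:

```shell
# Hedged sketch: maps external port 5432 to the postgres service.
# The controller must be started with --tcp-services-configmap pointing at
# this ConfigMap, and its Service needs a matching port entry.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  "5432": "default/postgres:5432"
EOF
```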
How does Ingress compare to Service Mesh (Istio, Linkerd)?
Ingress handles north-south traffic at the cluster edge. Service meshes manage east-west (service-to-service) traffic inside the cluster with advanced features like mTLS, observability, and traffic splitting. Many teams use both: Ingress for external access, a service mesh for internal service-to-service communication.
Conclusion
Setting up an Ingress Controller is a pivotal step in deploying scalable, secure, and maintainable applications on Kubernetes. From initial deployment with NGINX to securing traffic with TLS via Cert-Manager, configuring advanced routing, and implementing best practices for performance and reliability, this guide has provided a complete roadmap for production-grade Ingress management.
Remember: Ingress is not just a routing tool; it is the gateway to your application's availability, security posture, and user experience. Whether you're managing a small internal tool or a global SaaS platform, mastering Ingress Controller configuration empowers you to deliver resilient, high-performance services with confidence.
As cloud-native architectures evolve, the role of the Ingress Controller will only grow in importance. Stay updated with new features in Kubernetes networking, explore integration with service meshes, and continuously refine your routing policies based on real traffic patterns and user behavior. With the right setup and ongoing vigilance, your Ingress Controller will serve as the reliable foundation your applications depend on.