Overview
This guide walks you through deploying an Nginx web server on MicroK8s using Kubernetes Deployment resources and exposing it externally with the MetalLB load balancer. This setup is ideal for bare-metal Kubernetes clusters that don't have access to cloud-provider load balancers.
Prerequisites
Before proceeding, ensure you have:
- MicroK8s installed and running on your Ubuntu system
- MetalLB addon enabled with a configured IP address pool
- kubectl access configured (MicroK8s uses microk8s kubectl by default)
To enable MetalLB on MicroK8s, run:
```bash
microk8s enable metallb:192.168.1.240-192.168.1.250
```
Replace the IP range 192.168.1.240-192.168.1.250 with an unused IP range from your local network that MetalLB can allocate to LoadBalancer services.
Understanding Key Kubernetes Concepts
What is a Deployment?
A Deployment is a Kubernetes resource that provides declarative updates for Pods and ReplicaSets. Key features include:
- Declarative state management: You define the desired state, and Kubernetes ensures it matches
- Rolling updates: Automatically replaces old pods with new ones during updates
- Rollback capability: Easily revert to a previous version if something goes wrong
- Scaling: Adjust the number of replicas based on demand
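As a quick orientation, the sketch below shows how those features map onto standard kubectl commands, using the nginx-deployment that this guide creates in Step 1; the image tag nginx:1.27 is only an example of a newer version to roll out.

```bash
# Rolling update: changing the image creates a new ReplicaSet and replaces pods gradually
microk8s kubectl set image deployment/nginx-deployment nginx=nginx:1.27

# Watch the rollout until the new pods are available
microk8s kubectl rollout status deployment/nginx-deployment

# Rollback: revert to the previous revision if the update misbehaves
microk8s kubectl rollout undo deployment/nginx-deployment

# Scaling: adjust the number of replicas
microk8s kubectl scale deployment/nginx-deployment --replicas=5
```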
What is MetalLB?
MetalLB is a load-balancer implementation for bare-metal Kubernetes clusters. Unlike cloud providers (AWS, GCP, Azure) that offer integrated load balancers, bare-metal clusters need MetalLB to provide this functionality. It operates in two modes:
| Mode | Description | Use Case |
|---|---|---|
| Layer 2 (L2) | Uses ARP/NDP to announce IP addresses | Simple setup, single-node failover |
| BGP | Advertises routes via Border Gateway Protocol | Multi-node load balancing, enterprise networks |
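On MicroK8s the metallb addon generates the Layer 2 configuration for you from the range passed to microk8s enable metallb, so you normally do not write it by hand. For reference, the equivalent upstream MetalLB objects look roughly like this (the resource names below are illustrative, not what the addon creates):

```yaml
# Illustrative sketch of MetalLB's Layer 2 configuration objects
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: example-pool              # illustrative name
  namespace: metallb-system
spec:
  addresses:
  - 192.168.1.240-192.168.1.250   # the same range passed to the addon
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: example-l2                # illustrative name
  namespace: metallb-system
spec:
  ipAddressPools:
  - example-pool                  # announce IPs from the pool above via ARP/NDP
```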
Step 1: Deploy Nginx Container on MicroK8s
Deployment Manifest Breakdown
Create a file named nginx-deployment.yaml with the following content:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
```
Understanding the Deployment Manifest
Let's break down the key fields in the deployment YAML:
| Field | Description |
|---|---|
| apiVersion: apps/v1 | Specifies the Kubernetes API version for Deployments |
| kind: Deployment | Defines this resource as a Deployment controller |
| metadata.name | Unique identifier for this Deployment within the namespace |
| spec.replicas: 3 | Creates 3 identical pod instances for high availability |
| spec.selector.matchLabels | Defines how the Deployment finds which Pods to manage |
| spec.template | Pod template that defines the container specifications |
| containerPort: 80 | The port that Nginx listens on inside the container |
Important: spec.selector.matchLabels must match spec.template.metadata.labels. In Kubernetes apps/v1 the selector is immutable, so it cannot be changed after the Deployment is created.
How Replicas and High Availability Work
When you set replicas: 3, Kubernetes:
- Creates a ReplicaSet that manages the desired number of pods
- Schedules pods across nodes for better fault tolerance
- Self-heals: If a pod crashes or a node fails, Kubernetes automatically creates new pods to maintain the desired count
- Load balances traffic across all healthy pods via the Service
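Once the Deployment is applied (next step), you can watch this self-healing behaviour yourself: delete one of the pods and the ReplicaSet immediately creates a replacement. A small sketch:

```bash
# Delete one nginx pod (the jsonpath expression just picks the first one)
microk8s kubectl delete pod $(microk8s kubectl get pods -l app=nginx -o jsonpath='{.items[0].metadata.name}')

# Watch the ReplicaSet bring the pod count back to 3
microk8s kubectl get pods -l app=nginx -w
```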
Apply deployment
```bash
microk8s kubectl apply -f nginx-deployment.yaml
```
Verify Deployment Status
After applying, verify that the deployment was successful:
```bash
microk8s kubectl get deployments
```
Expected output:
```
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3/3     3            3           30s
```
The columns indicate:
- READY: 3/3 means all 3 desired replicas are running
- UP-TO-DATE: Number of replicas updated to match the desired state
- AVAILABLE: Number of replicas available to serve requests
Check deployment
```bash
microk8s kubectl get pods
```
Expected output:
```
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-6b7f675859-abc12   1/1     Running   0          45s
nginx-deployment-6b7f675859-def34   1/1     Running   0          45s
nginx-deployment-6b7f675859-ghi56   1/1     Running   0          45s
```
View Detailed Pod Information
For troubleshooting or detailed inspection:
```bash
microk8s kubectl describe pod -l app=nginx
```
Watch Pods in Real-Time
Monitor pod status during deployments or scaling:
```bash
microk8s kubectl get pods -l app=nginx -w
```
Step 2: Expose the Deployment Using MetalLB
Understanding Kubernetes Service Types
Before creating the LoadBalancer service, it's important to understand the different service types available:
| Service Type | External Access | IP Assignment | Use Case |
|---|---|---|---|
| ClusterIP | No (internal only) | Internal cluster IP | Inter-service communication |
| NodePort | Yes (node IP + port) | Uses node's IP + high port (30000-32767) | Development, testing |
| LoadBalancer | Yes (dedicated IP) | External IP from MetalLB pool | Production workloads |
| ExternalName | DNS-based | Maps to external DNS name | External service integration |
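If you just want to compare behaviours before committing to MetalLB, the same Deployment can be exposed temporarily as a NodePort; this is a throwaway sketch (the service name nginx-nodeport is arbitrary) and not part of the setup this guide builds:

```bash
# Expose the deployment on a high port (30000-32767) of every node's IP
microk8s kubectl expose deployment nginx-deployment --name=nginx-nodeport --type=NodePort --port=80

# Look up the allocated node port, then browse to http://<node-ip>:<node-port>
microk8s kubectl get service nginx-nodeport

# Remove the test service when done
microk8s kubectl delete service nginx-nodeport
```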
Service Manifest Breakdown
Create a file named nginx-service.yaml with the following content:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: LoadBalancer
```
Understanding the Service Manifest
| Field | Description |
|---|---|
| spec.selector.app: nginx | Routes traffic to pods with label app: nginx |
| spec.ports.port: 80 | The port exposed by the service (external-facing) |
| spec.ports.targetPort: 80 | The port on the pod that receives traffic |
| spec.type: LoadBalancer | Instructs MetalLB to allocate an external IP |
How Traffic Flow Works
Client → MetalLB → Service → Pod(s)
| Step | Component | Address | Action |
|---|---|---|---|
| 1 | Client (Browser) | — | Sends HTTP request |
| 2 | MetalLB External IP | 192.168.1.240 | Receives traffic, responds to ARP |
| 3 | Service (nginx-service) | :80 | Load-balances across pods |
| 4 | Pod(s) (nginx) | :80 | Processes request, returns response |
- Client sends a request to the external IP assigned by MetalLB
- MetalLB (Layer 2 mode) responds to ARP requests for the external IP
- Service receives traffic and load-balances across available pods
- Pods process the request and return the response
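Once the Service below is applied, you can confirm the Layer 2 behaviour from a client machine. The external IP is not bound to a network interface inside the cluster; one MetalLB speaker simply answers ARP for it. A rough check from another Linux host on the same LAN (using the example IP 192.168.1.240):

```bash
# Send one request so the client resolves the external IP via ARP
curl -s http://192.168.1.240 > /dev/null

# The MAC address in the neighbour table belongs to the node whose
# MetalLB speaker currently announces the IP, not to a separate device
ip neigh show 192.168.1.240
```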
Apply the Service:
```bash
microk8s kubectl apply -f nginx-service.yaml
```
Check service and external IP
```bash
microk8s kubectl get service nginx-service
```
Expected output:
```
NAME            TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
nginx-service   LoadBalancer   10.152.183.45   192.168.1.240   80:31234/TCP   15s
```
The EXTERNAL-IP field shows the IP address allocated by MetalLB. This is the address you'll use to access your Nginx server from any device on your network.
If EXTERNAL-IP shows <pending>, MetalLB may not be configured correctly or may have exhausted its IP pool. Check MetalLB logs with microk8s kubectl logs -n metallb-system -l app=metallb.
Verify the Deployment is Accessible
Test the Nginx server using curl:
```bash
curl http://$(microk8s kubectl get svc nginx-service -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
```
Or access it directly via browser at http://192.168.1.240 (replace with your actual external IP).
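The default Nginx page does not say which pod served it, so one rough way to confirm that traffic is spread across the replicas is to send a burst of requests and then compare the access logs of the pods (a sketch; replace 192.168.1.240 with your external IP):

```bash
# Send a handful of requests to the external IP
for i in $(seq 1 10); do curl -s http://192.168.1.240 > /dev/null; done

# Show the last few access-log lines of every replica, prefixed with the pod name
microk8s kubectl logs -l app=nginx --prefix --tail=5
```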
Step 3: Check Cluster Health
Check cluster health
```bash
microk8s inspect
```
Additional Health Checks
Verify all components are functioning correctly:
```bash
microk8s kubectl get all -o wide
```
Check MetalLB speaker pods (responsible for Layer 2 announcements):
```bash
microk8s kubectl get pods -n metallb-system
```
View service endpoints to confirm pods are registered:
```bash
microk8s kubectl get endpoints nginx-service
```
Expected output:
```
NAME            ENDPOINTS                                AGE
nginx-service   10.1.28.5:80,10.1.28.6:80,10.1.28.7:80   2m
```
Advanced Configuration
Scaling the Deployment
Scale up or down based on demand:
```bash
microk8s kubectl scale deployment nginx-deployment --replicas=5
```
Or change the replica count by editing the Deployment directly:
```bash
microk8s kubectl edit deployment nginx-deployment
```
Request a Specific IP from MetalLB
You can request a specific IP address by adding an annotation to your service:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  annotations:
    metallb.universe.tf/loadBalancerIPs: "192.168.1.245"
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: LoadBalancer
```
Configure Resource Limits (Recommended for Production)
Add resource requests and limits so that each pod gets a predictable share of CPU and memory and cannot starve other workloads on the node:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: "64Mi"
            cpu: "100m"
          limits:
            memory: "128Mi"
            cpu: "250m"
```
| Resource Field | Description |
|---|---|
| requests.memory | Guaranteed minimum memory allocation |
| requests.cpu | Guaranteed minimum CPU (100m = 0.1 CPU cores) |
| limits.memory | Maximum memory the container can use |
| limits.cpu | Maximum CPU the container can use |
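After applying the updated manifest, it is worth confirming that the values were picked up and checking actual consumption. The second command needs a metrics source; on MicroK8s that is typically the metrics-server addon (a sketch, not required for the rest of the guide):

```bash
# Show the requests and limits recorded on the running pods
microk8s kubectl describe pod -l app=nginx | grep -A 6 -E "Limits|Requests"

# Live CPU/memory usage per pod (requires: microk8s enable metrics-server)
microk8s kubectl top pods -l app=nginx
```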
Add Health Checks (Liveness and Readiness Probes)
Improve reliability with health checks:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
        livenessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 10
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 5
```
| Probe Type | Purpose |
|---|---|
| livenessProbe | Restarts container if probe fails (container is unhealthy) |
| readinessProbe | Removes pod from service endpoints if probe fails (traffic routing) |
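To see the probes working after you apply the manifest above, check the pod events and restart counts; failed liveness probes show up as container restarts, while failed readiness probes only remove the pod from the Service endpoints:

```bash
# Probe configuration and any recent probe failures appear in the Events section
microk8s kubectl describe pod -l app=nginx

# Liveness-probe failures increment the RESTARTS column
microk8s kubectl get pods -l app=nginx
```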
Troubleshooting
Common Issues and Solutions
| Issue | Cause | Solution |
|---|---|---|
| Pods stuck in Pending | Insufficient resources or node issues | Check kubectl describe pod <pod-name> |
| EXTERNAL-IP is <pending> | MetalLB not configured or no IPs available | Verify MetalLB addon is enabled and IP pool is configured |
| Service not accessible | Firewall blocking traffic | Check network policies and host firewall rules |
| Pods in CrashLoopBackOff | Container startup failure | Check logs with kubectl logs <pod-name> |
Useful Debugging Commands
```bash
# View pod logs
microk8s kubectl logs -l app=nginx --all-containers

# Describe service for detailed info
microk8s kubectl describe svc nginx-service

# Check events for recent activities
microk8s kubectl get events --sort-by=.metadata.creationTimestamp

# Debug networking
microk8s kubectl run debug --image=busybox -it --rm -- wget -qO- nginx-service:80
```
Cleanup
Remove all resources when done:
```bash
microk8s kubectl delete -f nginx-service.yaml
microk8s kubectl delete -f nginx-deployment.yaml
```
Or delete by label:
```bash
microk8s kubectl delete all -l app=nginx
```