Install Nginx on MicroK8s with MetalLB using a Kubernetes Deployment

Overview

This guide walks you through deploying an Nginx web server on MicroK8s using a Kubernetes Deployment and exposing it externally with the MetalLB load balancer. This setup is ideal for bare-metal Kubernetes clusters that don't have access to cloud provider load balancers.

Prerequisites

Before proceeding, ensure you have:

  • MicroK8s installed and running on your Ubuntu system
  • MetalLB addon enabled with a configured IP address pool
  • kubectl access configured (MicroK8s uses microk8s kubectl by default)

To enable MetalLB on MicroK8s, run:

bash
microk8s enable metallb:192.168.1.240-192.168.1.250
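To confirm the addon is working before you deploy anything, you can list the MetalLB pods and the address pool it created. A minimal check, assuming a recent MetalLB release that stores the pool in an IPAddressPool resource (older releases used a ConfigMap instead):

bash
# List MetalLB components and the configured address pool
microk8s kubectl get pods -n metallb-system
microk8s kubectl get ipaddresspools -n metallb-system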

Understanding Key Kubernetes Concepts

What is a Deployment?

A Deployment is a Kubernetes resource that provides declarative updates for Pods and ReplicaSets. Key features include:

  • Declarative state management: You define the desired state, and Kubernetes ensures it matches
  • Rolling updates: Automatically replaces old pods with new ones during updates
  • Rollback capability: Easily revert to a previous version if something goes wrong
  • Scaling: Adjust the number of replicas based on demand
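These features map directly onto kubectl subcommands. A quick sketch, assuming the nginx-deployment created later in this guide already exists:

bash
# Watch a rolling update complete
microk8s kubectl rollout status deployment/nginx-deployment
# List previous revisions, then roll back to the last one
microk8s kubectl rollout history deployment/nginx-deployment
microk8s kubectl rollout undo deployment/nginx-deployment
# Change the number of replicas on demand
microk8s kubectl scale deployment/nginx-deployment --replicas=5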

What is MetalLB?

MetalLB is a load-balancer implementation for bare-metal Kubernetes clusters. Unlike cloud providers (AWS, GCP, Azure) that offer integrated load balancers, bare-metal clusters need MetalLB to provide this functionality. It operates in two modes:

| Mode | Description | Use Case |
|------|-------------|----------|
| Layer 2 (L2) | Uses ARP/NDP to announce IP addresses | Simple setup, single-node failover |
| BGP | Advertises routes via Border Gateway Protocol | Multi-node load balancing, enterprise networks |
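The MicroK8s addon configures Layer 2 mode by default. On MetalLB releases that manage configuration through CRDs, you can inspect the mode and pool it set up (resource names may differ on older versions):

bash
# Show the Layer 2 advertisement and address pool created by the addon
microk8s kubectl get l2advertisements,ipaddresspools -n metallb-system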

Step 1: Deploy the Nginx Container on MicroK8s

Deployment Manifest Breakdown

Create a file named nginx-deployment.yaml with the following content:

yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80

Understanding the Deployment Manifest

Let's break down the key fields in the deployment YAML:

| Field | Description |
|-------|-------------|
| apiVersion: apps/v1 | Specifies the Kubernetes API version for Deployments |
| kind: Deployment | Defines this resource as a Deployment controller |
| metadata.name | Unique identifier for this deployment within the namespace |
| spec.replicas: 3 | Creates 3 identical pod instances for high availability |
| spec.selector.matchLabels | Defines how the Deployment finds which Pods to manage |
| spec.template | Pod template that defines the container specifications |
| containerPort: 80 | The port that Nginx listens on inside the container |
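If you prefer not to write the manifest by hand, kubectl can generate a very similar Deployment with a client-side dry run; this is a handy way to cross-check field names against the manifest above:

bash
# Print a generated Deployment manifest without creating anything
microk8s kubectl create deployment nginx-deployment --image=nginx:latest --replicas=3 \
  --dry-run=client -o yaml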

How Replicas and High Availability Work

When you set replicas: 3, Kubernetes:

  1. Creates a ReplicaSet that manages the desired number of pods
  2. Schedules pods across nodes for better fault tolerance
  3. Self-heals: If a pod crashes or a node fails, Kubernetes automatically creates new pods to maintain the desired count
  4. Load balances traffic across all healthy pods via the Service
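You can see the self-healing behaviour for yourself once the Deployment is applied (next step). A small sketch that deletes one pod and shows the ReplicaSet replacing it:

bash
# Delete the first nginx pod; a replacement appears almost immediately
POD=$(microk8s kubectl get pods -l app=nginx -o jsonpath='{.items[0].metadata.name}')
microk8s kubectl delete pod "$POD"
microk8s kubectl get pods -l app=nginx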

Apply the Deployment

bash
microk8s kubectl apply -f nginx-deployment.yaml

Verify Deployment Status

After applying, verify that the deployment was successful:

bash
microk8s kubectl get deployments

Expected output:

plaintext
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3/3     3            3           30s

The columns indicate:

  • READY: 3/3 means all 3 desired replicas are running
  • UP-TO-DATE: Number of replicas updated to match the desired state
  • AVAILABLE: Number of replicas available to serve requests

Check the Pods

bash
microk8s kubectl get pods

Expected output:

plaintext
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-6b7f675859-abc12   1/1     Running   0          45s
nginx-deployment-6b7f675859-def34   1/1     Running   0          45s
nginx-deployment-6b7f675859-ghi56   1/1     Running   0          45s

View Detailed Pod Information

For troubleshooting or detailed inspection:

bash
microk8s kubectl describe pod -l app=nginx

Watch Pods in Real-Time

Monitor pod status during deployments or scaling:

bash
microk8s kubectl get pods -l app=nginx -w

Step 2: Expose the Deployment Using MetalLB

Understanding Kubernetes Service Types

Before creating the LoadBalancer service, it's important to understand the different service types available:

| Service Type | External Access | IP Assignment | Use Case |
|--------------|-----------------|---------------|----------|
| ClusterIP | No (internal only) | Internal cluster IP | Inter-service communication |
| NodePort | Yes (node IP + port) | Uses node's IP + high port (30000-32767) | Development, testing |
| LoadBalancer | Yes (dedicated IP) | External IP from MetalLB pool | Production workloads |
| ExternalName | DNS-based | Maps to external DNS name | External service integration |

Service Manifest Breakdown

Create a file named nginx-service.yaml with the following content:

yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: LoadBalancer

Understanding the Service Manifest

| Field | Description |
|-------|-------------|
| spec.selector.app: nginx | Routes traffic to pods with label app: nginx |
| spec.ports.port: 80 | The port exposed by the service (external-facing) |
| spec.ports.targetPort: 80 | The port on the pod that receives traffic |
| spec.type: LoadBalancer | Instructs MetalLB to allocate an external IP |
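For comparison, kubectl can generate an equivalent Service from the existing Deployment with a client-side dry run; this is a quick way to double-check the selector and port fields in the manifest above:

bash
# Print the Service kubectl would create, without applying it
microk8s kubectl expose deployment nginx-deployment --name=nginx-service \
  --port=80 --target-port=80 --type=LoadBalancer --dry-run=client -o yaml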

How Traffic Flow Works

Client → MetalLB → Service → Pod(s)

| Step | Component | Address | Action |
|------|-----------|---------|--------|
| 1 | Client (Browser) | — | Sends HTTP request |
| 2 | MetalLB | 192.168.1.240 (external IP) | Receives traffic, responds to ARP |
| 3 | Service (nginx-service) | :80 | Load-balances across pods |
| 4 | Pod(s) (nginx) | :80 | Processes request, returns response |

  1. Client sends a request to the external IP assigned by MetalLB
  2. MetalLB (Layer 2 mode) responds to ARP requests for the external IP
  3. Service receives traffic and load-balances across available pods
  4. Pods process the request and return the response

Apply the Service:

bash
microk8s kubectl apply -f nginx-service.yaml

Check the Service and External IP

bash
microk8s kubectl get service nginx-service

Expected output:

plaintext
NAME            TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
nginx-service   LoadBalancer   10.152.183.45   192.168.1.240   80:31234/TCP   15s

The EXTERNAL-IP field shows the IP address allocated by MetalLB. This is the address you'll use to access your Nginx server from any device on your network.

Verify the Deployment is Accessible

Test the Nginx server using curl:

bash
curl http://$(microk8s kubectl get svc nginx-service -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

Or access it directly via browser at http://192.168.1.240 (replace with your actual external IP).
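Because MetalLB in Layer 2 mode answers ARP for the external IP, you can also confirm the announcement from another machine on the same LAN. A rough check (tool names vary by OS; 192.168.1.240 is the example IP from above):

bash
# The external IP should respond, and its neighbour entry should show the
# MAC address of the node currently elected by the MetalLB speaker
ping -c 1 192.168.1.240
ip neigh show 192.168.1.240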


Step 3: Check Cluster Health

Run the built-in inspection script, which collects diagnostics and reports any issues it detects:

bash
microk8s inspect

Additional Health Checks

Verify all components are functioning correctly:

bash
microk8s kubectl get all -o wide

Check MetalLB speaker pods (responsible for Layer 2 announcements):

bash
microk8s kubectl get pods -n metallb-system

View service endpoints to confirm pods are registered:

bash
microk8s kubectl get endpoints nginx-service

Expected output:

plaintext
NAME            ENDPOINTS                                AGE
nginx-service   10.1.28.5:80,10.1.28.6:80,10.1.28.7:80   2m

Advanced Configuration

Scaling the Deployment

Scale up or down based on demand:

bash
microk8s kubectl scale deployment nginx-deployment --replicas=5

Or change the replicas field by editing the Deployment object directly:

bash
microk8s kubectl edit deployment nginx-deployment
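If you want scaling to happen automatically, a HorizontalPodAutoscaler can adjust the replica count based on CPU usage. A sketch, assuming the metrics-server addon is enabled (microk8s enable metrics-server) and CPU requests are set on the container (see the resource constraints example later in this section):

bash
# Scale between 3 and 10 replicas, targeting 80% CPU utilisation
microk8s kubectl autoscale deployment nginx-deployment --min=3 --max=10 --cpu-percent=80
microk8s kubectl get hpa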

Request a Specific IP from MetalLB

You can request a specific IP address by adding an annotation to your service:

yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  annotations:
    metallb.universe.tf/loadBalancerIPs: "192.168.1.245"
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: LoadBalancer

Set Resource Requests and Limits

Add resource constraints to prevent pod starvation:

yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: "64Mi"
            cpu: "100m"
          limits:
            memory: "128Mi"
            cpu: "250m"

| Resource Field | Description |
|----------------|-------------|
| requests.memory | Guaranteed minimum memory allocation |
| requests.cpu | Guaranteed minimum CPU (100m = 0.1 CPU cores) |
| limits.memory | Maximum memory the container can use |
| limits.cpu | Maximum CPU the container can use |
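With requests and limits in place, you can compare them against what the pods actually consume. This assumes the metrics-server addon is enabled (microk8s enable metrics-server):

bash
# Show current CPU/memory usage per pod and how much of each node is allocated
microk8s kubectl top pods -l app=nginx
microk8s kubectl describe nodes | grep -A 8 "Allocated resources"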

Add Health Checks (Liveness and Readiness Probes)

Improve reliability with health checks:

yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
        livenessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 10
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 5

| Probe Type | Purpose |
|------------|---------|
| livenessProbe | Restarts container if probe fails (container is unhealthy) |
| readinessProbe | Removes pod from service endpoints if probe fails (traffic routing) |
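To confirm the probes were picked up and to watch how they behave, you can inspect the Deployment and filter recent events. Failed probes show up as Unhealthy events, and repeated liveness failures appear as container restarts:

bash
# Show the configured probes and any probe failures
microk8s kubectl describe deployment nginx-deployment | grep -E "Liveness|Readiness"
microk8s kubectl get events --field-selector reason=Unhealthy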

Troubleshooting

Common Issues and Solutions

| Issue | Cause | Solution |
|-------|-------|----------|
| Pods stuck in Pending | Insufficient resources or node issues | Check kubectl describe pod <pod-name> |
| EXTERNAL-IP is <pending> | MetalLB not configured or no IPs available | Verify the MetalLB addon is enabled and the IP pool is configured |
| Service not accessible | Firewall blocking traffic | Check network policies and host firewall rules |
| Pods in CrashLoopBackOff | Container startup failure | Check logs with kubectl logs <pod-name> |

Useful Debugging Commands

bash
# View pod logs
microk8s kubectl logs -l app=nginx --all-containers

# Describe service for detailed info
microk8s kubectl describe svc nginx-service

# Check events for recent activities
microk8s kubectl get events --sort-by=.metadata.creationTimestamp

# Debug networking
microk8s kubectl run debug --image=busybox -it --rm -- wget -qO- nginx-service:80

Cleanup

Remove all resources when done:

bash
microk8s kubectl delete -f nginx-service.yaml
microk8s kubectl delete -f nginx-deployment.yaml

Or delete by label:

bash
microk8s kubectl delete all -l app=nginx
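Either way, a quick check that nothing labelled app=nginx is left behind:

bash
# Should report that no resources were found once cleanup is complete
microk8s kubectl get all -l app=nginx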