Overview
This guide demonstrates how to deploy a container image from DockerHub to a K3s (lightweight Kubernetes) cluster. K3s is a certified Kubernetes distribution designed for production workloads in resource-constrained environments, edge computing, and IoT devices.
Key Concepts
| Component | Description |
|---|---|
| Deployment | Kubernetes resource that manages a set of replicated Pods, ensuring the desired state matches the actual state |
| Service | Exposes your Deployment to network traffic, either internally or externally via LoadBalancer |
| PersistentVolumeClaim (PVC) | Requests storage resources from the cluster for data persistence |
| Namespace | Virtual cluster that provides isolation between workloads |
Prerequisites
Before deploying, ensure your K3s cluster is running and kubectl is configured:
```bash
# Verify cluster connection
kubectl cluster-info

# Check node status
kubectl get nodes -o wide

# Create namespace if it doesn't exist
kubectl create namespace test-app --dry-run=client -o yaml | kubectl apply -f -
```

Deployment Without Persistent Storage
The following deployment uses an emptyDir volume, which provides temporary storage that exists only for the lifetime of the Pod. This approach is suitable for development and testing only: data is lost whenever the Pod restarts.
Understanding emptyDir Volumes
- Created when a Pod is assigned to a node
- Exists as long as the Pod runs on that node
- Initially empty, hence the name
- All containers in the Pod can read/write the same files
- Deleted permanently when the Pod is removed from the node
Create a Deployment YAML without PVC
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-deployment
  namespace: test-app
  labels:
    app: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      restartPolicy: Always
      containers:
        - name: postgres
          image: docker.io/dedkola/postgres:17
          imagePullPolicy: Always
          ports:
            - containerPort: 5432
          resources:
            requests:
              memory: "1Gi"
              cpu: "1000m"
            limits:
              memory: "2Gi"
              cpu: "2000m"
          volumeMounts:
            - name: postgres-storage
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: postgres-storage
          emptyDir: {} # For production, use PersistentVolumeClaim
---
apiVersion: v1
kind: Service
metadata:
  name: postgres-service
  namespace: test-app
spec:
  type: LoadBalancer
  selector:
    app: postgres
  ports:
    - protocol: TCP
      port: 5432
      targetPort: 5432
```

Understanding the Deployment Manifest
Metadata and Labels
Labels are key-value pairs attached to objects for identification and selection:
- `app: postgres` identifies this resource as part of the PostgreSQL application
- Used by Services and other resources to select Pods via `selector.matchLabels`
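The same label selector can be exercised from the command line to confirm which Pods the Service will route to. These commands are a sketch assuming the `test-app` namespace and `postgres-service` Service from this guide:

```shell
# List only the Pods matching the Deployment's selector
kubectl get pods -n test-app -l app=postgres

# The Service selects the same Pods; its endpoints should list their IPs
kubectl get endpoints postgres-service -n test-app
```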
Container Configuration
| Field | Value | Purpose |
|---|---|---|
| `image` | `docker.io/dedkola/postgres:17` | Full image path with registry, repository, and tag |
| `imagePullPolicy` | `Always` | Forces pulling the latest image on each Pod start |
| `containerPort` | `5432` | PostgreSQL default port exposed by the container |
Resource Requests vs Limits
Kubernetes uses requests and limits to manage container resources:
- Requests: Minimum guaranteed resources. The scheduler uses this to find a suitable node
- Limits: Maximum resources a container can use. Enforced by the kubelet
| Setting | Value | Description |
|---|---|---|
| Memory request | 1Gi | Guaranteed minimum memory |
| Memory limit | 2Gi | Maximum allowed (OOM kill if exceeded) |
| CPU request | 1000m | Guaranteed minimum CPU (one core) |
| CPU limit | 2000m | Maximum allowed (throttled at the limit) |
CPU Limits Behavior:
- Enforced via CPU throttling by the Linux kernel
- Container is throttled when approaching the limit
- Hard limit — containers cannot exceed their CPU limit
Memory Limits Behavior:
- Enforced via OOM (Out of Memory) kills by the kernel's cgroup memory controller
- A container that exceeds its memory limit is terminated with reason `OOMKilled`
- Enforcement is reactive: brief overages (for example, reclaimable page cache) may not trigger an immediate kill
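One way to confirm an OOM kill after the fact is to inspect the container's last terminated state. This is a sketch using the example Pod name that appears later in this guide:

```shell
# Show why the previous container instance terminated
# (prints "OOMKilled" if the memory limit was enforced)
kubectl get pod postgres-deployment-6577845884-8964z -n test-app \
  -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'

# A rising restart count is another quick signal of repeated OOM kills
kubectl get pod postgres-deployment-6577845884-8964z -n test-app \
  -o jsonpath='{.status.containerStatuses[0].restartCount}'
```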
Monitoring and Troubleshooting
Effective monitoring helps identify resource bottlenecks and application issues.
Check resources
View real-time CPU and memory usage for Pods in the namespace:
```bash
kubectl top pod -n test-app
```

Expected output format:

```
NAME                                   CPU(cores)   MEMORY(bytes)
postgres-deployment-6577845884-8964z   25m          256Mi
```

Note: The `kubectl top` command requires the Metrics Server to be installed. K3s includes it by default.
Additional Resource Monitoring Commands
```bash
# View resource usage for all nodes
kubectl top nodes

# Get detailed Pod resource information
kubectl get pod -n test-app -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].resources}{"\n"}{end}'

# Watch Pod status in real-time
kubectl get pods -n test-app -w
```

Check logs
Retrieve container logs for debugging:
```bash
kubectl logs postgres-deployment-6577845884-8964z -n test-app
```

Useful log options:

```bash
# Follow logs in real-time (like tail -f)
kubectl logs -f postgres-deployment-6577845884-8964z -n test-app

# Show last 100 lines only
kubectl logs --tail=100 postgres-deployment-6577845884-8964z -n test-app

# Show logs from the last hour
kubectl logs --since=1h postgres-deployment-6577845884-8964z -n test-app

# Show logs from previous container instance (after restart)
kubectl logs --previous postgres-deployment-6577845884-8964z -n test-app
```

Detailed Pod Inspection
```bash
kubectl describe pod postgres-deployment-6577845884-8964z -n test-app
```

The describe command provides comprehensive information including:
- Events: Recent events like scheduling, pulling images, container starts/restarts
- Conditions: Pod readiness, container status, initialization state
- Resource allocation: Actual requests and limits applied
- Volume mounts: Storage configuration and mount paths
- Node assignment: Which node the Pod is running on
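Events can also be pulled for the whole namespace at once, which is often faster than describing each Pod individually. A sketch of two common forms:

```shell
# List recent events in the namespace, oldest first
kubectl get events -n test-app --sort-by=.lastTimestamp

# Show only warnings (failed scheduling, image pull errors, OOM kills)
kubectl get events -n test-app --field-selector type=Warning
```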
Common Troubleshooting Scenarios
| Issue | Command | What to Look For |
|---|---|---|
| Pod not starting | kubectl describe pod <name> -n test-app | Events section for errors |
| Container crashing | kubectl logs --previous <pod> -n test-app | Error messages before crash |
| Image pull errors | kubectl get events -n test-app | ErrImagePull or ImagePullBackOff |
| Resource issues | kubectl top pod -n test-app | CPU/memory usage near limits |
Deployment With Persistent Storage
For production environments, always use PersistentVolumeClaims (PVC) to ensure data survives Pod restarts, rescheduling, and node failures.
Understanding PVC Access Modes
| Access Mode | Abbreviation | Description |
|---|---|---|
| `ReadWriteOnce` | RWO | Volume can be mounted read-write by a single node |
| `ReadOnlyMany` | ROX | Volume can be mounted read-only by many nodes |
| `ReadWriteMany` | RWX | Volume can be mounted read-write by many nodes |
For PostgreSQL: Use `ReadWriteOnce`, since database files should only be written by one instance at a time.
Storage Classes in K3s
K3s comes with a default StorageClass called local-path that provisions storage on the node's local filesystem:
```bash
# View available storage classes
kubectl get storageclass

# Describe the default K3s storage class
kubectl describe storageclass local-path
```

With PVC
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-deployment
  namespace: test-app
  labels:
    app: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      restartPolicy: Always
      containers:
        - name: postgres
          image: docker.io/dedkola/postgres:17
          imagePullPolicy: Always
          ports:
            - containerPort: 5432
          resources:
            requests:
              memory: "1Gi"
              cpu: "1000m"
            limits:
              memory: "2Gi"
              cpu: "2000m"
          volumeMounts:
            - name: postgres-storage
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: postgres-storage
          persistentVolumeClaim:
            claimName: postgres-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: postgres-service
  namespace: test-app
spec:
  type: LoadBalancer
  selector:
    app: postgres
  ports:
    - protocol: TCP
      port: 5432
      targetPort: 5432
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc
  namespace: test-app
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 10Gi
```

PVC Lifecycle and Binding
When you create a PVC, Kubernetes attempts to find a matching PersistentVolume (PV) or dynamically provision one:
| Status | Description |
|---|---|
| Pending | Waiting for a PV to be bound |
| Bound | Successfully bound to a PV |
| Released | PVC deleted, PV awaiting reclamation |
| Failed | Automatic reclamation failed |
Verify PVC Status
```bash
# Check PVC status
kubectl get pvc -n test-app

# Detailed PVC information
kubectl describe pvc postgres-pvc -n test-app

# Check bound PersistentVolume
kubectl get pv | grep postgres
```

Expected output:

```
NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
postgres-pvc   Bound    pvc-a1b2c3d4-e5f6-7890-abcd-ef1234567890   10Gi       RWO            local-path     5m
```

Applying the Deployment
Deploy your manifests to the cluster:
```bash
# Apply the deployment (save YAML to file first)
kubectl apply -f postgres-deployment.yaml

# Or apply directly from URL/stdin
cat <<EOF | kubectl apply -f -
# ... paste your YAML here ...
EOF
```

Verify Deployment Status
```bash
# Check deployment status
kubectl get deployment -n test-app

# Watch rollout progress
kubectl rollout status deployment/postgres-deployment -n test-app

# View all resources in namespace
kubectl get all -n test-app
```

Scale the Deployment
Warning: Scaling PostgreSQL replicas requires special consideration for data consistency. For stateful applications like databases, consider using StatefulSets instead.
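For orientation, a minimal StatefulSet for this workload might look like the sketch below. This manifest is illustrative and not part of this guide's deployment: the headless Service it references is assumed (StatefulSets expect one), and the storage values simply mirror the PVC used elsewhere in this guide.

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres                   # illustrative name, not defined elsewhere in this guide
  namespace: test-app
spec:
  serviceName: postgres-headless   # assumed headless Service, not created above
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: docker.io/dedkola/postgres:17
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: postgres-storage
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:            # one PVC is provisioned per replica
    - metadata:
        name: postgres-storage
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: local-path
        resources:
          requests:
            storage: 10Gi
```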
```bash
# Scale to 2 replicas (for stateless apps only)
kubectl scale deployment postgres-deployment --replicas=2 -n test-app

# View replica status
kubectl get pods -n test-app -l app=postgres
```

Connecting to PostgreSQL
From Inside the Cluster
Other Pods can connect using the Service DNS name:
```
postgres-service.test-app.svc.cluster.local:5432
```

From Outside the Cluster (LoadBalancer)
With type: LoadBalancer, K3s's bundled ServiceLB exposes the Service on a node IP; alternatively, MetalLB can assign a dedicated external IP from a pool:
```bash
# Get the external IP/port
kubectl get svc postgres-service -n test-app

# Connect using psql
psql -h <EXTERNAL-IP> -p 5432 -U postgres
```

Port Forwarding for Local Development
```bash
# Forward local port 5432 to the Pod
kubectl port-forward svc/postgres-service 5432:5432 -n test-app

# In another terminal, connect locally
psql -h localhost -p 5432 -U postgres
```

Best Practices
Security Recommendations
- Never store passwords in plain text — Use Kubernetes Secrets:
```bash
# Create a secret for database password
kubectl create secret generic postgres-secret \
  --from-literal=POSTGRES_PASSWORD='your-secure-password' \
  -n test-app
```

- Use Network Policies to restrict traffic to the database Pod
- Enable Pod Security Standards for production clusters
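To consume such a Secret, the container spec can reference it as an environment variable. The fragment below is a sketch assuming the `postgres-secret` created above and the standard `POSTGRES_PASSWORD` variable read by the official PostgreSQL image's entrypoint:

```yaml
# Fragment of the Deployment's container spec (not a complete manifest)
containers:
  - name: postgres
    image: docker.io/dedkola/postgres:17
    env:
      - name: POSTGRES_PASSWORD
        valueFrom:
          secretKeyRef:
            name: postgres-secret      # the Secret created above
            key: POSTGRES_PASSWORD     # the literal key stored in it
```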
Resource Management Tips
- Set requests based on typical usage patterns
- Set limits to prevent runaway resource consumption
- Monitor actual usage with `kubectl top` and adjust accordingly
- Use LimitRange to enforce default limits in a namespace
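A LimitRange that applies default requests and limits to containers in `test-app` might look like the following sketch; the default values here are illustrative, not taken from the manifests above:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
  namespace: test-app
spec:
  limits:
    - type: Container
      default:            # applied as limits when a container sets none
        memory: 512Mi
        cpu: 500m
      defaultRequest:     # applied as requests when a container sets none
        memory: 256Mi
        cpu: 250m
```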