Deploy DockerHub Image to K3s

Step-by-step guide to deploy a DockerHub image to a K3s cluster.

Overview

This guide demonstrates how to deploy a container image from DockerHub to a K3s (lightweight Kubernetes) cluster. K3s is a certified Kubernetes distribution designed for production workloads in resource-constrained environments, edge computing, and IoT devices.

Key Concepts

| Component | Description |
| --- | --- |
| Deployment | Kubernetes resource that manages a set of replicated Pods, ensuring the desired state matches the actual state |
| Service | Exposes your Deployment to network traffic, either internally or externally via LoadBalancer |
| PersistentVolumeClaim (PVC) | Requests storage resources from the cluster for data persistence |
| Namespace | Virtual cluster that provides isolation between workloads |

Prerequisites

Before deploying, ensure your K3s cluster is running and kubectl is configured:

```bash
# Verify cluster connection
kubectl cluster-info

# Check node status
kubectl get nodes -o wide

# Create namespace if it doesn't exist
kubectl create namespace test-app --dry-run=client -o yaml | kubectl apply -f -
```

Deployment Without Persistent Storage

The following deployment uses emptyDir volume, which provides temporary storage that exists only for the lifetime of the Pod. This approach is suitable for development and testing only — data will be lost when the Pod restarts.

Understanding emptyDir Volumes

  • Created when a Pod is assigned to a node
  • Exists as long as the Pod runs on that node
  • Initially empty, hence the name
  • All containers in the Pod can read/write the same files
  • Deleted permanently when the Pod is removed from the node

Create a Deployment YAML without PVC

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-deployment
  namespace: test-app
  labels:
    app: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      restartPolicy: Always
      containers:
        - name: postgres
          image: docker.io/dedkola/postgres:17
          imagePullPolicy: Always
          ports:
            - containerPort: 5432
          resources:
            requests:
              memory: "1Gi"
              cpu: "1000m"
            limits:
              memory: "2Gi"
              cpu: "2000m"
          volumeMounts:
            - name: postgres-storage
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: postgres-storage
          emptyDir: {} # For production, use a PersistentVolumeClaim
---
apiVersion: v1
kind: Service
metadata:
  name: postgres-service
  namespace: test-app
spec:
  type: LoadBalancer
  selector:
    app: postgres
  ports:
    - protocol: TCP
      port: 5432
      targetPort: 5432
```

Understanding the Deployment Manifest

Metadata and Labels

Labels are key-value pairs attached to objects for identification and selection:

  • app: postgres — identifies this resource as part of the PostgreSQL application
  • Used by Services and other resources to select Pods via selector.matchLabels
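For example, the same selector can be exercised directly from the command line to target these Pods:

```bash
# List only the Pods carrying the app=postgres label
kubectl get pods -n test-app -l app=postgres
```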

Container Configuration

| Field | Value | Purpose |
| --- | --- | --- |
| `image` | `docker.io/dedkola/postgres:17` | Full image path with registry, repository, and tag |
| `imagePullPolicy` | `Always` | Forces pulling the latest image on each Pod start |
| `containerPort` | `5432` | PostgreSQL default port exposed by the container |

Resource Requests vs Limits

Kubernetes uses requests and limits to manage container resources:

  • Requests: Minimum guaranteed resources. The scheduler uses this to find a suitable node
  • Limits: Maximum resources a container can use. Enforced by the kubelet
| Resource | Value | Description |
| --- | --- | --- |
| Request | 1Gi | Guaranteed minimum memory |
| Limit | 2Gi | Maximum allowed (OOM kill if exceeded) |

CPU Limits Behavior:

  • Enforced via CPU throttling by the Linux kernel
  • Container is throttled when approaching the limit
  • Hard limit — containers cannot exceed their CPU limit

Memory Limits Behavior:

  • Enforced via OOM (Out of Memory) kills
  • Container may be terminated if it exceeds memory limit under memory pressure
  • Reactive enforcement — brief overages may not trigger immediate termination
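One way to confirm an OOM kill after the fact (the Pod name here is illustrative) is to inspect the last terminated state of the container:

```bash
# Check why the previous container instance was terminated;
# "OOMKilled" means the memory limit was exceeded
kubectl get pod postgres-deployment-6577845884-8964z -n test-app \
  -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'
```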

Monitoring and Troubleshooting

Effective monitoring helps identify resource bottlenecks and application issues.

Check resources

View real-time CPU and memory usage for Pods in the namespace:

```bash
kubectl top pod -n test-app
```

Expected output format:

```plaintext
NAME                                   CPU(cores)   MEMORY(bytes)
postgres-deployment-6577845884-8964z   25m          256Mi
```

Note: The kubectl top command requires the Metrics Server to be installed. K3s includes it by default.
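If `kubectl top` returns an error, verify that the Metrics Server is actually running (on a default K3s install it lives in `kube-system`):

```bash
# Confirm the Metrics Server deployment is available
kubectl get deployment metrics-server -n kube-system
```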

Additional Resource Monitoring Commands

```bash
# View resource usage for all nodes
kubectl top nodes

# Get detailed Pod resource information
kubectl get pod -n test-app -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].resources}{"\n"}{end}'

# Watch Pod status in real-time
kubectl get pods -n test-app -w
```

Check logs

Retrieve container logs for debugging:

```bash
kubectl logs postgres-deployment-6577845884-8964z -n test-app
```

Useful log options:

```bash
# Follow logs in real-time (like tail -f)
kubectl logs -f postgres-deployment-6577845884-8964z -n test-app

# Show last 100 lines only
kubectl logs --tail=100 postgres-deployment-6577845884-8964z -n test-app

# Show logs from the last hour
kubectl logs --since=1h postgres-deployment-6577845884-8964z -n test-app

# Show logs from previous container instance (after restart)
kubectl logs --previous postgres-deployment-6577845884-8964z -n test-app
```

Detailed Pod Inspection

Inspect a Pod in depth with `kubectl describe`:

```bash
kubectl describe pod postgres-deployment-6577845884-8964z -n test-app
```

The describe command provides comprehensive information including:

  • Events: Recent events like scheduling, pulling images, container starts/restarts
  • Conditions: Pod readiness, container status, initialization state
  • Resource allocation: Actual requests and limits applied
  • Volume mounts: Storage configuration and mount paths
  • Node assignment: Which node the Pod is running on

Common Troubleshooting Scenarios

| Issue | Command | What to Look For |
| --- | --- | --- |
| Pod not starting | `kubectl describe pod <name> -n test-app` | Events section for errors |
| Container crashing | `kubectl logs --previous <pod> -n test-app` | Error messages before the crash |
| Image pull errors | `kubectl get events -n test-app` | ErrImagePull or ImagePullBackOff |
| Resource issues | `kubectl top pod -n test-app` | CPU/memory usage near limits |

Deployment With Persistent Storage

For production environments, always use PersistentVolumeClaims (PVC) to ensure data survives Pod restarts, rescheduling, and node failures.

Understanding PVC Access Modes

| Access Mode | Abbreviation | Description |
| --- | --- | --- |
| ReadWriteOnce | RWO | Volume can be mounted read-write by a single node |
| ReadOnlyMany | ROX | Volume can be mounted read-only by many nodes |
| ReadWriteMany | RWX | Volume can be mounted read-write by many nodes |

For PostgreSQL: Use ReadWriteOnce since database files should only be written by one instance at a time.

Storage Classes in K3s

K3s comes with a default StorageClass called local-path that provisions storage on the node's local filesystem:

```bash
# View available storage classes
kubectl get storageclass

# Describe the default K3s storage class
kubectl describe storageclass local-path
```
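On a default K3s install, the local-path provisioner creates volume directories under `/var/lib/rancher/k3s/storage` on the node (this path assumes an unmodified configuration):

```bash
# Inspect dynamically provisioned volume directories on the node
ls -l /var/lib/rancher/k3s/storage
```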

Create a Deployment YAML with PVC

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-deployment
  namespace: test-app
  labels:
    app: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      restartPolicy: Always
      containers:
        - name: postgres
          image: docker.io/dedkola/postgres:17
          imagePullPolicy: Always
          ports:
            - containerPort: 5432
          resources:
            requests:
              memory: "1Gi"
              cpu: "1000m"
            limits:
              memory: "2Gi"
              cpu: "2000m"
          volumeMounts:
            - name: postgres-storage
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: postgres-storage
          persistentVolumeClaim:
            claimName: postgres-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: postgres-service
  namespace: test-app
spec:
  type: LoadBalancer
  selector:
    app: postgres
  ports:
    - protocol: TCP
      port: 5432
      targetPort: 5432
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc
  namespace: test-app
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 10Gi
```

PVC Lifecycle and Binding

When you create a PVC, Kubernetes attempts to find a matching PersistentVolume (PV) or dynamically provision one:

| Status | Description |
| --- | --- |
| Pending | Waiting for a PV to be bound |
| Bound | Successfully bound to a PV |
| Released | PVC deleted, PV awaiting reclamation |
| Failed | Automatic reclamation failed |

Verify PVC Status

```bash
# Check PVC status
kubectl get pvc -n test-app

# Detailed PVC information
kubectl describe pvc postgres-pvc -n test-app

# Check bound PersistentVolume
kubectl get pv | grep postgres
```

Expected output:

```plaintext
NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
postgres-pvc   Bound    pvc-a1b2c3d4-e5f6-7890-abcd-ef1234567890   10Gi       RWO            local-path     5m
```

Applying the Deployment

Deploy your manifests to the cluster:

```bash
# Apply the deployment (save YAML to file first)
kubectl apply -f postgres-deployment.yaml

# Or apply directly from stdin
cat <<EOF | kubectl apply -f -
# ... paste your YAML here ...
EOF
```

Verify Deployment Status

```bash
# Check deployment status
kubectl get deployment -n test-app

# Watch rollout progress
kubectl rollout status deployment/postgres-deployment -n test-app

# View all resources in namespace
kubectl get all -n test-app
```

Scale the Deployment

Warning: Scaling PostgreSQL replicas requires special consideration for data consistency. For stateful applications like databases, consider using StatefulSets instead.

```bash
# Scale to 2 replicas (for stateless apps only)
kubectl scale deployment postgres-deployment --replicas=2 -n test-app

# View replica status
kubectl get pods -n test-app -l app=postgres
```

Connecting to PostgreSQL

From Inside the Cluster

Other Pods can connect using the Service DNS name:

```plaintext
postgres-service.test-app.svc.cluster.local:5432
```
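As a quick in-cluster connectivity test (assuming a `postgres` user and the stock PostgreSQL client image), you can launch a throwaway client Pod:

```bash
# Start a temporary Pod with psql and connect via the Service DNS name;
# the Pod is removed automatically when the session ends
kubectl run psql-client --rm -it --image=postgres:17 -n test-app -- \
  psql -h postgres-service.test-app.svc.cluster.local -p 5432 -U postgres
```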

From Outside the Cluster (LoadBalancer)

With type: LoadBalancer, K3s's built-in ServiceLB (Klipper) exposes the Service on a node IP; install MetalLB instead if you need addresses assigned from a dedicated pool:

```bash
# Get the external IP/port
kubectl get svc postgres-service -n test-app

# Connect using psql
psql -h <EXTERNAL-IP> -p 5432 -U postgres
```

Port Forwarding for Local Development

```bash
# Forward local port 5432 to the Service
kubectl port-forward svc/postgres-service 5432:5432 -n test-app

# In another terminal, connect locally
psql -h localhost -p 5432 -U postgres
```

Best Practices

Security Recommendations

  1. Never store passwords in plain text — use Kubernetes Secrets:

```bash
# Create a secret for the database password
kubectl create secret generic postgres-secret \
  --from-literal=POSTGRES_PASSWORD='your-secure-password' \
  -n test-app
```

  2. Use Network Policies to restrict traffic to the database Pod
  3. Enable Pod Security Standards for production clusters
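To consume a secret like the one created in step 1, the Deployment's container spec can reference it through `valueFrom` (a fragment sketch; the secret name and key match the command above):

```yaml
# Excerpt from the Deployment's container spec
containers:
  - name: postgres
    image: docker.io/dedkola/postgres:17
    env:
      - name: POSTGRES_PASSWORD
        valueFrom:
          secretKeyRef:
            name: postgres-secret
            key: POSTGRES_PASSWORD
```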

Resource Management Tips

  • Set requests based on typical usage patterns
  • Set limits to prevent runaway resource consumption
  • Monitor actual usage with kubectl top and adjust accordingly
  • Use LimitRange to enforce default limits in a namespace
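A minimal LimitRange for the last tip might look like this (the default values are illustrative, mirroring the Deployment above):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
  namespace: test-app
spec:
  limits:
    - type: Container
      default:            # limits applied when a container sets none
        memory: "2Gi"
        cpu: "2000m"
      defaultRequest:     # requests applied when a container sets none
        memory: "1Gi"
        cpu: "1000m"
```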