Overview
This guide covers the complete installation of MicroK8s — a lightweight, single-package Kubernetes distribution developed by Canonical — along with Helm (the Kubernetes package manager) and OneDev (a self-hosted Git server with CI/CD capabilities).
What is MicroK8s?
MicroK8s is a minimal, CNCF-certified Kubernetes distribution designed for:
- Developer workstations — Quick local Kubernetes environment
- Edge computing — Low resource footprint ideal for IoT devices
- CI/CD pipelines — Fast cluster bootstrapping for testing
- Production environments — High availability clusters with minimal overhead
Key characteristics:
| Feature | Description |
|---|---|
| Memory footprint | ~540MB RAM minimum |
| Startup time | Less than 30 seconds |
| Package format | Snap package with automatic updates |
| CNI | Calico (default), Flannel, Cilium available |
| Container runtime | containerd |
What is Helm?
Helm is the package manager for Kubernetes that helps you define, install, and upgrade complex Kubernetes applications. Helm uses Charts — pre-configured packages of Kubernetes resources.
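As an illustration of what a chart looks like, scaffolding a throwaway chart (the name mychart is arbitrary) shows the standard layout. This requires Helm, which is installed later in this guide:

```bash
# Generate a starter chart and inspect its structure
helm create mychart
ls mychart   # Chart.yaml (metadata), values.yaml (defaults), templates/ (manifests), charts/ (dependencies)
```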
What is OneDev?
OneDev is an all-in-one DevOps platform featuring:
- Git repository management
- Issue tracking with custom workflows
- Built-in CI/CD engine
- Code search and navigation
- Pull request support
Install MicroK8s
The --classic flag grants the snap full system access, which is required for MicroK8s to manage containers, networking, and storage.
```bash
sudo snap install microk8s --classic
```
Add user to microk8s group
After installation, add your user to the microk8s group to avoid using sudo for every command. Replace ded with your actual username.
The -a flag appends the user to the group without removing them from other groups. The -G flag specifies the group name.
```bash
sudo usermod -a -G microk8s ded
sudo chown -R ded ~/.kube
```
Important: After adding yourself to the group, either log out and log back in, or run `newgrp microk8s` to apply the group membership immediately.
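To confirm the change took effect, a quick check along these lines should report the membership (assuming the username ded from above):

```bash
# Verify the user now belongs to the microk8s group
id -nG ded | grep -qw microk8s && echo "membership active"
```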
Verify Installation
Before enabling addons, verify that MicroK8s is running correctly:
```bash
# Check MicroK8s version
microk8s version

# Wait for cluster to be ready (blocks until ready)
microk8s status --wait-ready
```
Expected output:
```
microk8s is running
high-availability: no
datastore master nodes: 127.0.0.1:19001
datastore standby nodes: none
```
Enable Services (Addons)
MicroK8s uses an addon system to enable additional functionality. Each addon installs and configures specific Kubernetes components.
Addon Reference Table
| Addon | Purpose | Resource Usage |
|---|---|---|
| `dashboard` | Web-based Kubernetes UI for cluster management | Low |
| `metallb` | Bare-metal load balancer for external IP allocation | Low |
| `ingress` | NGINX-based ingress controller for HTTP/HTTPS routing | Medium |
| `storage` | Hostpath storage provisioner for PersistentVolumeClaims | Low |
| `dns` | CoreDNS for internal cluster DNS resolution | Low |
| `registry` | Private container registry (localhost:32000) | Medium |
| `community` | Access to community-maintained addons | None |
| `istio` | Service mesh for traffic management and security | High |
Enable Core Addons
```bash
microk8s status --wait-ready
microk8s enable dashboard
microk8s enable metallb
microk8s enable ingress
microk8s enable storage
microk8s enable dns
microk8s enable registry
microk8s enable community
microk8s enable istio
```
Note on MetalLB: When enabling MetalLB, you'll be prompted to enter an IP address range. Choose a range within your LAN subnet that's not used by DHCP. For example:
```
192.168.1.240-192.168.1.250
```
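The prompt can also be skipped by passing the range directly when enabling the addon; a sketch using the example range above:

```bash
# Enable MetalLB non-interactively with an explicit address pool
microk8s enable metallb:192.168.1.240-192.168.1.250
```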
Understanding Each Addon
Dashboard — Provides a web UI for visualizing cluster resources, deploying applications, and troubleshooting.
MetalLB — Essential for bare-metal Kubernetes. In cloud environments, LoadBalancer services get external IPs from the cloud provider. MetalLB provides this functionality on-premises by announcing IPs via ARP (Layer 2) or BGP (Layer 3).
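To see which addresses MetalLB will hand out, you can inspect its configuration resources. The CRD names below assume the addon deploys MetalLB 0.13+ with CRD-based configuration; older releases used a ConfigMap instead:

```bash
# List MetalLB address pools and Layer 2 announcements
microk8s kubectl get ipaddresspools.metallb.io -n metallb-system
microk8s kubectl get l2advertisements.metallb.io -n metallb-system
```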
Ingress — Routes external HTTP/HTTPS traffic to internal services based on hostnames and paths. Uses NGINX by default.
Storage — Enables the microk8s-hostpath StorageClass, allowing dynamic provisioning of PersistentVolumes on the node's filesystem.
DNS (CoreDNS) — Provides service discovery within the cluster. Services can be reached via <service>.<namespace>.svc.cluster.local.
Registry — Local container registry accessible at localhost:32000. Useful for development workflows without pushing to external registries.
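As a sketch of the workflow this enables (assuming Docker is installed and a local image named myapp exists):

```bash
# Tag a local image for the MicroK8s registry and push it
docker tag myapp:latest localhost:32000/myapp:latest
docker push localhost:32000/myapp:latest
```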
Istio — Service mesh providing:
- Traffic management (load balancing, canary deployments)
- Security (mTLS, authorization policies)
- Observability (metrics, tracing, logging)
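Because Istio is resource-hungry, it's worth confirming its control plane actually came up after enabling the addon; istio-system is the namespace Istio conventionally installs into:

```bash
# Check that Istio control-plane pods are running
microk8s kubectl get pods -n istio-system
```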
Monitoring and Troubleshooting
Check Cluster Status
Use these commands to inspect the state of your cluster, diagnose issues, and verify that components are running correctly.
```bash
microk8s kubectl get all --all-namespaces
microk8s kubectl get svc
microk8s kubectl get pods -n metallb-system
microk8s kubectl get all -n onedev
microk8s helm list -n onedev
microk8s kubectl get events -n onedev
microk8s kubectl get pods -n onedev
microk8s kubectl describe node
```
Command Reference
| Command | Description |
|---|---|
| `get all --all-namespaces` | Lists all resources (pods, services, deployments, etc.) across all namespaces |
| `get svc` | Lists services in the default namespace |
| `get pods -n <namespace>` | Lists pods in a specific namespace |
| `get events -n <namespace>` | Shows recent events (useful for debugging failed deployments) |
| `describe node` | Detailed node information including capacity, conditions, and allocated resources |
| `helm list -n <namespace>` | Lists Helm releases in a namespace |
Additional Diagnostic Commands
```bash
# Check cluster component health
microk8s kubectl get componentstatuses

# View logs from a specific pod
microk8s kubectl logs <pod-name> -n <namespace>

# Follow logs in real-time
microk8s kubectl logs -f <pod-name> -n <namespace>

# Get detailed pod information
microk8s kubectl describe pod <pod-name> -n <namespace>

# Check resource usage (requires metrics-server addon)
microk8s enable metrics-server
microk8s kubectl top nodes
microk8s kubectl top pods -n <namespace>

# View cluster events sorted by time
microk8s kubectl get events --sort-by='.lastTimestamp' -A
```
Install Helm
Helm is installed separately from MicroK8s. While MicroK8s provides a built-in microk8s helm3 command, installing Helm as a standalone tool provides more flexibility.
Helm Architecture
Helm 3 uses a client-only architecture:
- Helm CLI — Command-line tool that communicates directly with the Kubernetes API
- Charts — Packages containing templated Kubernetes manifests
- Releases — Instances of charts deployed to a cluster
- Repositories — Servers hosting chart packages
```bash
sudo snap install helm --classic
helm repo add onedev https://dl.cloudsmith.io/public/onedev/onedev/helm/charts
helm repo update onedev
```
Essential Helm Commands
```bash
# List all configured repositories
helm repo list

# Search for charts in repositories
helm search repo onedev

# Show chart information
helm show chart onedev/onedev

# Show default values for a chart
helm show values onedev/onedev

# Download a chart locally without installing
helm pull onedev/onedev --untar
```
Configure kubectl
By default, MicroK8s uses its own microk8s kubectl wrapper. To use the standard kubectl command with your MicroK8s cluster, you need to export the kubeconfig.
What is kubeconfig?
The kubeconfig file (~/.kube/config) contains:
- Clusters — Kubernetes API server endpoints and CA certificates
- Users — Authentication credentials (certificates, tokens)
- Contexts — Mappings between clusters and users
```bash
sudo snap install kubectl --classic
sudo microk8s kubectl config view --raw > ~/.kube/config
kubectl get nodes
```
Security Note: The exported kubeconfig contains cluster admin credentials. Ensure `~/.kube/config` has restrictive permissions: `chmod 600 ~/.kube/config`
Working with Multiple Clusters
If you manage multiple Kubernetes clusters, use contexts:
```bash
# View current context
kubectl config current-context

# List all contexts
kubectl config get-contexts

# Switch to a different context
kubectl config use-context <context-name>

# View cluster info
kubectl cluster-info
```
Access Kubernetes Dashboard
The Kubernetes Dashboard provides a web-based UI for cluster management. By default, it's only accessible from localhost for security.
Dashboard Access Methods
Method 1: Dashboard Proxy (Simple)
This creates a secure tunnel to the dashboard:
```bash
microk8s dashboard-proxy
```
Method 2: Screen Session (Persistent)
For long-running access, use a screen session to keep the proxy running in the background:
```bash
screen -S dashboard-proxy
```
Then run `microk8s dashboard-proxy` inside the screen session. Detach with Ctrl+A, D.
For more on Screen, see the related guide: Using Screen with MicroK8s (dashboard proxy example).
Method 3: Port Forward (Manual)
```bash
# Get the dashboard token
microk8s kubectl describe secret -n kube-system microk8s-dashboard-token

# Forward the port
microk8s kubectl port-forward -n kube-system service/kubernetes-dashboard 10443:443

# Access at: https://localhost:10443
```
Exposing Dashboard on LAN (Not Recommended for Production)
To access the dashboard from other machines on your network, you can create a NodePort service:
```bash
# Create NodePort service for dashboard
microk8s kubectl expose deployment kubernetes-dashboard \
  -n kube-system \
  --type=NodePort \
  --port=443 \
  --target-port=8443 \
  --name=dashboard-nodeport
```
Warning: Exposing the dashboard externally is a security risk. Use VPN or SSH tunneling for remote access instead.
Install OneDev
OneDev is deployed using Helm with automatic namespace creation. The -n onedev flag specifies the target namespace, and --create-namespace creates it if it doesn't exist.
Pre-installation Checklist
Before installing OneDev, ensure:
- Storage addon is enabled (`microk8s enable storage`)
- DNS addon is enabled (`microk8s enable dns`)
- Sufficient disk space for Git repositories
- MetalLB configured (if exposing externally)
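A quick pre-flight sketch covering these checks; the data path /var/snap/microk8s/common and the short status format are assumptions that may vary by MicroK8s version:

```bash
# Confirm the required addons are enabled
microk8s status --format short | grep -E 'dns|storage|metallb'

# Check free disk space where MicroK8s stores its data
df -h /var/snap/microk8s/common
```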
```bash
helm install onedev onedev/onedev -n onedev --create-namespace
```
Understanding Helm Install Command
| Component | Description |
|---|---|
| `helm install` | Command to deploy a new release |
| `onedev` (first) | Release name — identifier for this deployment |
| `onedev/onedev` | Chart reference — `<repo>/<chart>` format |
| `-n onedev` | Target namespace |
| `--create-namespace` | Create namespace if it doesn't exist |
Customizing Installation with Values
To customize the installation, create a values.yaml file:
```yaml
# values.yaml - OneDev custom configuration
persistence:
  enabled: true
  size: 50Gi
  storageClass: microk8s-hostpath

resources:
  requests:
    memory: "1Gi"
    cpu: "500m"
  limits:
    memory: "4Gi"
    cpu: "2000m"

# MySQL database settings (if using external database)
mysql:
  enabled: false

# Ingress configuration
ingress:
  enabled: true
  className: nginx
  hosts:
    - host: onedev.local
      paths:
        - path: /
          pathType: Prefix
```
Then install with custom values:
```bash
helm install onedev onedev/onedev -n onedev --create-namespace -f values.yaml
```
Verify Installation
```bash
# Watch pods until they're ready
kubectl get pods -n onedev -w

# Check persistent volume claims
kubectl get pvc -n onedev

# View OneDev logs
kubectl logs -l app=onedev -n onedev --tail=100
```
Expose OneDev to LAN
By default, OneDev uses a ClusterIP service, accessible only within the cluster. To access it from your LAN, upgrade the release to use a LoadBalancer service type.
Service Types Explained
| Type | Accessibility | Use Case |
|---|---|---|
| `ClusterIP` | Internal only | Inter-pod communication |
| `NodePort` | Node IP:Port | Direct access via node |
| `LoadBalancer` | External IP | Production-grade external access |
```bash
helm upgrade onedev onedev/onedev -n onedev --set service.type=LoadBalancer --reuse-values
```
Understanding the Upgrade Command
| Flag | Description |
|---|---|
| `--set service.type=LoadBalancer` | Override specific value |
| `--reuse-values` | Keep all previously set values |
Verify External IP Assignment
After the upgrade, MetalLB will assign an external IP:
```bash
# Check service external IP
kubectl get svc -n onedev

# Expected output:
# NAME     TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)
# onedev   LoadBalancer   10.152.183.XX   192.168.1.240   80:XXXXX/TCP
```
Alternative: Using NodePort
If MetalLB isn't configured, use NodePort:
```bash
helm upgrade onedev onedev/onedev -n onedev --set service.type=NodePort --set service.nodePort=30080 --reuse-values
```
Access at: http://<node-ip>:30080
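If you're unsure of the node IP, one way to look it up (prints the first node's InternalIP):

```bash
# Print the first node's internal IP address
kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}'
```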
Configuring Ingress for Domain Access
For hostname-based access, configure an Ingress resource:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: onedev-ingress
  namespace: onedev
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: "100m"
spec:
  ingressClassName: nginx
  rules:
    - host: onedev.yourdomain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: onedev
                port:
                  number: 80
```
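To put the manifest to work, save it to a file (the name onedev-ingress.yaml is arbitrary), apply it, and test routing; the curl check is a sketch that assumes you know the ingress controller's IP:

```bash
# Apply the Ingress and verify host-based routing
kubectl apply -f onedev-ingress.yaml
curl -H "Host: onedev.yourdomain.com" http://<ingress-ip>/
```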
Namespace Management
Namespaces provide logical isolation for Kubernetes resources. They're essential for:
- Multi-tenancy — Separate teams or projects
- Resource quotas — Limit CPU/memory per namespace
- Network policies — Control traffic between namespaces
- RBAC — Fine-grained access control
Viewing Namespaces
```bash
kubectl get namespaces
kubectl get pods --all-namespaces
```
Default Namespaces
| Namespace | Purpose |
|---|---|
| `default` | Default namespace for resources without explicit namespace |
| `kube-system` | Kubernetes system components (API server, scheduler, etc.) |
| `kube-public` | Publicly accessible resources |
| `kube-node-lease` | Node heartbeat data for health monitoring |
Working with Namespaces
Set the default namespace for kubectl commands:
```bash
kubectl config set-context --current --namespace=default
```
Additional Namespace Operations
```bash
# Create a new namespace
kubectl create namespace development

# Delete a namespace (WARNING: deletes all resources within)
kubectl delete namespace development

# View resources in a specific namespace
kubectl get all -n kube-system

# View current namespace context
kubectl config view --minify | grep namespace
```
Resource Quotas
Limit resource consumption per namespace:
```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: development
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "20"
```
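A quick sketch of applying and inspecting the quota (the file name quota.yaml is arbitrary; the development namespace must already exist):

```bash
# Apply the quota and review current usage against the limits
kubectl apply -f quota.yaml
kubectl describe resourcequota compute-quota -n development
```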
Export and Backup Configurations
Exporting Kubernetes resources to YAML files is essential for:
- Backup and disaster recovery
- Version control — Store configurations in Git
- Migration — Move resources between clusters
- Documentation — Reference current state
Export Service Configuration
```bash
kubectl get svc mycluster -o yaml > mycluster.yaml
```
Additional Export Commands
```bash
# Export deployment
kubectl get deployment <name> -o yaml > deployment.yaml

# Export all resources in a namespace
kubectl get all -n onedev -o yaml > onedev-backup.yaml

# Export specific resource types
kubectl get configmaps,secrets -n onedev -o yaml > onedev-configs.yaml

# Export without cluster-specific metadata (cleaner for reuse)
kubectl get deployment <name> -o yaml | \
  kubectl neat > deployment-clean.yaml
```
Note: `kubectl neat` is a plugin that removes cluster-specific fields. Install with: `kubectl krew install neat`
Helm Release Backup
```bash
# Export Helm release values
helm get values onedev -n onedev > onedev-values.yaml

# Export all Helm release info
helm get all onedev -n onedev > onedev-release.yaml

# Export Helm release manifest
helm get manifest onedev -n onedev > onedev-manifest.yaml
```
Maintenance Commands
MicroK8s Lifecycle
```bash
# Stop MicroK8s (preserves data)
microk8s stop

# Start MicroK8s
microk8s start

# Completely reset MicroK8s (WARNING: destroys all data)
microk8s reset

# Uninstall MicroK8s
sudo snap remove microk8s
```
Helm Release Management
```bash
# View release history
helm history onedev -n onedev

# Rollback to previous version
helm rollback onedev -n onedev

# Rollback to specific revision
helm rollback onedev 2 -n onedev

# Uninstall release
helm uninstall onedev -n onedev
```
Storage Management
```bash
# List persistent volumes
kubectl get pv

# List persistent volume claims
kubectl get pvc --all-namespaces

# Describe PVC for troubleshooting
kubectl describe pvc <pvc-name> -n <namespace>

# Delete stuck PVC (after removing finalizers if needed)
kubectl patch pvc <pvc-name> -n <namespace> \
  -p '{"metadata":{"finalizers":null}}'
kubectl delete pvc <pvc-name> -n <namespace>
```
Troubleshooting Guide
Common Issues and Solutions
| Issue | Possible Cause | Solution |
|---|---|---|
| Pod stuck in `Pending` | Insufficient resources | Check node capacity: `kubectl describe node` |
| Pod in `CrashLoopBackOff` | Application error | Check logs: `kubectl logs <pod>` |
| PVC stuck in `Pending` | Storage not available | Enable storage addon: `microk8s enable storage` |
| No external IP | MetalLB not configured | Configure MetalLB with IP range |
| DNS resolution fails | CoreDNS not running | Check: `kubectl get pods -n kube-system -l k8s-app=kube-dns` |
Debug a Running Pod
```bash
# Execute shell in pod
kubectl exec -it <pod-name> -n <namespace> -- /bin/sh

# Copy files from/to pod
kubectl cp <pod-name>:/path/to/file ./local-file -n <namespace>
kubectl cp ./local-file <pod-name>:/path/to/file -n <namespace>

# Port forward for debugging
kubectl port-forward pod/<pod-name> 8080:80 -n <namespace>
```
Network Debugging
```bash
# Test DNS resolution from within cluster
kubectl run debug --image=busybox --rm -it --restart=Never -- nslookup kubernetes

# Check endpoints
kubectl get endpoints -n <namespace>

# Verify network policies
kubectl get networkpolicies -n <namespace>
```
High Availability Setup
For production environments, MicroK8s supports high availability clustering with multiple nodes.
Adding Nodes to Cluster
On the master node, generate a join token:
```bash
microk8s add-node
```
This outputs a command like:
```
microk8s join 192.168.1.100:25000/abc123xyz...
```
On the worker node:
```bash
# Install MicroK8s
sudo snap install microk8s --classic

# Join the cluster
microk8s join 192.168.1.100:25000/abc123xyz...
```
Verify Cluster Nodes
```bash
# List all nodes in cluster
kubectl get nodes -o wide

# Check node roles
kubectl get nodes --show-labels
```
HA Considerations
| Configuration | Nodes Required | Fault Tolerance |
|---|---|---|
| Single node | 1 | None |
| 3-node HA | 3 | 1 node failure |
| 5-node HA | 5 | 2 node failures |
Note: For HA datastore, MicroK8s uses Dqlite (distributed SQLite). At least 3 nodes are recommended for production HA.
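Once three or more nodes have joined, the status output should reflect this; a quick check:

```bash
# Confirm high availability is active and list the datastore nodes
microk8s status | grep -A3 high-availability
```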
Security Best Practices
RBAC Configuration
Create a limited service account for CI/CD pipelines:
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cicd-deployer
  namespace: onedev
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployer-role
  namespace: onedev
rules:
  - apiGroups: ["", "apps", "extensions"]
    resources: ["deployments", "services", "pods", "configmaps", "secrets"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deployer-binding
  namespace: onedev
subjects:
  - kind: ServiceAccount
    name: cicd-deployer
    namespace: onedev
roleRef:
  kind: Role
  name: deployer-role
  apiGroup: rbac.authorization.k8s.io
```
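After applying the manifest, the binding can be sanity-checked without ever handing out the account's credentials; note that `kubectl create token` requires kubectl 1.24 or newer:

```bash
# Impersonate the service account and check what it is allowed to do
kubectl auth can-i create deployments -n onedev \
  --as=system:serviceaccount:onedev:cicd-deployer

# Issue a short-lived token for the CI/CD system to use
kubectl create token cicd-deployer -n onedev
```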
Network Policies
Restrict traffic to the OneDev namespace:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: onedev-network-policy
  namespace: onedev
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: ingress-nginx
      ports:
        - protocol: TCP
          port: 80
  egress:
    - to:
        - namespaceSelector: {}
      ports:
        - protocol: TCP
          port: 443
        - protocol: TCP
          port: 53
        - protocol: UDP
          port: 53
```
Related Resources
- Using Screen with MicroK8s
- Deploy MySQL on Kubernetes
- Deploy DockerHub Image to K3s
- GitLab Runner Setup