MicroK8s install with Helm and OneDev

Overview

This guide covers the complete installation of MicroK8s — a lightweight, single-package Kubernetes distribution developed by Canonical — along with Helm (the Kubernetes package manager) and OneDev (a self-hosted Git server with CI/CD capabilities).

What is MicroK8s?

MicroK8s is a minimal, CNCF-certified Kubernetes distribution designed for:

  • Developer workstations — Quick local Kubernetes environment
  • Edge computing — Low resource footprint ideal for IoT devices
  • CI/CD pipelines — Fast cluster bootstrapping for testing
  • Production environments — High availability clusters with minimal overhead

Key characteristics:

Feature | Description
Memory footprint | ~540MB RAM minimum
Startup time | Less than 30 seconds
Package format | Snap package with automatic updates
CNI | Calico (default); Flannel and Cilium available
Container runtime | containerd

What is Helm?

Helm is the package manager for Kubernetes that helps you define, install, and upgrade complex Kubernetes applications. Helm uses Charts — pre-configured packages of Kubernetes resources.
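
As a quick illustration, `helm create` scaffolds a chart with the standard layout (mychart is a placeholder name):

bash
# Scaffold a new chart to inspect the standard layout
helm create mychart
# mychart/
#   Chart.yaml    - chart metadata (name, version, dependencies)
#   values.yaml   - default configuration values
#   templates/    - templated Kubernetes manifests
#   charts/       - bundled chart dependencies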

What is OneDev?

OneDev is an all-in-one DevOps platform featuring:

  • Git repository management
  • Issue tracking with custom workflows
  • Built-in CI/CD engine
  • Code search and navigation
  • Pull request support

Install MicroK8s

The --classic flag grants the snap full system access, which is required for MicroK8s to manage containers, networking, and storage.

bash
sudo snap install microk8s --classic

Add user to microk8s group

After installation, add your user to the microk8s group to avoid using sudo for every command. Replace ded with your actual username.

The -a flag appends the user to the group without removing them from other groups. The -G flag specifies the group name.

bash
sudo usermod -a -G microk8s ded
mkdir -p ~/.kube  # ensure the config directory exists before changing ownership
sudo chown -R ded ~/.kube

Important: After adding yourself to the group, either log out and log back in, or run newgrp microk8s to apply the group membership immediately.
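
To confirm the membership took effect, a quick check:

bash
# Apply the new group in the current shell, then confirm membership
newgrp microk8s
groups | grep microk8s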

Verify Installation

Before enabling addons, verify that MicroK8s is running correctly:

bash
# Check MicroK8s version
microk8s version

# Wait for cluster to be ready (blocks until ready)
microk8s status --wait-ready

Expected output:

microk8s is running
high-availability: no
  datastore master nodes: 127.0.0.1:19001
  datastore standby nodes: none

Enable Services (Addons)

MicroK8s uses an addon system to enable additional functionality. Each addon installs and configures specific Kubernetes components.

Addon Reference Table

Addon | Purpose | Resource Usage
dashboard | Web-based Kubernetes UI for cluster management | Low
metallb | Bare-metal load balancer for external IP allocation | Low
ingress | NGINX-based ingress controller for HTTP/HTTPS routing | Medium
storage | Hostpath storage provisioner for PersistentVolumeClaims | Low
dns | CoreDNS for internal cluster DNS resolution | Low
registry | Private container registry (localhost:32000) | Medium
community | Access to community-maintained addons | None
istio | Service mesh for traffic management and security | High

Enable Core Addons

bash
microk8s status --wait-ready
microk8s enable dashboard
microk8s enable metallb
microk8s enable ingress
microk8s enable storage
microk8s enable dns
microk8s enable registry
microk8s enable community
microk8s enable istio

Note on MetalLB: When enabling MetalLB, you'll be prompted to enter an IP address range. Choose a range within your LAN subnet that's not used by DHCP. For example: 192.168.1.240-192.168.1.250
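
To skip the interactive prompt, the range can also be passed directly when enabling the addon:

bash
# Enable MetalLB non-interactively with an explicit address pool
microk8s enable metallb:192.168.1.240-192.168.1.250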

Understanding Each Addon

Dashboard — Provides a web UI for visualizing cluster resources, deploying applications, and troubleshooting.

MetalLB — Essential for bare-metal Kubernetes. In cloud environments, LoadBalancer services get external IPs from the cloud provider. MetalLB provides this functionality on-premises by announcing IPs via ARP (Layer 2) or BGP (Layer 3).

Ingress — Routes external HTTP/HTTPS traffic to internal services based on hostnames and paths. Uses NGINX by default.

Storage — Enables the microk8s-hostpath StorageClass, allowing dynamic provisioning of PersistentVolumes on the node's filesystem.

DNS (CoreDNS) — Provides service discovery within the cluster. Services can be reached via <service>.<namespace>.svc.cluster.local.

Registry — Local container registry accessible at localhost:32000. Useful for development workflows without pushing to external registries.
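
For example, a typical development loop builds and pushes an image to this registry (assumes Docker is installed; myapp is a placeholder image name):

bash
# Build, tag, and push to the local MicroK8s registry
docker build -t localhost:32000/myapp:dev .
docker push localhost:32000/myapp:dev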

Istio — Service mesh providing:

  • Traffic management (load balancing, canary deployments)
  • Security (mTLS, authorization policies)
  • Observability (metrics, tracing, logging)

Monitoring and Troubleshooting

Check Cluster Status

Use these commands to inspect the state of your cluster, diagnose issues, and verify that components are running correctly.

bash
microk8s kubectl get all --all-namespaces
microk8s kubectl get svc
microk8s kubectl get pods -n metallb-system
microk8s kubectl get all -n onedev
microk8s helm list -n onedev
microk8s kubectl get events -n onedev
microk8s kubectl get pods -n onedev
microk8s kubectl describe node

Command Reference

Command | Description
get all --all-namespaces | Lists all resources (pods, services, deployments, etc.) across all namespaces
get svc | Lists services in the default namespace
get pods -n <namespace> | Lists pods in a specific namespace
get events -n <namespace> | Shows recent events (useful for debugging failed deployments)
describe node | Detailed node information including capacity, conditions, and allocated resources
helm list -n <namespace> | Lists Helm releases in a namespace

Additional Diagnostic Commands

bash
# Check cluster component health (note: componentstatuses is deprecated
# in recent Kubernetes releases but still returns basic health info)
microk8s kubectl get componentstatuses

# View logs from a specific pod
microk8s kubectl logs <pod-name> -n <namespace>

# Follow logs in real-time
microk8s kubectl logs -f <pod-name> -n <namespace>

# Get detailed pod information
microk8s kubectl describe pod <pod-name> -n <namespace>

# Check resource usage (requires metrics-server addon)
microk8s enable metrics-server
microk8s kubectl top nodes
microk8s kubectl top pods -n <namespace>

# View cluster events sorted by time
microk8s kubectl get events --sort-by='.lastTimestamp' -A

Install Helm

Helm is installed separately from MicroK8s. While MicroK8s provides a built-in microk8s helm3 command, installing Helm as a standalone tool provides more flexibility.
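
For quick tasks, the bundled client also works without a separate install:

bash
# Use MicroK8s' built-in Helm client
microk8s helm3 version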

Helm Architecture

Helm 3 uses a client-only architecture:

  • Helm CLI — Command-line tool that communicates directly with the Kubernetes API
  • Charts — Packages containing templated Kubernetes manifests
  • Releases — Instances of charts deployed to a cluster
  • Repositories — Servers hosting chart packages
bash
sudo snap install helm --classic
helm repo add onedev https://dl.cloudsmith.io/public/onedev/onedev/helm/charts
helm repo update onedev

Essential Helm Commands

bash
# List all configured repositories
helm repo list

# Search for charts in repositories
helm search repo onedev

# Show chart information
helm show chart onedev/onedev

# Show default values for a chart
helm show values onedev/onedev

# Download a chart locally without installing
helm pull onedev/onedev --untar

Configure kubectl

By default, MicroK8s uses its own microk8s kubectl wrapper. To use the standard kubectl command with your MicroK8s cluster, you need to export the kubeconfig.

What is kubeconfig?

The kubeconfig file (~/.kube/config) contains:

  • Clusters — Kubernetes API server endpoints and CA certificates
  • Users — Authentication credentials (certificates, tokens)
  • Contexts — Mappings between clusters and users
bash
sudo snap install kubectl --classic
sudo microk8s kubectl config view --raw > ~/.kube/config
kubectl get nodes

Security Note: The exported kubeconfig contains cluster admin credentials. Ensure ~/.kube/config has restrictive permissions: chmod 600 ~/.kube/config

Working with Multiple Clusters

If you manage multiple Kubernetes clusters, use contexts:

bash
# View current context
kubectl config current-context

# List all contexts
kubectl config get-contexts

# Switch to a different context
kubectl config use-context <context-name>

# View cluster info
kubectl cluster-info

Access Kubernetes Dashboard

The Kubernetes Dashboard provides a web-based UI for cluster management. By default, it's only accessible from localhost for security.

Dashboard Access Methods

Method 1: Dashboard Proxy (Simple)

This creates a secure tunnel to the dashboard:

bash
microk8s dashboard-proxy

Method 2: Screen Session (Persistent)

For long-running access, use a screen session to keep the proxy running in the background:

bash
screen -S dashboard-proxy

Then run microk8s dashboard-proxy inside the screen session. Detach with Ctrl+A, D.
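
To return to the session later:

bash
# List running screen sessions and reattach to the proxy
screen -ls
screen -r dashboard-proxy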

For more on Screen, see: Using Screen with MicroK8s dashboard proxy example

Method 3: Port Forward (Manual)

bash
# Get the dashboard token
microk8s kubectl describe secret -n kube-system microk8s-dashboard-token

# Forward the port
microk8s kubectl port-forward -n kube-system service/kubernetes-dashboard 10443:443

# Access at: https://localhost:10443

To access the dashboard from other machines on your network, you can create a NodePort service:

bash
# Create NodePort service for dashboard
microk8s kubectl expose deployment kubernetes-dashboard \
  -n kube-system \
  --type=NodePort \
  --port=443 \
  --target-port=8443 \
  --name=dashboard-nodeport

Warning: Exposing the dashboard externally is a security risk. Use VPN or SSH tunneling for remote access instead.
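
For example, an SSH tunnel keeps the dashboard bound to localhost while still allowing remote access (user and <node-ip> are placeholders for your MicroK8s host):

bash
# Tunnel the dashboard port over SSH from your workstation
ssh -L 10443:localhost:10443 user@<node-ip>
# Then browse to https://localhost:10443 locally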


Install OneDev

OneDev is deployed using Helm with automatic namespace creation. The -n onedev flag specifies the target namespace, and --create-namespace creates it if it doesn't exist.

Pre-installation Checklist

Before installing OneDev, ensure:

  • Storage addon is enabled (microk8s enable storage)
  • DNS addon is enabled (microk8s enable dns)
  • Sufficient disk space for Git repositories
  • MetalLB configured (if exposing externally)
bash
helm install onedev onedev/onedev -n onedev --create-namespace

Understanding Helm Install Command

Component | Description
helm install | Command to deploy a new release
onedev (first) | Release name — identifier for this deployment
onedev/onedev | Chart reference — <repo>/<chart> format
-n onedev | Target namespace
--create-namespace | Create namespace if it doesn't exist

Customizing Installation with Values

To customize the installation, create a values.yaml file:

yaml
# values.yaml - OneDev custom configuration
persistence:
  enabled: true
  size: 50Gi
  storageClass: microk8s-hostpath

resources:
  requests:
    memory: "1Gi"
    cpu: "500m"
  limits:
    memory: "4Gi"
    cpu: "2000m"

# MySQL database settings (if using external database)
mysql:
  enabled: false

# Ingress configuration
ingress:
  enabled: true
  className: nginx
  hosts:
    - host: onedev.local
      paths:
        - path: /
          pathType: Prefix

Then install with custom values:

bash
helm install onedev onedev/onedev -n onedev --create-namespace -f values.yaml

Verify Installation

bash
# Watch pods until they're ready
kubectl get pods -n onedev -w

# Check persistent volume claims
kubectl get pvc -n onedev

# View OneDev logs
kubectl logs -l app=onedev -n onedev --tail=100

Expose OneDev to LAN

By default, OneDev uses a ClusterIP service, accessible only within the cluster. To access it from your LAN, upgrade the release to use a LoadBalancer service type.

Service Types Explained

Type | Accessibility | Use Case
ClusterIP | Internal only | Inter-pod communication
NodePort | Node IP:Port | Direct access via node
LoadBalancer | External IP | Production-grade external access
bash
helm upgrade onedev onedev/onedev -n onedev --set service.type=LoadBalancer --reuse-values

Understanding the Upgrade Command

Flag | Description
--set service.type=LoadBalancer | Override specific value
--reuse-values | Keep all previously set values

Verify External IP Assignment

After the upgrade, MetalLB will assign an external IP:

bash
# Check service external IP
kubectl get svc -n onedev

# Expected output:
# NAME     TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)
# onedev   LoadBalancer   10.152.183.XX   192.168.1.240   80:XXXXX/TCP

Alternative: Using NodePort

If MetalLB isn't configured, use NodePort:

bash
helm upgrade onedev onedev/onedev -n onedev --set service.type=NodePort --set service.nodePort=30080 --reuse-values

Access at: http://<node-ip>:30080

Configuring Ingress for Domain Access

For hostname-based access, configure an Ingress resource:

yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: onedev-ingress
  namespace: onedev
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: "100m"
spec:
  ingressClassName: nginx
  rules:
    - host: onedev.yourdomain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: onedev
                port:
                  number: 80
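
To apply the manifest and test hostname access from a workstation, something like the following works (assumes the manifest is saved as onedev-ingress.yaml and 192.168.1.240 is the ingress controller's LAN IP):

bash
kubectl apply -f onedev-ingress.yaml
# Map the hostname locally if you don't control DNS for the domain
echo "192.168.1.240 onedev.yourdomain.com" | sudo tee -a /etc/hosts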

Namespace Management

Namespaces provide logical isolation for Kubernetes resources. They're essential for:

  • Multi-tenancy — Separate teams or projects
  • Resource quotas — Limit CPU/memory per namespace
  • Network policies — Control traffic between namespaces
  • RBAC — Fine-grained access control

Viewing Namespaces

bash
kubectl get namespaces
kubectl get pods --all-namespaces

Default Namespaces

Namespace | Purpose
default | Default namespace for resources without explicit namespace
kube-system | Kubernetes system components (API server, scheduler, etc.)
kube-public | Publicly accessible resources
kube-node-lease | Node heartbeat data for health monitoring

Working with Namespaces

Set the default namespace for kubectl commands:

bash
kubectl config set-context --current --namespace=default

Additional Namespace Operations

bash
# Create a new namespace
kubectl create namespace development

# Delete a namespace (WARNING: deletes all resources within)
kubectl delete namespace development

# View resources in a specific namespace
kubectl get all -n kube-system

# View current namespace context
kubectl config view --minify | grep namespace

Resource Quotas

Limit resource consumption per namespace:

yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: development
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "20"
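
Applying and inspecting the quota (assumes the manifest above is saved as compute-quota.yaml):

bash
kubectl apply -f compute-quota.yaml
# Shows used vs. hard limits for each tracked resource
kubectl describe resourcequota compute-quota -n development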

Export and Backup Configurations

Exporting Kubernetes resources to YAML files is essential for:

  • Backup and disaster recovery
  • Version control — Store configurations in Git
  • Migration — Move resources between clusters
  • Documentation — Reference current state

Export Service Configuration

bash
# Replace mycluster with the name of your service
kubectl get svc mycluster -o yaml > mycluster.yaml

Additional Export Commands

bash
# Export deployment
kubectl get deployment <name> -o yaml > deployment.yaml

# Export all resources in a namespace
kubectl get all -n onedev -o yaml > onedev-backup.yaml

# Export specific resource types
kubectl get configmaps,secrets -n onedev -o yaml > onedev-configs.yaml

# Export without cluster-specific metadata (cleaner for reuse)
kubectl get deployment <name> -o yaml | \
  kubectl neat > deployment-clean.yaml

Note: kubectl neat is a plugin that removes cluster-specific fields. Install with: kubectl krew install neat

Helm Release Backup

bash
# Export Helm release values
helm get values onedev -n onedev > onedev-values.yaml

# Export all Helm release info
helm get all onedev -n onedev > onedev-release.yaml

# Export Helm release manifest
helm get manifest onedev -n onedev > onedev-manifest.yaml
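
These exports make restoring straightforward. A sketch of a restore, assuming the same chart version is still available in the repository:

bash
# Re-create the release from the exported values
helm upgrade --install onedev onedev/onedev -n onedev -f onedev-values.yaml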

Maintenance Commands

MicroK8s Lifecycle

bash
# Stop MicroK8s (preserves data)
microk8s stop

# Start MicroK8s
microk8s start

# Completely reset MicroK8s (WARNING: destroys all data)
microk8s reset

# Uninstall MicroK8s
sudo snap remove microk8s

Helm Release Management

bash
# View release history
helm history onedev -n onedev

# Rollback to previous version
helm rollback onedev -n onedev

# Rollback to specific revision
helm rollback onedev 2 -n onedev

# Uninstall release
helm uninstall onedev -n onedev

Storage Management

bash
# List persistent volumes
kubectl get pv

# List persistent volume claims
kubectl get pvc --all-namespaces

# Describe PVC for troubleshooting
kubectl describe pvc <pvc-name> -n <namespace>

# Delete stuck PVC (after removing finalizers if needed)
kubectl patch pvc <pvc-name> -n <namespace> \
  -p '{"metadata":{"finalizers":null}}'
kubectl delete pvc <pvc-name> -n <namespace>

Troubleshooting Guide

Common Issues and Solutions

Issue | Possible Cause | Solution
Pod stuck in Pending | Insufficient resources | Check node capacity: kubectl describe node
Pod in CrashLoopBackOff | Application error | Check logs: kubectl logs <pod>
PVC stuck in Pending | Storage not available | Enable storage addon: microk8s enable storage
No external IP | MetalLB not configured | Configure MetalLB with IP range
DNS resolution fails | CoreDNS not running | Check: kubectl get pods -n kube-system -l k8s-app=kube-dns

Debug a Running Pod

bash
# Execute shell in pod
kubectl exec -it <pod-name> -n <namespace> -- /bin/sh

# Copy files from/to pod
kubectl cp <pod-name>:/path/to/file ./local-file -n <namespace>
kubectl cp ./local-file <pod-name>:/path/to/file -n <namespace>

# Port forward for debugging
kubectl port-forward pod/<pod-name> 8080:80 -n <namespace>

Network Debugging

bash
# Test DNS resolution from within cluster
kubectl run debug --image=busybox --rm -it --restart=Never -- nslookup kubernetes

# Check endpoints
kubectl get endpoints -n <namespace>

# Verify network policies
kubectl get networkpolicies -n <namespace>

High Availability Setup

For production environments, MicroK8s supports high availability clustering with multiple nodes.

Adding Nodes to Cluster

On the master node, generate a join token:

bash
microk8s add-node

This outputs a command like:

microk8s join 192.168.1.100:25000/abc123xyz...

On the worker node:

bash
# Install MicroK8s
sudo snap install microk8s --classic

# Join the cluster
microk8s join 192.168.1.100:25000/abc123xyz...

Verify Cluster Nodes

bash
# List all nodes in cluster
kubectl get nodes -o wide

# Check node roles
kubectl get nodes --show-labels

HA Considerations

Configuration | Nodes Required | Fault Tolerance
Single node | 1 | None
3-node HA | 3 | 1 node failure
5-node HA | 5 | 2 node failures

Note: For HA datastore, MicroK8s uses Dqlite (distributed SQLite). At least 3 nodes are recommended for production HA.
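
Once three or more nodes have joined, the status output should reflect the change:

bash
microk8s status | grep high-availability
# Expected once HA is active: high-availability: yes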


Security Best Practices

RBAC Configuration

Create a limited service account for CI/CD pipelines:

yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cicd-deployer
  namespace: onedev
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployer-role
  namespace: onedev
rules:
  - apiGroups: ["", "apps", "extensions"]
    resources: ["deployments", "services", "pods", "configmaps", "secrets"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deployer-binding
  namespace: onedev
subjects:
  - kind: ServiceAccount
    name: cicd-deployer
    namespace: onedev
roleRef:
  kind: Role
  name: deployer-role
  apiGroup: rbac.authorization.k8s.io
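
To verify the account's permissions without deploying anything, kubectl's impersonation check is handy:

bash
# Should print "yes" for allowed verbs and "no" otherwise
kubectl auth can-i create deployments -n onedev \
  --as=system:serviceaccount:onedev:cicd-deployer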

Network Policies

Restrict traffic to OneDev namespace:

yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: onedev-network-policy
  namespace: onedev
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: ingress-nginx
      ports:
        - protocol: TCP
          port: 80
  egress:
    - to:
        - namespaceSelector: {}
      ports:
        - protocol: TCP
          port: 443
        - protocol: TCP
          port: 53
        - protocol: UDP
          port: 53
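
Applying and verifying the policy (assumes the manifest above is saved as onedev-netpol.yaml; MicroK8s' default Calico CNI enforces NetworkPolicy resources):

bash
kubectl apply -f onedev-netpol.yaml
kubectl describe networkpolicy onedev-network-policy -n onedev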

External Documentation