Deploying MySQL InnoDB Cluster on K3s and Exposing It to the LAN with a LoadBalancer

How to deploy MySQL InnoDB Cluster on Kubernetes with MetalLB

This guide is based on the official MySQL Operator documentation.

Prerequisite: install K3s and MetalLB first (see the earlier post "K3s with LoadBalancer MetalLB").

Understanding MySQL InnoDB Cluster Architecture

MySQL InnoDB Cluster is a complete high availability solution for MySQL that provides:

  • Group Replication: Automatic data replication across all cluster members using a virtually synchronous replication protocol
  • MySQL Router: Lightweight middleware that provides transparent routing between your application and the MySQL cluster
  • MySQL Shell: Advanced client and code editor for managing the cluster

Key Components

| Component         | Description                                          | Default Port(s)                                |
|-------------------|------------------------------------------------------|------------------------------------------------|
| MySQL Server      | Database instances running Group Replication         | 3306, 33060 (X Protocol)                       |
| MySQL Router      | Connection routing and load balancing                | 6446 (RW), 6447 (RO), 6448 (X RW), 6449 (X RO) |
| Group Replication | Synchronous multi-primary/single-primary replication | 33061                                          |
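
The router port split is the part you will use daily: writes must go to 6446 (routed to the current primary), while reads can target 6447 (load-balanced across secondaries). A quick sanity check from a client pod, once the cluster built later in this guide is running:

```bash
# Read-write port: the router forwards to the current PRIMARY
mysql -h mycluster -P 6446 -u root -p -e "SELECT @@hostname;"

# Read-only port: the router balances across SECONDARY instances,
# so repeated calls should return different hostnames
mysql -h mycluster -P 6447 -u root -p -e "SELECT @@hostname;"
```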

How MySQL Operator Works

The MySQL Operator for Kubernetes manages the full lifecycle of MySQL InnoDB Cluster setups:

  1. Provisioning: Automatically creates StatefulSets, Services, ConfigMaps, and Secrets (you can list these yourself, as shown after this list)
  2. Configuration: Manages Group Replication configuration and MySQL Router setup
  3. Monitoring: Watches cluster health and responds to failures
  4. Scaling: Handles adding/removing instances with data synchronization
  5. Upgrades: Performs rolling upgrades with minimal downtime
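
Once a cluster named mycluster (created below) exists, point 1 is easy to verify, since the operator labels its resources with the same selector used for pods later in this guide:

```bash
# All resources the operator provisioned for the cluster, selected by label
kubectl get statefulsets,services,configmaps,secrets -l mysql.oracle.com/cluster=mycluster
```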

Deploy MySQL Operator

```bash
kubectl apply -f https://raw.githubusercontent.com/mysql/mysql-operator/trunk/deploy/deploy-crds.yaml
kubectl apply -f https://raw.githubusercontent.com/mysql/mysql-operator/trunk/deploy/deploy-operator.yaml
```

The first command installs Custom Resource Definitions (CRDs) that define new Kubernetes resource types:

  • InnoDBCluster - Main cluster resource
  • MySQLBackup - Backup resource
  • ClusterSecretRef - Secret reference for multi-cluster setups

The second command deploys the operator itself, which watches for these custom resources.
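
If something looks off, confirm the CRDs actually registered before debugging further:

```bash
# Each CRD name ends with the mysql.oracle.com API group
kubectl get crds | grep mysql.oracle.com
```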

Verify the deployment

```bash
kubectl get deployment mysql-operator --namespace mysql-operator
```

You should see output similar to:

```plaintext
NAME             READY   UP-TO-DATE   AVAILABLE   AGE
mysql-operator   1/1     1            1           2m
```

Additional Operator Verification Commands

Check operator pods are running:

```bash
kubectl get pods --namespace mysql-operator
```

View operator logs for troubleshooting:

```bash
kubectl logs -f deployment/mysql-operator --namespace mysql-operator
```

Create credentials secret

The secret stores MySQL root credentials used by the cluster. The operator reads this secret during cluster initialization.

```bash
kubectl create secret generic mypwds \
  --from-literal=rootUser=root \
  --from-literal=rootHost=% \
  --from-literal=rootPassword="123"
```

Secret Parameters Explained:

  • rootUser: MySQL root username (typically "root")
  • rootHost: Host pattern for root access (% means any host, use specific IP/hostname for security)
  • rootPassword: Root password (should be complex in production)
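
Before handing the secret to the operator, you can decode a key to confirm what was stored; for example:

```bash
# Prints "root" if the secret was created as above
kubectl get secret mypwds -o jsonpath='{.data.rootUser}' | base64 -d
```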

Alternative: Creating Secret from YAML

For version control and GitOps workflows:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mypwds
type: Opaque
stringData:
  rootUser: root
  rootHost: "%"
  rootPassword: "your-secure-password"
```

Deploy MySQL InnoDB Cluster

```bash
nano mycluster.yaml
```

```yaml
apiVersion: mysql.oracle.com/v2
kind: InnoDBCluster
metadata:
  name: mycluster
spec:
  secretName: mypwds
  tlsUseSelfSigned: true
  instances: 3
  router:
    instances: 1
```

Cluster Specification Explained:

| Field            | Description                                                                    |
|------------------|--------------------------------------------------------------------------------|
| secretName       | Reference to the Kubernetes secret containing root credentials                 |
| tlsUseSelfSigned | When true, the operator generates self-signed certificates for TLS encryption  |
| instances        | Number of MySQL server instances (minimum 3 for fault tolerance)               |
| router.instances | Number of MySQL Router instances for load balancing                            |

Apply the manifest and watch the cluster come up:

```bash
kubectl apply -f mycluster.yaml
kubectl get innodbcluster --watch
```

The cluster goes through several states during initialization:

  • PENDING → Resources being created
  • INITIALIZING → MySQL instances starting
  • ONLINE → Cluster is ready
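
If you want just the state (for scripting or a quick check) rather than the full --watch output, a jsonpath query works; a sketch, assuming the status layout of current operator releases (verify with kubectl explain innodbcluster.status):

```bash
# Assumption: the state is exposed at .status.cluster.status in this operator version
kubectl get innodbcluster mycluster -o jsonpath='{.status.cluster.status}'
```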

Extended Cluster Configuration

For production deployments, consider these additional options:

```yaml
apiVersion: mysql.oracle.com/v2
kind: InnoDBCluster
metadata:
  name: mycluster-production
spec:
  secretName: mypwds
  tlsUseSelfSigned: true
  instances: 3
  version: "9.0.1"                 # Pin a specific MySQL version
  router:
    instances: 2                   # Multiple routers for HA
  datadirVolumeClaimTemplate:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 50Gi
    storageClassName: local-path   # K3s default storage class
  mycnf: |
    [mysqld]
    max_connections=500
    innodb_buffer_pool_size=1G
    innodb_log_file_size=256M
```
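
The template assumes K3s's bundled local-path provisioner; confirm the storage class exists (or swap in your own) before applying:

```bash
# "local-path" should appear, marked (default) on a stock K3s install
kubectl get storageclass
```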

Resource Limits Configuration

Control CPU and memory for MySQL pods:

```yaml
apiVersion: mysql.oracle.com/v2
kind: InnoDBCluster
metadata:
  name: mycluster-resources
spec:
  secretName: mypwds
  tlsUseSelfSigned: true
  instances: 3
  router:
    instances: 1
  podSpec:
    containers:
      - name: mysql
        resources:
          requests:
            cpu: "500m"
            memory: "1Gi"
          limits:
            cpu: "2"
            memory: "4Gi"
```

Expose MySQL InnoDB Cluster to LAN

Don't do this!

```bash
kubectl patch service mycluster -p '{"spec": {"type": "LoadBalancer"}}'
```

Why Direct Exposure is Dangerous

  1. No authentication layer: Anyone on the network can attempt connections
  2. Brute force attacks: Exposed ports are targets for automated attacks
  3. Data exfiltration: Compromised credentials lead to data theft
  4. Compliance violations: Many standards (PCI-DSS, HIPAA) prohibit direct database exposure

Secure Alternatives

Option 1: Internal ClusterIP with Application Proxy

Keep MySQL internal and access through your application:

```bash
# MySQL remains ClusterIP (default)
# Applications in the cluster connect via the service name
mysql -h mycluster -P 6446 -u root -p
```

Option 2: NodePort for Controlled Access

Expose on specific node ports with firewall rules:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mycluster-nodeport
spec:
  type: NodePort
  selector:
    mysql.oracle.com/cluster: mycluster
    component: mysqlrouter
  ports:
    - name: mysql-rw
      port: 6446
      targetPort: 6446
      nodePort: 30446
```
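
A NodePort alone is still reachable by anyone who can reach the node, so pair it with a host firewall on each node. A sketch with ufw, where 192.168.1.0/24 is a placeholder for your trusted LAN:

```bash
# Placeholder CIDR: replace 192.168.1.0/24 with your trusted network
sudo ufw allow from 192.168.1.0/24 to any port 30446 proto tcp
sudo ufw deny 30446/tcp
```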

Option 3: VPN or SSH Tunnel

Access from outside via secure tunnel:

```bash
# SSH tunnel from a local machine. The cluster.local name only resolves if the
# node can reach cluster DNS; otherwise substitute the ClusterIP shown by
# `kubectl get svc mycluster`
ssh -L 3306:mycluster.default.svc.cluster.local:6446 user@k3s-node
```

A follow-up example deploys WordPress backed by this MySQL InnoDB Cluster.

Cluster Management

Connect with MySQL Shell (mysqlsh)

```bash
kubectl run --rm -it myshell --image=container-registry.oracle.com/mysql/community-operator -- mysqlsh
```

Once connected, you can interact with the cluster using MySQL Shell's JavaScript or Python mode:

```javascript
// Connect to the cluster
\connect root@mycluster:6446

// Check cluster status
var cluster = dba.getCluster()
cluster.status()
```

Connect to a specific pod in the cluster

```bash
kubectl get pods
kubectl --namespace default exec -it mycluster-2 -- bash

mysqlsh root@localhost
```

Monitoring and Troubleshooting

Check Cluster Status

View overall cluster health:

```bash
kubectl get innodbcluster
```

Example output:

```plaintext
NAME        STATUS   ONLINE   INSTANCES   ROUTERS   AGE
mycluster   ONLINE   3        3           1         1h
```

View Cluster Events

Monitor events for debugging:

```bash
kubectl describe innodbcluster mycluster
```

Check Individual Pod Status

```bash
kubectl get pods -l mysql.oracle.com/cluster=mycluster
```

View MySQL Logs

```bash
# View logs for a specific MySQL instance
kubectl logs mycluster-0 -c mysql

# Follow logs in real time
kubectl logs -f mycluster-0 -c mysql
```

Check Group Replication Status

Connect to MySQL and run:

```sql
SELECT * FROM performance_schema.replication_group_members;
```

Expected output shows all members with ONLINE state:

```plaintext
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+
| group_replication_applier | 3b3c6a7c-1234-5678-9abc-def012345678 | mycluster-0 | 3306        | ONLINE       | PRIMARY     |
| group_replication_applier | 4c4d7b8d-2345-6789-abcd-ef0123456789 | mycluster-1 | 3306        | ONLINE       | SECONDARY   |
| group_replication_applier | 5d5e8c9e-3456-789a-bcde-f01234567890 | mycluster-2 | 3306        | ONLINE       | SECONDARY   |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+
```

Common Issues and Solutions

| Issue                    | Symptom            | Solution                                                       |
|--------------------------|--------------------|----------------------------------------------------------------|
| Cluster stuck in PENDING | Pods not starting  | Check PVC binding: kubectl get pvc                             |
| Split-brain scenario     | Multiple primaries | Restart affected pods, check network                           |
| High replication lag     | Slow queries       | Check performance_schema.replication_group_member_stats        |
| Router not connecting    | Connection refused | Verify router pods: kubectl get pods -l component=mysqlrouter  |
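
For the replication-lag row, the member stats table shows the queue depths directly; a query sketch (these columns exist in MySQL 8.0 and later):

```sql
-- A steadily growing COUNT_TRANSACTIONS_REMOTE_IN_APPLIER_QUEUE on a member
-- means it receives transactions faster than it can apply them (lag)
SELECT MEMBER_ID,
       COUNT_TRANSACTIONS_IN_QUEUE,
       COUNT_TRANSACTIONS_REMOTE_IN_APPLIER_QUEUE
FROM performance_schema.replication_group_member_stats;
```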

Scaling the Cluster

Add More Instances

Update the instances field:

```bash
kubectl patch innodbcluster mycluster --type=merge -p '{"spec":{"instances":5}}'
```

Scale Router Instances

For higher connection capacity:

```bash
kubectl patch innodbcluster mycluster --type=merge -p '{"spec":{"router":{"instances":3}}}'
```

Backup and Recovery

Create Backup Using MySQL Shell

```bash
# The dump lands inside this ephemeral pod; see the PVC-backed Job below
kubectl run --rm -it backup-shell --image=container-registry.oracle.com/mysql/community-operator -- \
  mysqlsh root@mycluster:6446 -e "util.dumpInstance('/backup')"
```

Backup to Persistent Volume

Create a backup job with PVC:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: mysql-backup
spec:
  template:
    spec:
      containers:
        - name: backup
          image: container-registry.oracle.com/mysql/community-operator
          command:
            - mysqlsh
            - root@mycluster:6446
            - --password=$(MYSQL_ROOT_PASSWORD)
            - -e
            - "util.dumpInstance('/backup', {threads: 4})"
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mypwds
                  key: rootPassword
          volumeMounts:
            - name: backup-storage
              mountPath: /backup
      volumes:
        - name: backup-storage
          persistentVolumeClaim:
            claimName: mysql-backup-pvc
      restartPolicy: Never
```
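
Recovery is the mirror operation via util.loadDump. A minimal sketch, run from any pod that mounts the backup PVC at /backup (for example, a Job like the one above with the command swapped); note that util.loadDump requires local_infile to be enabled on the target server:

```bash
# Run inside a pod that has the backup PVC mounted at /backup.
# util.loadDump needs local_infile=ON on the receiving MySQL server.
mysqlsh root@mycluster:6446 -e "util.loadDump('/backup', {threads: 4})"
```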

Performance Tuning

Key MySQL Parameters for InnoDB Cluster

Add to your cluster's mycnf section:

```ini
[mysqld]
# Connection handling
max_connections = 500
thread_cache_size = 50

# InnoDB settings
innodb_buffer_pool_size = 2G
innodb_buffer_pool_instances = 4
innodb_log_file_size = 512M
innodb_flush_log_at_trx_commit = 1

# Group Replication tuning
group_replication_poll_spin_loops = 0
group_replication_compression_threshold = 1000000
group_replication_flow_control_mode = QUOTA
```
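
You can apply such changes to a running cluster by patching the mycnf field; the operator then rolls the new configuration out to the instances (the exact restart behavior depends on the operator version):

```bash
# Merge-patch the embedded my.cnf; \n encodes the newlines inside the JSON string
kubectl patch innodbcluster mycluster --type=merge \
  -p '{"spec":{"mycnf":"[mysqld]\nmax_connections=500\nthread_cache_size=50\n"}}'
```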

Router Connection Pool Settings

For high-traffic applications:

```yaml
spec:
  router:
    instances: 2
    podSpec:
      containers:
        - name: router
          env:
            - name: MYSQL_ROUTER_max_total_connections
              value: "1024"
```