When I first set up my Kubernetes cluster using OrangePi CM4 boards on the Turing Pi V2, storage was a challenge. Since the OrangePi CM4 didn’t support NVMe or SATA across all four nodes, I resorted to NFS mounts for persistent volume storage. While this worked, it wasn’t the most efficient solution, especially for performance-intensive workloads.
Now that I’ve transitioned to Turing Pi RK1 boards, I’ve installed four 512GB NVMe drives, which significantly improves my storage capabilities. I considered Ceph, but its memory and resource requirements exceeded what I was willing to allocate. Instead, I opted for Longhorn, which meets my needs while striking a good balance between resource usage and performance.
Why Longhorn?
Sure, Longhorn introduces some overhead, but even with that, its performance on NVMe easily surpasses my previous NFS setup. So far, I’m quite happy with the results. Plus, Longhorn integrates well into Kubernetes, offering built-in snapshot and backup features, making it a solid choice for managing my persistent storage needs.
Installing Longhorn with Ansible
I automated my Longhorn deployment using Ansible playbooks. Below is a breakdown of my setup:
1. Install Required Dependencies
Before deploying Longhorn, I ensured my cluster had all necessary dependencies installed. Here’s the playbook for setting them up:
- name: Install Longhorn dependencies on all nodes
  hosts: all
  become: true
  gather_facts: true
  tasks:
    # open-iscsi is required on every node so Longhorn can attach volumes over iSCSI;
    # nfs-common is needed for RWX volumes and NFS backup targets.
    - name: Install apt packages for longhorn
      ansible.builtin.apt:
        name: "{{ item }}"
        state: present
      loop:
        - nfs-common
        - open-iscsi
        - util-linux
    # Longhorn will keep its replica data here (see defaultDataPath in the next playbook).
    - name: Create Storage folder
      ansible.builtin.file:
        path: /storage
        state: directory
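One thing this playbook leaves implicit: Longhorn needs the iscsid daemon actually running on every node, not just the open-iscsi package installed. Debian-family systems usually start it on install, but if you want to be explicit, a task along these lines (a sketch, not something I currently run myself) could be appended to the tasks list above:

    - name: Ensure iscsid is enabled and running
      ansible.builtin.systemd:
        name: iscsid
        enabled: true
        state: started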
2. Deploy Longhorn
Once the dependencies were in place, I used the following Ansible playbook to install Longhorn:
- name: Install Longhorn
  hosts: master
  gather_facts: false
  tasks:
    - name: Add stable chart repo
      kubernetes.core.helm_repository:
        validate_certs: false
        kubeconfig: /etc/rancher/k3s/k3s.yaml
        #force_update: true
        repo_name: longhorn
        repo_url: "https://charts.longhorn.io"
        state: present
    - name: Deploy Longhorn
      kubernetes.core.helm:
        validate_certs: false
        kubeconfig: /etc/rancher/k3s/k3s.yaml
        name: longhorn
        chart_ref: longhorn/longhorn
        create_namespace: true
        release_namespace: longhorn-system
        release_state: present
        purge: true
        force: true
        wait: true
        set_values:
          # Store replica data on the NVMe-backed /storage path created earlier.
          - value: defaultSettings.defaultDataPath="/storage"
            value_type: string
          # Placeholder address; replace with an IP from your LoadBalancer range.
          - value: service.ui.loadBalancerIP="192.168.X.X"
            value_type: string
          - value: service.ui.type="LoadBalancer"
            value_type: string
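Once the release is up, workloads can claim Longhorn-backed volumes through the longhorn StorageClass that the chart marks as the cluster default. As a quick sanity check, a claim looks roughly like this (the name and size are just placeholders):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 5Gi

Longhorn then provisions a replicated volume whose data lives under the /storage path configured above.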
Final Thoughts
This setup allows me to leverage NVMe storage efficiently while maintaining Kubernetes-native storage management. Longhorn’s built-in redundancy and snapshot features add resilience to my cluster, making it a much better solution than my previous NFS-based approach.
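If you want those snapshots taken on a schedule rather than by hand, Longhorn exposes that as a RecurringJob custom resource. A minimal sketch, with the schedule and retention purely as examples:

apiVersion: longhorn.io/v1beta2
kind: RecurringJob
metadata:
  name: daily-snapshot
  namespace: longhorn-system
spec:
  task: snapshot        # "backup" also works once a backup target is configured
  cron: "0 2 * * *"     # every night at 02:00
  groups:
    - default           # applies to every volume in the default group
  retain: 7             # keep the last seven snapshots per volume
  concurrency: 1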
If you’re considering persistent storage solutions for Kubernetes on resource-constrained hardware, I highly recommend giving Longhorn a shot. While Ceph is powerful, Longhorn strikes the right balance of performance, ease of use, and resource efficiency for my needs.
Let me know if you’ve experimented with different storage solutions for Kubernetes clusters—I’d love to hear what worked for you!