Running K3s on My Turing Pi Clusters with Ansible Automation

When managing Kubernetes on resource-constrained hardware like the Turing Pi V2, I’ve found that K3s is a great lightweight alternative to full-blown Kubernetes (K8s). It keeps things simple, removes unnecessary overhead, and works well with my RK1 nodes running Ubuntu Server. Since my cluster doesn’t consist of large, powerful servers, K3s provides the perfect balance of performance and efficiency.

To streamline my setup, I use Ansible to automate the installation and configuration of K3s across my nodes. Below, I’ll walk through my Ansible playbooks for preparing the nodes and deploying K3s.


Prepping Nodes for K3s Installation

Before installing K3s, I make sure my cluster nodes are set up correctly. For my RK1 nodes that means confirming each node is reachable, installing the dependencies that later plays rely on (the python3-* packages back the kubernetes.core modules used during the MetalLB install), and rebooting for a clean start.

Here’s the Ansible playbook I use to prepare my nodes:

---
- hosts: all
  gather_facts: false
  tasks:

    - name: Wait up to 300 seconds for connection
      ansible.builtin.wait_for_connection:
        timeout: 300
      register: connected

    - name: Install apt packages
      become: true
      ansible.builtin.apt:
        name: "{{ item }}"
        state: present
      loop:
        - git
        - python3-kubernetes
        - python3-yaml
        - python3-jsonpatch

    - name: Reboot the machine
      ansible.builtin.reboot:
      become: true
      when: connected is success
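These plays assume an inventory with master and workers groups, which the K3s playbook below also targets. A minimal sketch (the hostnames and addresses are placeholders, not my real values):

[master]
rk1-node1 ansible_host=192.168.X.X

[workers]
rk1-node2 ansible_host=192.168.X.X
rk1-node3 ansible_host=192.168.X.X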

Installing K3s on the Cluster

Once the nodes are prepped, I deploy K3s using another Ansible playbook. This playbook installs K3s on both the control plane and worker nodes, ensuring the cluster is properly configured.

By default, K3s ships with its built-in “Klipper” load balancer (servicelb), which uses iptables NAT rules to forward traffic. It works, but it isn’t always reliable for external network traffic, which is why my install command disables it. MetalLB provides a better alternative because:

Works with Bare Metal – No need for cloud-based load balancer integrations.
Uses Layer 2 or BGP – Provides direct IP assignments without NAT issues.
More Reliable – Unlike Klipper, MetalLB integrates with your network properly.

Since my cluster runs on physical hardware without cloud integrations, MetalLB is the best choice for handling LoadBalancer services.

📌 Note: {{ k3s_token }} is a variable stored in my Ansible Vault, containing the token/password used for K3s authentication.
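If you don’t already have a vaulted token, ansible-vault encrypt_string can generate the encrypted value, which you then paste into a vars file. A quick sketch (the token value here is made up):

ansible-vault encrypt_string 'choose-a-strong-token' --name 'k3s_token'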

---
- name: Install on master from script
  hosts: master
  become: true
  tasks:
    - name: Populate service facts
      ansible.builtin.service_facts:

    - name: Install K3s on master
      ansible.builtin.shell: >
        curl -sfL https://get.k3s.io | sh -s -
        --write-kubeconfig-mode 644
        --disable servicelb
        --token {{ k3s_token }}
        --disable-cloud-controller
        --bind-address {{ ansible_default_ipv4.address }}
        --tls-san "FQDNofMasterNode,192.168.X.X"
      register: k3s_master_install
      when: ansible_facts.services['k3s.service'] is not defined

- name: Install on worker from script
  hosts: workers
  become: true
  vars:
    k3s_url: "https://{{ hostvars['FQDNofMasterNode']['ansible_default_ipv4']['address'] }}:6443"
  tasks:
    - name: Populate service facts
      ansible.builtin.service_facts:

    - name: Install K3s on workers
      ansible.builtin.shell: curl -sfL https://get.k3s.io | K3S_URL={{ k3s_url }} K3S_TOKEN={{ k3s_token }} sh -
      when: ansible_facts.services['k3s-agent.service'] is not defined

- name: Label K3s workers
  hosts: master
  become: true
  gather_facts: false
  tasks:
    - name: Label K3s nodes as workers
      ansible.builtin.shell: kubectl label node {{ hostvars[item].ansible_nodename }} node-role.kubernetes.io/worker=worker
      with_items: "{{ groups['workers'] }}"

- name: Install Helm
  hosts: master
  become: true
  gather_facts: false
  tasks:
    - name: Install Helm
      ansible.builtin.shell: |
        cd ~
        mkdir helm
        cd helm
        curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
        chmod 700 get_helm.sh
        ./get_helm.sh
      when: k3s_master_install.changed

- name: Install MetalLB
  hosts: master
  become: true
  gather_facts: false
  tasks:
    - name: Add stable chart repo
      kubernetes.core.helm_repository:
        validate_certs: false
        kubeconfig: /etc/rancher/k3s/k3s.yaml
        #force_update: true
        repo_name: metallb
        repo_url: "https://metallb.github.io/metallb"
        state: present
      register: helm_repo_install
      when: k3s_master_install.changed

    - name: Deploy MetalLB
      kubernetes.core.helm:
        validate_certs: false
        kubeconfig: /etc/rancher/k3s/k3s.yaml
        name: metallb
        chart_ref: metallb/metallb
        release_namespace: metallb-system
        create_namespace: true
        release_state: present
        purge: true
        force: true
        wait: true
      when:
        - k3s_master_install.changed
        - helm_repo_install is defined

    - name: Set MetalLB config
      kubernetes.core.k8s:
        validate_certs: false
        kubeconfig: /etc/rancher/k3s/k3s.yaml
        state: present
        definition:
          apiVersion: metallb.io/v1beta1
          kind: IPAddressPool
          metadata:
            name: first-pool
            namespace: metallb-system
          spec:
            addresses:
              - 192.168.X.X-192.168.X.X # ip range
      register: metallb_config
      when:
        - k3s_master_install.changed
        - helm_repo_install is defined

    - name: Set MetalLB L2Advertisement
      kubernetes.core.k8s:
        validate_certs: false
        kubeconfig: /etc/rancher/k3s/k3s.yaml
        state: present
        definition:
          apiVersion: metallb.io/v1beta1
          kind: L2Advertisement
          metadata:
            name: first-pool
            namespace: metallb-system
          spec:
            ipAddressPools:
              - first-pool
      when:
        - k3s_master_install.changed
        - metallb_config is defined
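With the pool and L2Advertisement in place, any Service of type LoadBalancer gets an address from first-pool. A minimal manifest for testing the setup (the name, selector, and ports are made up for illustration):

apiVersion: v1
kind: Service
metadata:
  name: demo-lb
spec:
  type: LoadBalancer
  selector:
    app: demo
  ports:
    - port: 80
      targetPort: 8080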

Why K3s?

I’ve stuck with K3s on my Turing Pi clusters because:

Lightweight: Strips away unnecessary components from traditional Kubernetes.
Lower Overhead: Perfect for clusters with limited resources.
Easy to Install & Manage: Requires minimal configuration compared to full K8s.
Runs on Ubuntu Server: Works seamlessly with my setup.
Fully Automated via Ansible: Ensures consistency across deployments.

With this Ansible-driven approach, I can quickly deploy or rebuild my K3s cluster with minimal effort.
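In practice a full rebuild is roughly two commands (the inventory and playbook filenames are placeholders for whatever you call yours):

ansible-playbook -i inventory.ini prep-nodes.yml
ansible-playbook -i inventory.ini install-k3s.yml --ask-vault-pass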


Final Thoughts

For anyone running Kubernetes on Turing Pi or other small-scale ARM clusters, I highly recommend K3s.

If you’re running K3s on low-power hardware, I’d love to hear about your experience! What solutions have you found work best for your setup? Drop a comment and let’s chat! 🚀
