Create Kubernetes cluster using kubeadm and containerd with Ansible
In this article we will create a Kubernetes cluster with master and worker machines using Ansible. We will use containerd as the container runtime and flannel as the CNI plugin.
Prerequisites
- Ansible
- At least one host for kubernetes masters and one host for kubernetes workers
We will use Ubuntu machines and, in this example, name our kubernetes master hosts kmasters and our worker hosts kworkers.
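For reference, an inventory along these lines would match the host names used in this article (the names and addresses below are placeholders, adjust them to your environment):
[kmasters]
kmaster-01 ansible_host=192.168.1.10

[kworkers]
kworker-01 ansible_host=192.168.1.21
kworker-02 ansible_host=192.168.1.22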
Some basic packages are expected to be installed on all hosts; you can also install them with a task:
- name: install apt packages
  apt:
    name:
      - apt-transport-https
      - ca-certificates
      - curl
      - python3-pip
    update_cache: yes
Install containerd
The following tasks will run on all the hosts as root:
- hosts: kmasters, kworkers
  become: yes
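The tasks in the rest of the article are split across three plays; roughly, the playbook could be laid out like this (the file name and comments are only illustrative):
# site.yml (illustrative name)
- hosts: kmasters, kworkers
  become: yes
  tasks: []        # "Install containerd" and "Install kubernetes" tasks go here

- hosts: kmasters
  remote_user: kube
  tasks: []        # "Configure master nodes" tasks go here

- hosts: kworkers
  become: yes
  tasks: []        # "Configure worker nodes" tasks go here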
Load the required kernel modules and install containerd:
- name: add modules required by containerd
  modprobe:
    name: "{{ item }}"
    state: present
    persistent: present
  loop:
    - overlay
    - br_netfilter

- name: install containerd
  apt:
    name: containerd
Configure and enable containerd:
- name: create containerd directory
  file:
    path: /etc/containerd
    state: directory

- name: create containerd config
  shell: containerd config default > /etc/containerd/config.toml
  notify:
    - restart containerd

- name: use systemd as cgroup driver
  lineinfile:
    dest: /etc/containerd/config.toml
    regexp: '(\s+SystemdCgroup) = false'
    line: '\1 = true'
    backrefs: yes
    state: present
  notify:
    - restart containerd

- name: enable containerd
  service:
    name: containerd
    enabled: yes
When the configuration changes, the restart containerd handler, defined under the play's handlers: section, will be triggered:
- name: restart containerd
  service:
    name: containerd
    state: restarted
Install kubernetes
Like the installation of containerd, the installation of kubernetes will run on all the hosts as root.
Configure networking and disable swap:
- name: configure kubernetes networking
  sysctl:
    sysctl_file: /etc/sysctl.d/99-kubernetes-cri.conf
    name: "{{ item.name }}"
    value: "{{ item.value }}"
  loop:
    - { name: 'net.ipv4.ip_forward', value: '1' }
    - { name: 'net.bridge.bridge-nf-call-iptables', value: '1' }
    - { name: 'net.bridge.bridge-nf-call-ip6tables', value: '1' }
- name: disable swap
  command: swapoff -a

- name: permanently disable swap
  lineinfile:
    dest: /etc/fstab
    regexp: '^(/swap.*)'
    line: '# \1'
    backrefs: yes
    state: present
For simplicity we will disable the firewall, but we could instead allow only the required ports (at least TCP ports 4149, 6443, 10250, 10255 and 10256); a sketch of that alternative follows the next task:
- name: disable ufw
  ufw:
    state: disabled
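If you prefer to keep ufw enabled, a minimal sketch of the alternative, using the same ufw module and the port list assumed above, could be:
- name: allow kubernetes ports
  ufw:
    rule: allow
    port: "{{ item }}"
    proto: tcp
  loop:
    - "4149"
    - "6443"
    - "10250"
    - "10255"
    - "10256"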
Install kubernetes:
- name: add kubernetes apt key
  apt_key:
    url: https://packages.cloud.google.com/apt/doc/apt-key.gpg
    state: present

- name: add kubernetes repository
  apt_repository:
    repo: "deb http://apt.kubernetes.io/ kubernetes-xenial main"

- name: install kubernetes
  apt:
    name:
      - kubeadm
      - kubelet
      - kubectl
      - kubernetes-cni
    update_cache: yes
    state: latest

- name: enable kubelet
  service:
    name: kubelet
    enabled: yes
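Note that the Google-hosted apt.kubernetes.io repository and its key have been deprecated in favor of the community-owned pkgs.k8s.io repositories, which are published per minor release. If the tasks above can no longer find the packages, something along these lines should work instead (the v1.30 path is only an example, pick the minor version you need):
- name: add kubernetes apt key (pkgs.k8s.io)
  apt_key:
    url: https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key
    state: present

- name: add kubernetes repository (pkgs.k8s.io)
  apt_repository:
    repo: "deb https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /"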
If we are not running the playbook with the kube user as our ansible_user, create it with sudo permissions and no password required:
- name: create the kube user
  user:
    name: kube
    shell: /bin/bash
    generate_ssh_key: yes

- name: add kube to sudoers with no password
  community.general.sudoers:
    name: kube
    user: kube
    commands: ALL
    runas: ALL
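The sudoers task above is roughly equivalent to dropping a file like /etc/sudoers.d/kube containing a line along these lines:
# /etc/sudoers.d/kube (approximate result of the task above)
kube ALL=(ALL) NOPASSWD: ALL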
We can also add our SSH key to the authorized keys for kube if we don’t want to create a new one:
- name: authorize current ssh user to connect as kube
  authorized_key:
    user: kube
    key: "{{ lookup('file', lookup('env','HOME') + '/.ssh/id_rsa.pub') }}"
  ignore_errors: yes
Configure master nodes
The following tasks will run on the kubernetes master hosts as the kube user:
- hosts: kmasters
  remote_user: kube
To make the playbook idempotent, we check if we already initialized kubernetes:
- stat:
    path: .kube/config
  register: kube_config

- set_fact:
    kube_initialized: "{{ kube_config.stat.exists }}"
Running the playbook multiple times won’t recreate the cluster on machines where it is already configured. To recreate a cluster, we need to run sudo kubeadm reset and remove the ~/.kube configuration folder.
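For reference, resetting a master manually boils down to something like this (the -f flag skips kubeadm's confirmation prompt):
sudo kubeadm reset -f
rm -rf ~/.kube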
Create the .kube directory and stop kubelet prior to the installation:
- name: create .kube directory
  file:
    path: .kube
    state: directory
    mode: 0755
    recurse: yes
  when: not kube_initialized

- name: stop kubelet
  service:
    name: kubelet
    state: stopped
  when: not kube_initialized
  become: yes
Initialize the cluster, configure kubernetes, and install CNI (flannel). Note that if any of the tasks fail, the configuration will be removed (if created) and the cluster reset (if initialized). Hence, these tasks won’t be skipped in the next playbook run:
- block:
    - name: init kubernetes cluster
      shell: kubeadm init --pod-network-cidr=10.244.0.0/16 --cri-socket /run/containerd/containerd.sock
      become: yes

    - name: copy kubernetes conf
      copy:
        src: /etc/kubernetes/admin.conf
        dest: .kube/config
        remote_src: yes
        owner: "{{ ansible_user }}"
      become: yes

    - name: install pod network (flannel)
      shell: kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

  rescue:
    - name: remove kube config on error
      file:
        name: .kube/config
        state: absent

    - name: reset kubernetes on error
      shell: kubeadm reset -f
      become: yes
      ignore_errors: yes

  when: not kube_initialized
Configure flannel:
- name: create flannel run dir
  file:
    path: /run/flannel/
    state: directory
  become: yes

- name: setup flannel
  copy:
    dest: /run/flannel/subnet.env
    content: |
      FLANNEL_NETWORK=10.244.0.0/16
      FLANNEL_SUBNET=10.244.0.1/24
      FLANNEL_MTU=1450
      FLANNEL_IPMASQ=true
  become: yes
Download the join command to the local machine to use in the kubernetes worker tasks:
- name: get join command
  shell: kubeadm token create --print-join-command > /tmp/join_kubernetes.sh

- name: download join command
  fetch:
    src: /tmp/join_kubernetes.sh
    dest: /tmp/
    flat: yes
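The fetched file contains a single kubeadm join command, along the lines of the following (address, token and hash below are made up):
kubeadm join 192.168.1.10:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:<64-character-hash>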
Configure worker nodes
The following tasks will run on the kubernetes worker hosts as root:
- hosts: kworkers
  become: yes
To make the following tasks idempotent, check whether the node has already joined by querying the kubelet health endpoint:
- shell: "curl http://localhost:10248/healthz || true"
  register: health

- set_fact:
    kube_joined: "{{ health.stdout == 'ok' }}"
Upload and run the join command:
- name: upload join command
  copy:
    src: /tmp/join_kubernetes.sh
    dest: /tmp/join_kubernetes.sh
    mode: 0777
  when: not kube_joined

- name: join kubernetes cluster
  command: sh /tmp/join_kubernetes.sh
  when: not kube_joined
From a master node, we can check the node status with kubectl:
kubectl get nodes
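Once all the workers have joined, the output should look roughly like this (names, ages and versions are illustrative):
NAME         STATUS   ROLES           AGE   VERSION
kmaster-01   Ready    control-plane   15m   v1.30.2
kworker-01   Ready    <none>          5m    v1.30.2
kworker-02   Ready    <none>          5m    v1.30.2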