Kubernetes on MacBook Pro (Apple Silicon series)
December 2023
This article is a step-by-step walkthrough on how to install a Kubernetes cluster on a MacBook Pro laptop based on the Apple Silicon series of chips.
The installation is based on the kubeadm utility and is a simplification of the steps in the Kubernetes documentation.
Pre-requisites
- MacBook laptop (Apple Silicon series) with a minimum of 16 GB RAM (recommended)
- Virtualization tool to provision Linux VMs. VirtualBox currently does not have good support for Apple Silicon Macs, so we will use Canonical Multipass. Follow the installation instructions for macOS and verify that you are able to launch a sample Ubuntu instance. Clean up the instance after verification.
- Your account on your Mac must have admin privileges and be able to use sudo.
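As an illustration, if you use Homebrew, the Multipass installation and a quick verification could look like the following sketch (the instance name verify-test is just an example):
brew install --cask multipass
# Launch a throwaway Ubuntu LTS instance to verify the installation
multipass launch --name verify-test
multipass info verify-test
# Clean up the test instance once verified
multipass delete verify-test
multipass purge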
Provision the VMs
We will create 3 VMs for our setup as follows:
- kubemaster - The controlplane node
- kubeworker01 - The first worker node
- kubeworker02 - The second worker node
Each VM will have the following configuration (you can edit it as per your host machine capacity):
- Disk space: 10G
- Memory: 3G
- CPUs: 2
In Multipass, by default, the IP address allocated to a VM is subject to change after a reboot of the VM. If IP addresses change across reboots, the Kubernetes cluster will break. It is therefore imperative that the VMs are provisioned with static IP addresses, as documented here.
Provisioning the controlplane instance (kubemaster)
multipass launch --disk 10G --memory 3G --cpus 2 --name kubemaster --network name=en0,mode=manual,mac="52:54:00:4b:ab:cd" jammy
The values to the --network option need to be passed carefully:
- name=en0 - The name of the Wi-Fi interface on your host machine. A list of possible values can be obtained with the multipass networks command.
- mac="52:54:00:4b:ab:cd" - A unique and random MAC address that will be allocated to the instance.
multipass exec -n kubemaster -- sudo bash -c 'cat << EOF > /etc/netplan/10-custom.yaml
network:
  version: 2
  ethernets:
    extra0:
      dhcp4: no
      match:
        macaddress: "52:54:00:4b:ab:cd" (1)
      addresses: [192.168.68.101/24] (2)
EOF'
(1) The exact MAC address chosen in the multipass launch command.
(2) The static IP address that will be allocated to this VM. It should be in the same subnet as the original IP address of the instance, which can be found with the multipass info kubemaster command.
multipass exec -n kubemaster -- sudo netplan apply
multipass info kubemaster
The command above should show two IPs, the second of which is the one we just configured
ping 192.168.68.2 (1)
ping 192.168.68.101 (2)
multipass exec -n kubemaster -- ping 192.168.68.5 (3)
(1) Ping from the host to the original IP (192.168.68.2) of the instance.
(2) Ping from the host to the static IP (192.168.68.101) of the instance.
(3) Ping from the instance to the host IP (192.168.68.5).
Provisioning the first worker node (kubeworker01)
The MAC address and static IP address chosen must be different from the ones allocated to the controlplane instance.
multipass launch --disk 10G --memory 3G --cpus 2 --name kubeworker01 --network name=en0,mode=manual,mac="52:54:00:4b:ba:dc" jammy
multipass exec -n kubeworker01 -- sudo bash -c 'cat << EOF > /etc/netplan/10-custom.yaml
network:
  version: 2
  ethernets:
    extra0:
      dhcp4: no
      match:
        macaddress: "52:54:00:4b:ba:dc"
      addresses: [192.168.68.102/24]
EOF'
multipass exec -n kubeworker01 -- sudo netplan apply
Test using ping as done for the controlplane. Additionally, test that ping from kubemaster to kubeworker01 and vice versa is working.
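For example, the cross-VM check could look like this (assuming the static IPs chosen above):
multipass exec -n kubemaster -- ping -c 3 192.168.68.102
multipass exec -n kubeworker01 -- ping -c 3 192.168.68.101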
Provisioning the second worker node (kubeworker02)
The MAC address and static IP address chosen must be different from the ones allocated to the controlplane and first worker instances.
multipass launch --disk 10G --memory 3G --cpus 2 --name kubeworker02 --network name=en0,mode=manual,mac="52:54:00:4b:cd:ab" jammy
multipass exec -n kubeworker02 -- sudo bash -c 'cat << EOF > /etc/netplan/10-custom.yaml
network:
  version: 2
  ethernets:
    extra0:
      dhcp4: no
      match:
        macaddress: "52:54:00:4b:cd:ab"
      addresses: [192.168.68.103/24]
EOF'
multipass exec -n kubeworker02 -- sudo netplan apply
Test using ping as done for the controlplane. Additionally, test that all 3 VMs are able to ping each other successfully on their static IPs.
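A small loop on the host, such as the sketch below, can run this full-mesh check in one go (assuming the static IPs configured above):
for node in kubemaster kubeworker01 kubeworker02; do
  for ip in 192.168.68.101 192.168.68.102 192.168.68.103; do
    echo "--- $node -> $ip ---"
    multipass exec -n "$node" -- ping -c 2 "$ip"
  done
done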
Configure the local DNS
Connect to each of the 3 VMs using the multipass shell command:
multipass shell kubemaster
multipass shell kubeworker01
multipass shell kubeworker02
On each VM, add the following entries to the /etc/hosts file:
#<static IP> <hostname>
192.168.68.101 kubemaster
192.168.68.102 kubeworker01
192.168.68.103 kubeworker02
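If you prefer not to edit each file by hand, one possible way to append these entries from the host is a loop over multipass exec (a sketch, assuming the hostnames and static IPs above):
for node in kubemaster kubeworker01 kubeworker02; do
  multipass exec -n "$node" -- sudo bash -c 'cat << EOF >> /etc/hosts
192.168.68.101 kubemaster
192.168.68.102 kubeworker01
192.168.68.103 kubeworker02
EOF'
done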
Install Kubernetes
Now that we have all the VMs up and running, it is time to proceed with the Kubernetes installation.
Versions
The below versions were used in this lab.
Software / Package | Version | Location
---|---|---
containerd | 1.7.9 | GitHub release
runc | 1.1.10 | GitHub release
CNI plugin | 1.3.0 | GitHub release
kubeadm | 1.28.4 | apt-get
kubelet | 1.28.4 | apt-get
kubectl | 1.28.4 | apt-get
Any commands mentioned below need to be executed inside the terminal of the VMs. Connect to each VM using the multipass shell command:
multipass shell kubemaster
multipass shell kubeworker01
multipass shell kubeworker02
Install and configure prerequisites
Forwarding IPv4 and letting iptables see bridged traffic
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
# sysctl params required by setup, params persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
# Apply sysctl params without reboot
sudo sysctl --system
# Verify that the br_netfilter, overlay modules are loaded by running the following commands:
lsmod | grep br_netfilter
lsmod | grep overlay
#Verify that the net.bridge.bridge-nf-call-iptables, net.bridge.bridge-nf-call-ip6tables, and net.ipv4.ip_forward system variables are set to 1 in your sysctl config by running the following command:
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
For all the packages to be installed, ensure you use the arm64 variant only.
Install a Container Runtime
You need to install a container runtime into each node in the cluster so that Pods can run there.
Step 1: Install containerd
curl -LO https://github.com/containerd/containerd/releases/download/v1.7.9/containerd-1.7.9-linux-arm64.tar.gz
sudo tar Cxzvf /usr/local containerd-1.7.9-linux-arm64.tar.gz
curl -LO https://raw.githubusercontent.com/containerd/containerd/main/containerd.service
sudo mkdir -p /usr/local/lib/systemd/system/
sudo mv containerd.service /usr/local/lib/systemd/system/
sudo mkdir -p /etc/containerd/
sudo containerd config default | sudo tee /etc/containerd/config.toml > /dev/null
sudo sed -i 's/SystemdCgroup \= false/SystemdCgroup \= true/g' /etc/containerd/config.toml
sudo systemctl daemon-reload
sudo systemctl enable --now containerd
#Check that containerd service is up and running
systemctl status containerd
Step 2: Install runc
curl -LO https://github.com/opencontainers/runc/releases/download/v1.1.10/runc.arm64
sudo install -m 755 runc.arm64 /usr/local/sbin/runc
Step 3: Install CNI plugins
curl -LO https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-arm64-v1.3.0.tgz
sudo mkdir -p /opt/cni/bin
sudo tar Cxzvf /opt/cni/bin cni-plugins-linux-arm64-v1.3.0.tgz
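As an optional sanity check, you can confirm the versions and the presence of the CNI binaries before moving on:
containerd --version
runc --version
ls /opt/cni/bin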
Install kubeadm, kubelet and kubectl
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
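Optionally, verify that the installed versions match the table above:
kubeadm version
kubelet --version
kubectl version --client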
Configure crictl to work with containerd
sudo crictl config runtime-endpoint unix:///var/run/containerd/containerd.sock
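A quick optional check to confirm that crictl can now talk to containerd:
sudo crictl version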
Initializing the controlplane node
Commands for initializing the controlplane node should be executed on kubemaster only.
sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.68.101
The --apiserver-advertise-address must be the exact value of the static IP allocated to kubemaster.
If the command runs successfully, you should see the message Your Kubernetes control-plane has initialized successfully!
Save the entire kubeadm join command that is printed in the output. It will be used when the worker nodes are ready to be connected to the cluster.
Make kubectl work for your non-root user. Execute the below commands on kubemaster:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Verify that you are able to reach the cluster through kubectl
kubectl get pods -n kube-system
The coredns pods will not be Ready at this stage. This is expected, as we have not deployed the Pod network add-on yet.
Install a Pod network add-on
You must deploy a Container Network Interface (CNI) based Pod network add-on so that your Pods can communicate with each other. Cluster DNS (CoreDNS) will not start up before a network is installed.
A list of all compatible Pod network add-ons can be found here
In this lab, we will use Weave Net
kubectl apply -f https://reweave.azurewebsites.net/k8s/v1.28/net.yaml
It will take up to a minute for the weave pod to be ready
At this point the controlplane node should be ready, with all pods in the kube-system namespace up and running. Please validate this to confirm the sanity of the controlplane.
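One way to validate this is to watch the kube-system pods until all of them report Running and Ready (press Ctrl-C to stop the watch):
kubectl get pods -n kube-system --watch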
Join the worker nodes to the cluster
Connect to each worker node and run the entire kubeadm join command that was copied earlier from the output of the kubeadm init command.
kubeadm join 192.168.68.101:6443 --token zkxg..... \
--discovery-token-ca-cert-hash sha256:43c947a.....
If you missed making a note of the kubeadm join command earlier, you can generate a new one by running the below command on the controlplane and use it instead.
kubeadm token create --print-join-command
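To inspect the bootstrap tokens that currently exist on the cluster, the following can also be run on kubemaster (optional):
sudo kubeadm token list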
After a few seconds, check from the controlplane that all nodes have joined the cluster and are in a Ready state.
kubectl get nodes
Validation
Validate that the Kubernetes setup is working correctly by deploying an nginx pod on the cluster.
kubectl run test-nginx --image=nginx
Once the pod is in a Ready state, it's time to say: Congratulations! You have just built a fully functioning 3-node Kubernetes cluster on your MacBook Pro (Apple Silicon series).
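To see which worker node the pod was scheduled on, and to remove the test pod afterwards, you can run the following from kubemaster:
kubectl get pod test-nginx -o wide
# Optional cleanup once you have verified the pod is Running
kubectl delete pod test-nginx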