A Beginner's Guide: Setting up a Kubernetes Cluster using Vagrant VM on Your Local Machine
Introduction
In the realm of modern software development and deployment, Kubernetes has emerged as a powerful tool for managing containerized applications. It offers scalability, resilience, and ease of management. However, setting up a Kubernetes cluster for experimentation or development purposes can often be a daunting task, especially for beginners.
Fortunately, there's a solution: Vagrant. Vagrant simplifies the process of creating and managing virtual machines (VMs) and, combined with Kubernetes, allows you to set up a local development environment with ease. In this guide, we'll walk through the steps to set up a Kubernetes cluster using Vagrant VMs on your local machine.
Getting Started with Vagrant and Kubernetes
Prerequisites
Vagrant: Download and install Vagrant from its official website.
VirtualBox or VMware: Choose a hypervisor for your VMs. VirtualBox is a common choice and can be downloaded from the VirtualBox website.
Setting up the Vagrant VMs
Step 1: Initialize Vagrant in a New Directory
Create a new directory for your project and navigate to it in your terminal or command prompt. Initialize Vagrant by executing the following command:
vagrant init
Step 2: Configure Vagrantfile for Kubernetes Nodes
Edit the Vagrantfile generated in the directory and define the configuration for the Kubernetes nodes. Below is a sample configuration for a three-node cluster (one controlplane node and two workers):
# -*- mode: ruby -*-
# vi: set ft=ruby :
# Define the number of master and worker nodes
# If this number is changed, remember to update setup-hosts.sh script with the new hosts IP details in /etc/hosts of each VM.
NUM_WORKER_NODE = 2
IP_NW = "192.168.56."
MASTER_IP_START = 11
NODE_IP_START = 20
# Sets up hosts file and DNS
def setup_dns(node)
# Set up /etc/hosts
node.vm.provision "setup-hosts", :type => "shell", :path => "config/setup-hosts.sh" do |s|
s.args = ["enp0s8", node.vm.hostname]
end
# Set up DNS resolution
node.vm.provision "setup-dns", type: "shell", :path => "config/update-dns.sh"
end
# Runs provisioning steps that are required by masters and workers
def provision_kubernetes_node(node)
# Set up DNS
setup_dns node
# Set up ssh
node.vm.provision "setup-ssh", :type => "shell", :path => "config/ssh.sh"
end
# All Vagrant configuration is done below. The "2" in Vagrant.configure
# configures the configuration version (we support older styles for
# backwards compatibility). Please don't change it unless you know what
# you're doing.
Vagrant.configure("2") do |config|
# The most common configuration options are documented and commented below.
# For a complete reference, please see the online documentation at
# https://docs.vagrantup.com.
# Every Vagrant development environment requires a box. You can search for
# boxes at https://vagrantcloud.com/search.
# config.vm.box = "base"
config.vm.box = "ubuntu/jammy64"
# config.vm.box = "spox/ubuntu-arm"
config.vm.boot_timeout = 900
# Disable automatic box update checking. If you disable this, then
# boxes will only be checked for updates when the user runs
# `vagrant box outdated`. This is not recommended.
config.vm.box_check_update = false
# Provision the control plane node
config.vm.define "controlplane" do |node|
# Name shown in the GUI
node.vm.provider "virtualbox" do |vb|
vb.name = "controlplane"
vb.memory = 4096
vb.cpus = 2
end
node.vm.hostname = "controlplane"
# node.vm.disk :disk, size: "10GB"
node.vm.network :private_network, ip: IP_NW + "#{MASTER_IP_START}"
node.vm.network "forwarded_port", guest: 22, host: 2710
provision_kubernetes_node node
# Install (opinionated) configs for vim and tmux on the controlplane. These are used by the author for the CKA exam.
node.vm.provision "file", source: "./config/tmux.conf", destination: "$HOME/.tmux.conf"
node.vm.provision "file", source: "./config/vimrc", destination: "$HOME/.vimrc"
end
# Provision Worker Nodes
(1..NUM_WORKER_NODE).each do |i|
config.vm.define "node0#{i}" do |node|
node.vm.provider "virtualbox" do |vb|
vb.name = "node0#{i}"
vb.memory = 2048
vb.cpus = 2
end
node.vm.hostname = "node0#{i}"
# node.vm.disk :disk, size: "10GB"
node.vm.network :private_network, ip: IP_NW + "#{NODE_IP_START + i}"
node.vm.network "forwarded_port", guest: 22, host: 2720 + i
provision_kubernetes_node node
end
end
end
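The addressing scheme defined by IP_NW, MASTER_IP_START, and NODE_IP_START can be previewed with a short shell sketch that mirrors the Ruby arithmetic above, as a quick sanity check of which IP each VM will receive:

```shell
# Mirrors the Vagrantfile's IP arithmetic: controlplane at .11, workers at .21 and .22
IP_NW="192.168.56."
MASTER_IP_START=11
NODE_IP_START=20
NUM_WORKER_NODE=2

echo "controlplane ${IP_NW}${MASTER_IP_START}"
for i in $(seq 1 "$NUM_WORKER_NODE"); do
  echo "node0${i} ${IP_NW}$((NODE_IP_START + i))"
done
```

These are the same addresses hard-coded into the hosts setup script, so if you change the node count, remember that both places must agree.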
Step 3: Creating configuration files for the VMs
To configure the nodes, create a config directory alongside the Vagrantfile and add the following configuration files (the filenames match the paths referenced by the Vagrantfile).
config/setup-hosts.sh
#!/bin/bash
#
# Set up /etc/hosts so we can resolve all the machines in the VirtualBox network
set -ex
IFNAME=$1
THISHOST=$2
ADDRESS="$(ip -4 addr show $IFNAME | grep "inet" | head -1 | awk '{print $2}' | cut -d/ -f1)"
NETWORK=$(echo $ADDRESS | awk 'BEGIN {FS="."} ; { printf("%s.%s.%s", $1, $2, $3) }')
sed -e "s/^.*${HOSTNAME}.*/${ADDRESS} ${HOSTNAME} ${HOSTNAME}.local/" -i /etc/hosts
# Remove the ubuntu-jammy entry
sed -e '/^.*ubuntu-jammy.*/d' -i /etc/hosts
sed -e "/^.*$2.*/d" -i /etc/hosts
# Update /etc/hosts about other hosts
cat >> /etc/hosts <<EOF
${NETWORK}.11 controlplane
${NETWORK}.21 node01
${NETWORK}.22 node02
EOF
# Export the internal IP as an environment variable
echo "INTERNAL_IP=${ADDRESS}" >> /etc/environment
config/ssh.sh
#!/bin/bash
# Enable password auth in sshd so we can use ssh-copy-id
sed -i 's/PasswordAuthentication no/PasswordAuthentication yes/' /etc/ssh/sshd_config
systemctl restart sshd
config/update-dns.sh
#!/bin/bash
# Point to Google's DNS server
sed -i -e 's/#DNS=/DNS=8.8.8.8/' /etc/systemd/resolved.conf
service systemd-resolved restart
config/tmux.conf
set -g default-shell /usr/bin/bash
set -g mouse on
bind -n C-x setw synchronize-panes
config/vimrc
set nu
set ts=2
set sw=2
set et
set ai
set pastetoggle=<F3>
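To see what the hosts setup script actually extracts, its address pipeline can be exercised against a canned line of ip output (the sample line below is an assumption of the typical `ip -4 addr show` format on these VMs):

```shell
# A typical line from `ip -4 addr show enp0s8` on the controlplane (sample input)
SAMPLE="    inet 192.168.56.11/24 brd 192.168.56.255 scope global enp0s8"

# Same pipeline as setup-hosts.sh: take field 2, strip the /24 prefix length
ADDRESS="$(echo "$SAMPLE" | grep "inet" | head -1 | awk '{print $2}' | cut -d/ -f1)"
# First three octets, used to build the other hosts' /etc/hosts entries
NETWORK=$(echo "$ADDRESS" | awk 'BEGIN {FS="."} ; { printf("%s.%s.%s", $1, $2, $3) }')

echo "$ADDRESS"   # 192.168.56.11
echo "$NETWORK"   # 192.168.56
```

The NETWORK prefix is what lets the script write `${NETWORK}.11 controlplane` and the worker entries into /etc/hosts on every VM.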
Step 4: Provision the Vagrant VMs
Run the following command to start and provision the VMs based on the configuration specified:
vagrant up
This command will create and configure the VMs according to the settings in your Vagrantfile.
To check the status of all the provisioned VMs, run the following command:
vagrant status
Setting up the Kubernetes cluster using Kubeadm
Step 1: SSH into the Nodes
SSH into each node using the following command:
vagrant ssh <node-name>
Step 2: Installing and configuring containerd on each node
To install and configure containerd, copy the commands below and run them on each node:
#!/bin/bash
###### configure containerd #####
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
# sysctl params required by setup, params persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
# Apply sysctl params without reboot
sudo sysctl --system
# Verify that the required modules are loaded and the sysctl settings are applied
lsmod | grep br_netfilter
lsmod | grep overlay
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install ca-certificates curl gnupg -y
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
# Add the repository to Apt sources:
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install containerd.io -y
# cgroups required by containerd
cat <<EOF | sudo tee /etc/containerd/config.toml
version = 2
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
runtime_type = "io.containerd.runc.v2"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
SystemdCgroup = true
[plugins."io.containerd.grpc.v1.cri"]
sandbox_image = "registry.k8s.io/pause:3.9"
EOF
sudo systemctl restart containerd
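The important line in that config is SystemdCgroup = true, which makes containerd's runc runtime use the systemd cgroup driver that kubeadm configures for the kubelet by default on Ubuntu. A small sanity check can confirm the setting is present; it is shown here against a scratch copy, but on the VMs you would grep /etc/containerd/config.toml directly:

```shell
# Write a scratch copy of the config and verify the systemd cgroup driver is enabled
CONF="$(mktemp)"
cat > "$CONF" <<'EOF'
version = 2
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF

CGROUP_OK=no
if grep -q 'SystemdCgroup = true' "$CONF"; then
  CGROUP_OK=yes
fi
echo "systemd cgroup driver enabled: ${CGROUP_OK}"
rm -f "$CONF"
```

A mismatch between the kubelet's cgroup driver and containerd's is a common cause of nodes failing to become Ready, so this check is worth doing before running kubeadm.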
Step 3: Installing the Kubernetes tools
Install kubelet, kubeadm, and kubectl on each of the nodes.
sudo apt-get update
# apt-transport-https may be a dummy package; if so, you can skip that package
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
# This overwrites any existing configuration in /etc/apt/sources.list.d/kubernetes.list
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl --allow-change-held-packages
sudo apt-mark hold kubelet kubeadm kubectl
source <(kubectl completion bash) # set up autocomplete in bash into the current shell, bash-completion package should be installed first.
echo "source <(kubectl completion bash)" >> ~/.bashrc # add autocomplete permanently to your bash shell.
echo "alias k=kubectl" >> ~/.bashrc
echo "complete -o default -F __start_kubectl k" >> ~/.bashrc
source ~/.bashrc
Once all the tools have been installed and the nodes have been configured, we are ready to provision the Kubernetes cluster.
Step 4: Provision the cluster using kubeadm
SSH into the controlplane node and run the following command to get the IP address of the machine:
ip addr | grep 192.168
Once you have the IP address, run the following command to provision the cluster:
sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=<controlplane-ip-address>
Once the cluster initialisation is complete, copy the join command that is printed, SSH into each worker node, and run it there to join the cluster.
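If the join command scrolls away, or its bootstrap token expires (tokens are valid for 24 hours by default), you don't need to re-run init; kubeadm ships a subcommand that prints a fresh, ready-to-paste join command:

```shell
# Run on the controlplane; prints a fresh "kubeadm join ..." command for the workers
sudo kubeadm token create --print-join-command
```

Run the printed command with sudo on each worker node, exactly as with the original join command.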
Step 5: Configure kubectl on Your Local Machine
Copy the Kubernetes configuration file from the controlplane node to your local machine. Usually located at /etc/kubernetes/admin.conf, this file contains the cluster information required by kubectl.
Alternatively, at the end of kubeadm init you'll see a similar help message on the screen with the exact commands to configure kubectl.
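The help message printed by kubeadm init typically contains the following commands; run them on the controlplane (or copy admin.conf into ~/.kube/config on your local machine) to point kubectl at the new cluster:

```shell
# Standard post-init steps printed by kubeadm: copy admin.conf into your kubeconfig
mkdir -p "$HOME/.kube"
sudo cp -i /etc/kubernetes/admin.conf "$HOME/.kube/config"
sudo chown "$(id -u):$(id -g)" "$HOME/.kube/config"
```

The chown step matters: admin.conf is owned by root, and kubectl run as your regular user needs read access to the copied file.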
Verifying the Cluster
Finally, verify the setup by running:
kubectl get nodes
This command should display the status of the nodes in your newly created Kubernetes cluster.
Conclusion
Congratulations! You've successfully set up a Kubernetes cluster using Vagrant VMs on your local machine. This setup allows you to experiment, develop, and learn Kubernetes without the complexities of provisioning actual servers.
Remember, this guide provides a basic setup for educational or development purposes. For production-grade clusters or more advanced configurations, refer to Kubernetes official documentation and best practices.
Happy Kuberneting!