Installing Kubernetes with Kubespray and Adding Worker Nodes (2024 Version)
A comprehensive guide to setting up Kubernetes using Kubespray on GCP

Overview
This guide demonstrates how to install Kubernetes using Kubespray and add worker nodes to the cluster. We’ll be using Google Cloud Platform (GCP) for our infrastructure.
System Configuration
Environment Details
- OS: Ubuntu 20.04 LTS (Focal)
- Cloud Provider: Google Compute Engine
Node Specifications
Control Plane Node
- Hostname: test-server
- IP: 10.77.101.62
- CPU: 2 cores
- Memory: 8 GB
Worker Nodes
- test-server-agent: 10.77.101.57 (2 CPU, 8 GB RAM)
- test-server-agent2: 10.77.101.200 (2 CPU, 8 GB RAM)
Infrastructure Setup
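The HCL below references several input variables (var.test_server, var.region, var.shared_vpc, and so on). A minimal variables.tf sketch follows; the defaults are illustrative assumptions to be replaced with your own project's values.
# variables.tf - illustrative values only
variable "region" {
  type    = string
  default = "asia-northeast3" # assumption: any GCP region works
}
variable "test_server" {
  type    = string
  default = "test-server"
}
variable "test_server_ip" {
  type    = string
  default = "test-server-ip" # name for the reserved static address
}
variable "test_server_agent" {
  type    = string
  default = "test-server-agent"
}
variable "test_server_agent_ip" {
  type    = string
  default = "test-server-agent-ip"
}
variable "shared_vpc" {
  type        = string
  description = "Name or self-link of the shared VPC network"
}
variable "subnet_share" {
  type        = string
  description = "Subnetwork name prefix; \"-mgmt-a\" is appended below"
}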
Control Plane Node Configuration
resource "google_compute_address" "test_server_ip" {
name = var.test_server_ip
}
resource "google_compute_instance" "test_server" {
name = var.test_server
machine_type = "n2-standard-2"
zone = "${var.region}-a"
boot_disk {
initialize_params {
image = "ubuntu-os-cloud/ubuntu-2004-lts"
size = 10
}
}
network_interface {
network = var.shared_vpc
subnetwork = "${var.subnet_share}-mgmt-a"
access_config {
nat_ip = google_compute_address.test_server_ip.address
}
}
}
Worker Node Configuration
Only the first agent is shown; test-server-agent2 uses an identical block with its own name and address resource.
resource "google_compute_address" "test_server_agent_ip" {
name = var.test_server_agent_ip
}
resource "google_compute_instance" "test_server_agent" {
name = var.test_server_agent
machine_type = "n2-standard-2"
zone = "${var.region}-a"
boot_disk {
initialize_params {
image = "ubuntu-os-cloud/ubuntu-2004-lts"
size = 10
}
}
network_interface {
network = var.shared_vpc
subnetwork = "${var.subnet_share}-mgmt-a"
access_config {
nat_ip = google_compute_address.test_server_agent_ip.address
}
}
}
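Assuming the Google provider and credentials are already configured, the standard Terraform workflow provisions the addresses and instances defined above:
terraform init    # download the Google provider
terraform plan    # preview the resources to be created
terraform apply   # create the static IPs and instances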
Prerequisites
SSH Key Setup
# Generate SSH key if needed
ssh-keygen
# Copy SSH key to worker nodes
ssh-copy-id somaz@10.77.101.57
ssh-copy-id somaz@10.77.101.200
# Update /etc/hosts (writing to it requires root privileges)
cat << EOF | sudo tee -a /etc/hosts
10.77.101.62 test-server
10.77.101.57 test-server-agent
10.77.101.200 test-server-agent2
EOF
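Before continuing, it is worth confirming that passwordless SSH actually works; each command below should print the remote hostname without prompting for a password:
ssh somaz@test-server-agent hostname
ssh somaz@test-server-agent2 hostname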
Package Installation
# Install Python 3.10
sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt-get -y update
sudo apt install -y python3.10 python3-pip git python3.10-venv
# Verify the Python version
python3.10 --version # e.g. Python 3.10.13
Kubespray Deployment
1. Clone Repository and Setup Environment
git clone https://github.com/kubernetes-sigs/kubespray.git
VENVDIR=kubespray-venv
KUBESPRAYDIR=kubespray
python3.10 -m venv $VENVDIR
source $VENVDIR/bin/activate
cd $KUBESPRAYDIR
pip install -U -r requirements.txt
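Kubespray's master branch moves quickly, so pinning to a release branch is safer. The branch below is an assumption chosen because release-2.24 deploys Kubernetes v1.28.x, matching the cluster version shown in the verification step; re-run the pip install afterwards in case the requirements changed:
git checkout release-2.24  # assumption: pick the branch for your target Kubernetes version
pip install -U -r requirements.txt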
2. Prepare Ansible Inventory
# Copy sample inventory
cp -rfp inventory/sample inventory/somaz-cluster
# Update inventory with nodes
declare -a IPS=(10.77.101.62 10.77.101.57)
CONFIG_FILE=inventory/somaz-cluster/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}
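For reference, the generated hosts.yaml resembles the following; the builder assigns generic node1/node2 hostnames, which is why this guide keeps a hand-written inventory.ini with the real names instead:
all:
  hosts:
    node1:
      ansible_host: 10.77.101.62
      ip: 10.77.101.62
      access_ip: 10.77.101.62
    node2:
      ansible_host: 10.77.101.57
      ip: 10.77.101.57
      access_ip: 10.77.101.57
  children:
    kube_control_plane:
      hosts:
        node1:
    kube_node:
      hosts:
        node1:
        node2:
    etcd:
      hosts:
        node1:
    k8s_cluster:
      children:
        kube_control_plane:
        kube_node:
    calico_rr:
      hosts: {}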
3. Configure Inventory
# inventory/somaz-cluster/inventory.ini
[all]
test-server ansible_host=10.77.101.62 ip=10.77.101.62
test-server-agent ansible_host=10.77.101.57 ip=10.77.101.57
[kube_control_plane]
test-server
[etcd]
test-server
[kube_node]
test-server-agent
[calico_rr]
[k8s_cluster:children]
kube_control_plane
kube_node
calico_rr
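Cluster-wide settings live under the inventory's group_vars directory. For example, the Kubernetes version and CNI plugin are set in inventory/somaz-cluster/group_vars/k8s_cluster/k8s-cluster.yml; the values below reflect the v1.28.6 Calico cluster shown in the verification step:
# inventory/somaz-cluster/group_vars/k8s_cluster/k8s-cluster.yml (excerpt)
kube_version: v1.28.6
kube_network_plugin: calico  # Kubespray's default CNI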
4. Verify Ansible Connectivity
ansible all -i inventory/somaz-cluster/inventory.ini -m ping
# Optional: Update apt cache
ansible all -i inventory/somaz-cluster/inventory.ini -m apt -a 'update_cache=yes' --become
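Every host should answer with pong; output along these lines confirms Ansible can reach all nodes:
test-server | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
test-server-agent | SUCCESS => {
    "changed": false,
    "ping": "pong"
}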
5. Run Playbook
ansible-playbook -i inventory/somaz-cluster/inventory.ini cluster.yml --become
6. Configure kubectl
mkdir -p ~/.kube
sudo cp /etc/kubernetes/admin.conf ~/.kube/config
sudo chown $USER:$USER ~/.kube/config
# Setup kubectl autocomplete
echo '# kubectl completion and alias' >> ~/.bashrc
echo 'source <(kubectl completion bash)' >> ~/.bashrc
echo 'alias k=kubectl' >> ~/.bashrc
echo 'complete -F __start_kubectl k' >> ~/.bashrc
source ~/.bashrc
Adding a Worker Node
1. Update Inventory
# Add new node to IPS array
declare -a IPS=(10.77.101.62 10.77.101.57 10.77.101.200)
CONFIG_FILE=inventory/somaz-cluster/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}
2. Modify inventory.ini
[all]
test-server ansible_host=10.77.101.62 ip=10.77.101.62
test-server-agent ansible_host=10.77.101.57 ip=10.77.101.57
test-server-agent2 ansible_host=10.77.101.200 ip=10.77.101.200
[kube_control_plane]
test-server
[etcd]
test-server
[kube_node]
test-server-agent
test-server-agent2
[calico_rr]
[k8s_cluster:children]
kube_control_plane
kube_node
calico_rr
3. Run Scale Playbook
ansible-playbook -i inventory/somaz-cluster/inventory.ini scale.yml --become
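On a cluster that is already serving workloads, it is worth restricting the scale run to the new node so existing nodes are not touched; Kubespray supports Ansible's --limit flag for this:
ansible-playbook -i inventory/somaz-cluster/inventory.ini scale.yml --become --limit=test-server-agent2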
✅ Verification
Check Cluster Status
# Check nodes
kubectl get nodes -o wide
NAME                 STATUS   ROLES           AGE   VERSION   INTERNAL-IP
test-server          Ready    control-plane   21m   v1.28.6   10.77.101.62
test-server-agent    Ready    <none>          20m   v1.28.6   10.77.101.57
test-server-agent2   Ready    <none>          65s   v1.28.6   10.77.101.200
# Check system pods
kubectl get po -n kube-system
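With Calico as the CNI, the kube-system listing should show roughly the following components, all Running (the xxx suffixes are placeholders, and exact ages will differ):
NAME                                  READY   STATUS    RESTARTS   AGE
calico-kube-controllers-xxx           1/1     Running   0          20m
calico-node-xxx                       1/1     Running   0          20m
coredns-xxx                           1/1     Running   0          19m
dns-autoscaler-xxx                    1/1     Running   0          19m
kube-apiserver-test-server            1/1     Running   0          22m
kube-controller-manager-test-server   1/1     Running   0          22m
kube-proxy-xxx                        1/1     Running   0          20m
kube-scheduler-test-server            1/1     Running   0          22m
nginx-proxy-test-server-agent         1/1     Running   0          20m
nodelocaldns-xxx                      1/1     Running   0          20m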