Kubernetes Cluster on Ubuntu 20.04 Made Easy
Hey guys! So, you’re looking to dive into the awesome world of Kubernetes and want to get a cluster up and running on good ol’ Ubuntu 20.04? You’ve come to the right place! Setting up your own Kubernetes cluster might sound like a big, scary task, but trust me, with a little guidance, it’s totally doable. We’re going to walk through this step-by-step, making sure you understand why we’re doing each part. By the end of this, you’ll have your very own playground to experiment with container orchestration. Pretty cool, right?
Table of Contents
- Why Build Your Own Kubernetes Cluster?
- Prerequisites: What You’ll Need
- Step 1: Preparing Your Nodes
- Step 2: Installing kubeadm, kubelet, and kubectl
- Step 3: Initializing the Control Plane Node
- Step 4: Installing a Pod Network Add-on (CNI)
- Step 5: Joining Your Worker Node(s) to the Cluster
- Verifying Your Cluster
- Conclusion: Your Kubernetes Journey Begins!
Why Build Your Own Kubernetes Cluster?
Before we jump into the nitty-gritty, let’s chat about why you might want to build your own Kubernetes cluster on Ubuntu 20.04. Sure, there are managed services out there, but doing it yourself gives you a deeper understanding of how Kubernetes actually works. You’ll learn about networking, control plane components, worker nodes, and how they all play together. This knowledge is invaluable for troubleshooting, optimizing, and really mastering Kubernetes. Plus, it’s a fantastic way to learn without incurring cloud costs, especially when you’re just starting out or experimenting with new ideas. Think of it as your own personal, hands-on lab. We’re going to focus on using kubeadm, which is the official tool for bootstrapping a Kubernetes cluster. It simplifies a lot of the complex setup that used to be a real headache. So, grab your favorite beverage, get comfortable, and let’s get this cluster built!
Prerequisites: What You’ll Need
Alright, before we start slinging commands, let’s make sure you’ve got everything ready. Think of this as your pre-flight checklist. For this guide, we’ll assume you’re setting up a basic two-node cluster: one control plane node (where the brains of the operation live) and one worker node (where your actual applications will run). You can always add more worker nodes later, but this is a great starting point. You’ll need at least two Ubuntu 20.04 servers (virtual machines or physical machines, doesn’t matter). These servers should have a minimum of 2GB of RAM and 2 CPUs each. More is always better, especially for the control plane if you plan on running heavier workloads, but 2GB/2CPUs is the bare minimum to get things moving. Make sure they have static IP addresses. This is super important for Kubernetes to communicate reliably. If your IPs are dynamic and change, your cluster will likely break. Also, ensure SSH access to all your nodes with a user that has sudo privileges. We’ll be running commands that require root access. Finally, and this is a big one, you need to disable swap on all nodes. Kubernetes doesn’t play nicely with swap enabled, as it can cause performance issues and unexpected behavior. We’ll show you how to do that. Oh, and a stable internet connection is a must for downloading necessary packages. Got all that? Awesome, let’s move on!
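If you want a quick pre-flight check, these standard Ubuntu utilities will confirm each machine meets the minimums (run them on every node):
# Check CPU count, memory, and whether swap is currently active
nproc
free -h
swapon --show
# Confirm the node has the static IP you expect
ip -4 addr show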
Step 1: Preparing Your Nodes
Okay, let’s get our hands dirty with the actual setup. Preparing your nodes is a crucial first step for any successful Kubernetes deployment. We need to make sure all our machines are configured correctly before we even think about installing Kubernetes. This involves a few key actions that ensure stability and prevent common issues down the line. First up, we need to update our package lists and upgrade any existing packages. This is standard practice for any Linux server setup. Open up an SSH session to each of your Ubuntu 20.04 machines and run:
sudo apt update && sudo apt upgrade -y
This ensures you’re running the latest software versions available for your distribution. Next, we absolutely must disable swap. As I mentioned earlier, Kubernetes really dislikes swap being enabled. It can lead to performance degradation and unpredictable application behavior. To disable swap for the current session, run:
sudo swapoff -a
But that’s not enough! Swap will likely be re-enabled after a reboot. To make this change permanent, you need to edit the /etc/fstab file. Open it with your favorite text editor (like nano or vim):
sudo nano /etc/fstab
Find the line that refers to swap (it will likely contain the word swap) and comment it out by adding a # at the beginning of the line. It should look something like this:
# /swap.img none swap sw 0 0
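If you’d rather not edit the file by hand, a sed one-liner can do the commenting for you. This is just a sketch that assumes your swap entry’s fields are whitespace-separated, as in a stock fstab:
# Comment out any fstab line that contains a "swap" field
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab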
Save the file and exit. Now, let’s configure some essential kernel modules and settings. Kubernetes networking relies on certain kernel modules being loaded and network parameters being set correctly. We need to ensure the br_netfilter module is loaded, which is necessary for bridged network traffic to be visible to iptables. Load it now and make it persist across reboots:
sudo modprobe br_netfilter
echo "br_netfilter" | sudo tee /etc/modules-load.d/kubernetes.conf
We also need to configure sysctl parameters to enable IP forwarding and set bridge network filter settings. Create a new sysctl configuration file:
sudo nano /etc/sysctl.d/kubernetes.conf
Add the following lines to this file (note that net.ipv4.ip_forward = 1 is what actually enables IP forwarding):
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
vm.swappiness = 0 reinforces disabling swap, ensuring the system prioritizes RAM. Then, apply these changes without rebooting:
sudo sysctl --system
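A quick sanity check that the module and settings actually took effect:
# br_netfilter should appear in the module list...
lsmod | grep br_netfilter
# ...and both of these sysctls should print 1
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward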
Finally, we need to install containerd, which is the container runtime that Kubernetes will use. Docker Engine can also work (via the cri-dockerd adapter, since Kubernetes 1.24 removed its built-in Docker support), but containerd is often preferred for its simplicity and integration. Let’s install it:
sudo apt install -y containerd
After installation, you need to configure containerd. Create its default configuration file:
sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml
Now, edit the configuration file to ensure the systemd cgroup driver is used. This is important for Kubernetes to manage container resources properly. Open the file:
sudo nano /etc/containerd/config.toml
Find the [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options] section and make sure SystemdCgroup is set to true:
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
SystemdCgroup = true
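If you’d prefer to skip the manual edit, a sed one-liner works too. This assumes the stock containerd 1.x config template, where SystemdCgroup = false appears exactly once:
# Flip the cgroup driver flag in place
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml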
Save and close the file. Then, restart the containerd service to apply the changes:
sudo systemctl restart containerd
And enable it to start on boot:
sudo systemctl enable containerd
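Before moving on, it’s worth confirming the runtime is actually up:
# Should print "active"
sudo systemctl is-active containerd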
Phew! That was a lot, but these steps are critical for a stable Kubernetes environment. Make sure you perform all these on both your control plane and worker nodes.
Step 2: Installing kubeadm, kubelet, and kubectl
With our nodes prepped and ready to go, it’s time to install the core Kubernetes components: kubeadm, kubelet, and kubectl. These are the tools that will allow us to initialize, manage, and interact with our cluster. We’ll be using kubeadm to bootstrap the cluster, kubelet to run on each node and manage pods, and kubectl on your local machine (or the control plane node) to send commands to the cluster. First, let’s set up the Kubernetes package repository so apt knows where to find the packages. Run these commands on all your nodes (control plane and worker):
# Install dependencies
sudo apt update
sudo apt install -y apt-transport-https ca-certificates curl gpg
# Download the Kubernetes package repository signing key.
# (The old apt.kubernetes.io / packages.cloud.google.com repository has been
# shut down, so we use the community-owned pkgs.k8s.io repository instead.)
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
# Add the Kubernetes apt repository (pinned to the v1.28 minor release)
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
Now that the repository is added, we need to update our package list again to include the Kubernetes packages:
sudo apt update
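To double-check that apt can now see the Kubernetes packages (the candidate version should come from pkgs.k8s.io):
apt-cache policy kubeadm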
It’s crucial to pin the versions of kubeadm, kubelet, and kubectl to avoid unexpected upgrades that might break compatibility. We want to use specific versions that work together. Let’s install them, and also make sure they are held back from automatic upgrades:
sudo apt install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
The apt-mark hold command tells apt not to automatically upgrade these packages. This is a good practice when setting up specific versions of critical software. Now, let’s verify the installation by checking the versions:
kubeadm version
You should see the version number printed. If you encounter any issues, double-check the previous steps, especially the repository configuration and package installation. Remember, consistency across all nodes is key here. If you want to install a specific version (e.g., 1.28.0, which the pkgs.k8s.io packages publish as 1.28.0-1.1), you would modify the install command like so: sudo apt install -y kubelet=1.28.0-1.1 kubeadm=1.28.0-1.1 kubectl=1.28.0-1.1.
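You can also check the other two tools individually:
kubectl version --client
kubelet --version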
Step 3: Initializing the Control Plane Node
Alright, this is where the magic happens! We’re going to initialize the control plane node using kubeadm. This command will set up all the necessary components for the Kubernetes control plane, like the API server, etcd (the cluster’s database), the scheduler, and the controller manager. This is the brain of your Kubernetes cluster. Make sure you’re running these commands on the machine designated as your control plane node. Let’s start by running:
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
Let’s break down this command: sudo kubeadm init is the core command. The --pod-network-cidr=10.244.0.0/16 flag is extremely important. It tells Kubernetes which IP address range will be used for your pods. This CIDR block must be compatible with the network plugin you choose later. 10.244.0.0/16 is the default range for the Flannel network plugin, which we’ll use in this guide. If you choose a different network plugin, you might need to adjust this CIDR accordingly. This command can take a few minutes to complete as kubeadm downloads container images and configures the control plane components. Once it’s done, you’ll see a lot of output, including some crucial information at the end.
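One optional trick: you can pre-pull the control plane images before running init, which speeds up the init itself and surfaces registry or connectivity problems early:
sudo kubeadm config images pull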
Pay close attention to the instructions for setting up kubectl for a non-root user. It will tell you to run commands like these:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
These commands copy the admin configuration file to your user’s home directory, allowing you to interact with the cluster using kubectl without needing sudo for every command. If you’re logged in as the ubuntu user, these are the commands you’ll use. You can verify that kubectl is working by running:
kubectl cluster-info
If everything is set up correctly, you should see information about your Kubernetes control plane. The output of kubeadm init also provides the join command for your worker nodes. It will look something like this:
kubeadm join <control-plane-ip>:6443 --token <some-token> \
    --discovery-token-ca-cert-hash sha256:<some-hash>
Do not lose this command! Copy it and save it somewhere safe. You’ll need it in the next step to connect your worker node(s) to the control plane. If you accidentally close the terminal or lose the join command, don’t worry. You can regenerate a token and its hash on the control plane node using these commands:
# To create a new token (valid for 24 hours by default)
sudo kubeadm token create --print-join-command
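You can also list the tokens that already exist (handy for checking expiry) with:
sudo kubeadm token list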
This will output the kubeadm join command you need. Remember to run sudo kubeadm init with the correct --pod-network-cidr that matches your chosen CNI plugin.
Step 4: Installing a Pod Network Add-on (CNI)
So, your control plane is initialized, which is awesome! But right now, your pods can’t communicate with each other. This is because Kubernetes needs a pod network add-on, often called a Container Network Interface (CNI) plugin, to enable pod-to-pod communication across different nodes. Without this, your applications won’t be able to talk to each other, which defeats a lot of the purpose of Kubernetes! There are several CNI plugins available, but Flannel is a popular, simple, and reliable choice for getting started. We’ll use Flannel for this guide. Make sure you are still on your control plane node to apply the Flannel configuration. The command is straightforward:
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
This command downloads the Flannel manifest file from its GitHub repository and applies it to your cluster using kubectl. Flannel will then deploy its own pods (as a DaemonSet) to each node, creating the necessary overlay network for your pods. It sets up the networking infrastructure that allows pods on different machines to reach each other seamlessly.
What’s happening here? Flannel essentially creates a virtual network that spans across all your nodes. It uses techniques like VXLAN encapsulation to tunnel network traffic between pods, regardless of which physical machine they are running on. This is a fundamental piece of Kubernetes networking that enables features like service discovery and load balancing across your cluster. You can monitor the deployment of Flannel pods by running:
kubectl get pods -n kube-flannel
Recent Flannel releases deploy into their own kube-flannel namespace (older manifests used kube-system, so look there if that namespace doesn’t exist). You should see kube-flannel-ds-* pods running. It might take a minute or two for them to become fully ready. Once Flannel is running, your cluster’s networking is mostly set up. Until then, you’ll notice that your control plane node (kubectl get nodes) shows up as NotReady. This is because the kubelet reports the node’s network as not ready until a CNI plugin is installed. After applying the Flannel manifest, the node status should change to Ready shortly.
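You can watch that transition happen live:
# -w streams node status updates; Ctrl+C to stop
kubectl get nodes -w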
If you decided to use a different Pod Network CIDR during kubeadm init, make sure you use a Flannel configuration that matches your CIDR. For example, if you used 192.168.0.0/16, you would need to modify the default manifest for that range. The official Flannel documentation is your best friend here if you deviate from the standard setup. Getting this network layer right is absolutely critical for a functional cluster, so take your time and ensure it’s applied correctly.
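Here’s a rough sketch of what customizing the manifest looks like, assuming it still embeds the pod network in its net-conf.json ConfigMap, and using 192.168.0.0/16 as the example range:
# Download the manifest, point its "Network" value at your CIDR, then apply it
curl -LO https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
sed -i 's|10.244.0.0/16|192.168.0.0/16|' kube-flannel.yml
kubectl apply -f kube-flannel.yml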
Step 5: Joining Your Worker Node(s) to the Cluster
We’ve got a control plane running and a network layer set up. Now it’s time to join your worker node(s) to the cluster. This is where you connect your other machines (the ones that will actually run your applications) to the control plane. Remember that kubeadm join command we saved earlier? This is what it’s for! If you don’t have it anymore, don’t sweat it – we showed you how to regenerate it on the control plane node. Ensure you are logged into your worker node via SSH for this step.
Execute the kubeadm join command that you copied from the control plane initialization output. It will look something like this:
sudo kubeadm join <control-plane-ip>:6443 --token <some-token> \
    --discovery-token-ca-cert-hash sha256:<some-hash>
This command tells the worker node to connect to the specified control plane IP and port, authenticating using the provided token and certificate hash. The kubelet service on the worker node will then register itself with the control plane’s API server. It might take a minute or two for the worker node to fully join and be recognized by the cluster. You’ll see output indicating success or failure. If it fails, double-check the IP address, token, and hash, and ensure that network connectivity exists between your worker node and the control plane (firewalls can be a common culprit!).
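A quick way to rule out a firewall problem is to test whether the worker can actually reach the API server port (nc is usually present on Ubuntu server installs):
# Run this from the worker; "succeeded" means the port is reachable
nc -vz <control-plane-ip> 6443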
Once the join command completes successfully, you can switch back to your control plane node and verify that the worker node has joined the cluster. Run the following command:
kubectl get nodes
You should now see both your control plane node and your worker node listed, with their status as Ready. If you have multiple worker nodes, repeat this join process for each one. The key is that the kubeadm join command is executed on the worker node, and it contains details about the control plane. This step essentially expands your cluster’s capacity, allowing you to deploy more applications and handle more load. It’s the process of scaling out your Kubernetes infrastructure. Congratulations, you’ve successfully added a worker node!
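One small, purely cosmetic follow-up: freshly joined workers show <none> in the ROLES column of kubectl get nodes. If that bugs you, you can add the conventional role label yourself (replace <your-worker-node-name> with the node’s actual name):
kubectl label node <your-worker-node-name> node-role.kubernetes.io/worker=worker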
Verifying Your Cluster
To verify your cluster is working as expected, you can run a few more checks. From your control plane node (where you have kubectl configured), run:
kubectl get nodes -o wide
This command should list all your nodes (control plane and workers) with their internal and external IP addresses, OS image, kernel version, and container runtime version. Ensure all nodes show Ready status. You can also check the status of system pods:
kubectl get pods --all-namespaces
This command lists all pods running in the cluster, including the core Kubernetes components in the kube-system namespace (such as etcd, coredns, and kube-proxy) and the Flannel pods. All these pods should be in a Running state. To really test it out, you can deploy a simple application. Let’s deploy a basic Nginx web server:
kubectl create deployment nginx-test --image=nginx
kubectl expose deployment nginx-test --port=80 --type=NodePort
Wait a moment for the deployment and service to be created, then check the pods:
kubectl get pods
You should see an nginx-test pod running. Then check the service:
kubectl get services
Note the NodePort assigned to the nginx-test service (it will be a port in the range 30000-32767). You can then access Nginx by opening a web browser and navigating to http://<any-node-ip>:<nodeport>. For example, if your control plane IP is 192.168.1.100 and the NodePort is 31234, you’d go to http://192.168.1.100:31234. You should see the default Nginx welcome page! This confirms that your applications can be deployed and accessed.
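If you’d rather test from the terminal, curl works just as well, and once you’re satisfied you can clean up the test resources:
# Should return the Nginx welcome page HTML
curl http://<any-node-ip>:<nodeport>
# Remove the test deployment and service
kubectl delete service nginx-test
kubectl delete deployment nginx-test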
Conclusion: Your Kubernetes Journey Begins!
And there you have it, folks! You’ve successfully created your very own Kubernetes cluster on Ubuntu 20.04 using kubeadm. We’ve covered everything from preparing your nodes, installing the core components, initializing the control plane, setting up networking with Flannel, and joining your worker nodes. This hands-on experience is absolutely priceless for anyone looking to get serious about container orchestration. You’ve built a foundation that you can now expand upon. You can add more worker nodes, explore different CNI plugins, dive into advanced networking, and start deploying your own containerized applications with confidence. Remember, Kubernetes is a vast ecosystem, and this is just the beginning. Keep experimenting, keep learning, and don’t be afraid to break things and fix them – that’s how we all learn best. Happy orchestrating!