Kubernetes Cluster on Ubuntu 22.04: Step-by-Step Guide
Hey guys, are you ready to dive into the awesome world of Kubernetes and get your very own cluster up and running on Ubuntu 22.04? Awesome! In this guide, we’re going to walk through the entire process, step by step, making sure even if you’re a bit new to this, you’ll be able to follow along. Kubernetes, often shortened to K8s, is a super powerful open-source system for automating deployment, scaling, and management of containerized applications. Think of it as the ultimate orchestrator for your containers, making your life as a developer or sysadmin way, way easier. So, grab your favorite beverage, get comfortable, and let’s get this cluster built!
Table of Contents
- Why Ubuntu 22.04 for Your Kubernetes Cluster?
- Prerequisites: What You’ll Need Before We Start
- Step 1: Preparing Your Nodes (All of Them!)
- Step 2: Installing Container Runtime (containerd)
- Step 3: Installing Kubernetes Components (kubeadm, kubelet, kubectl)
- Step 4: Initializing the Control Plane Node
- Step 5: Installing a Pod Network Add-on (Flannel)
- Step 6: Joining Worker Nodes to the Cluster
- Conclusion: Your Kubernetes Journey Begins!
Why Ubuntu 22.04 for Your Kubernetes Cluster?
So, why pick Ubuntu 22.04, codenamed “Jammy Jellyfish,” for your Kubernetes cluster, you ask? Well, guys, Ubuntu has long been a favorite in the server world, and 22.04 is no exception. It brings a ton of stability, security updates, and a generally rock-solid foundation that’s perfect for running complex systems like Kubernetes. Plus, it’s got excellent community support, meaning if you run into any snags, there’s a good chance someone else has already figured it out and shared the solution. For Kubernetes, having a reliable operating system is key. You want an OS that’s going to stay out of your way, perform well, and provide a stable environment for your containers to thrive. Ubuntu 22.04 delivers on all these fronts. It’s packed with the latest software packages and kernel improvements, which can offer performance benefits and better compatibility with the tools you’ll be using for your Kubernetes installation, like container runtimes (we’ll get to that!). The Long-Term Support (LTS) nature of Ubuntu 22.04 also means you can rely on it for a good few years without worrying about frequent, disruptive upgrades. This is super important for production environments where stability is king. We’re talking about keeping your applications up and running without a hitch, and a solid OS is the bedrock of that. So, while you could install Kubernetes on other Linux distributions, Ubuntu 22.04 offers a fantastic blend of modern features, stability, and widespread adoption that makes it an ideal choice for building your K8s cluster. Let’s get this party started!
Prerequisites: What You’ll Need Before We Start
Alright team, before we jump into the actual installation, let’s make sure you’ve got all your ducks in a row. Setting up a Kubernetes cluster, even a small one, involves a few moving parts. First off, you’ll need a few machines – these can be physical servers or virtual machines. For a basic setup, you’ll want at least two: one to act as your control plane (formerly known as the master node) and at least one worker node where your actual application containers will run. If you’re just experimenting, you can even run everything on a single machine, but it’s good practice to separate the roles. Each machine needs to be running Ubuntu 22.04 LTS, and the control plane node should have at least 2 CPUs and 2 GB of RAM – kubeadm’s preflight checks will complain otherwise. Make sure they’re all connected via a network where they can communicate with each other, and give each node a unique hostname. You’ll also need sudo privileges on all these machines, as we’ll be installing software and modifying system settings. A stable internet connection is a must for downloading packages. Oh, and disable swap! Kubernetes doesn’t play nicely with swap enabled, so you’ll need to turn it off on all your nodes. We’ll show you how to do that. Finally, it’s a good idea to have a basic understanding of Linux commands and networking concepts. Don’t worry if you’re not a seasoned pro, we’ll guide you through the commands, but knowing your way around the terminal will definitely speed things up. So, check off these items, and we’ll be golden!
Step 1: Preparing Your Nodes (All of Them!)
Okay, awesome! First up, we need to get all our machines (your control plane and your worker nodes) ready. This is a crucial step, guys, so pay close attention. We need to ensure a couple of things are consistent across all nodes for Kubernetes to function smoothly. The first thing on our list is disabling swap. Kubernetes really, really doesn’t like it when swap is enabled. It can cause performance issues and throw off the scheduler’s memory accounting. To disable swap temporarily (until reboot), run:
sudo swapoff -a
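You can confirm swap is really off with free -h – the Swap row should now read all zeros:
free -h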
To make this change permanent, you’ll need to edit your /etc/fstab file. Open it with your favorite text editor, like nano or vim:
sudo nano /etc/fstab
Then, find the line that refers to swap (it usually looks something like /swap.img ... swap ...) and comment it out by adding a # at the beginning of the line. Save and exit.
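By the way, if you’d rather not open an editor, here’s a one-liner that comments out any swap entry in /etc/fstab for you – a convenience sketch that assumes the entry has the word swap surrounded by whitespace, so give the file a quick look afterwards:
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab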
Next, we need to enable some kernel modules and configure sysctl parameters. These settings help Kubernetes network traffic flow properly. Run the following commands on all your nodes:
sudo modprobe overlay
sudo modprobe br_netfilter
sudo sysctl net.bridge.bridge-nf-call-iptables=1
sudo sysctl net.ipv4.ip_forward=1
To make these sysctl settings persistent across reboots, create a new configuration file:
sudo nano /etc/sysctl.d/kubernetes.conf
Add the following lines to this file:
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
Save and exit the file. Then, apply these settings immediately without rebooting:
sudo sysctl --system
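One gotcha: modprobe only loads overlay and br_netfilter for the current boot. To have them load automatically after a reboot, list them in a modules-load.d file as well (the kubernetes.conf name is just a convention to match the sysctl file above), and then confirm both sysctl values actually stuck – each should print 1:
printf "overlay\nbr_netfilter\n" | sudo tee /etc/modules-load.d/kubernetes.conf
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward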
Finally, make sure your nodes can resolve each other’s hostnames. If you’re using hostnames, you might need to update /etc/hosts on each machine, or better yet, ensure you have a DNS server set up. For this guide, we’ll assume you can ping each node by its IP address. So, that’s it for the initial node prep! We’ve disabled swap, enabled necessary kernel modules, and configured IP forwarding. Great job, guys!
Step 2: Installing Container Runtime (containerd)
Alright, now that our nodes are prepped and ready, it’s time to install a container runtime. Kubernetes needs something to actually run your containers. While Docker used to be the go-to, the Kubernetes community has largely standardized on containerd. It’s a bit simpler and more integrated. So, let’s get containerd installed on all your nodes. We’ll start by updating your package list:
sudo apt update
Now, install containerd straight from Ubuntu’s own repositories (you may see the containerd.io package mentioned elsewhere – that’s Docker’s packaging of the same runtime, but the stock containerd package is all we need here):
sudo apt install -y containerd
After installation, we need to configure containerd. It usually creates a default configuration file, but we need to adjust it for Kubernetes. First, let’s create the default configuration directory if it doesn’t exist:
sudo mkdir -p /etc/containerd
Now, generate the default configuration and save it to a file:
sudo containerd config default | sudo tee /etc/containerd/config.toml
This command dumps the default containerd configuration and pipes it to tee, which writes it to the specified file. We need to make one crucial change to this configuration to ensure Kubernetes works seamlessly. Open the configuration file for editing:
sudo nano /etc/containerd/config.toml
Inside this file, you’ll find a setting called SystemdCgroup. We need to set it to true. Look for the line that says SystemdCgroup = false and change it to SystemdCgroup = true. This tells containerd to use systemd for cgroup management; the kubelet and the container runtime need to agree on a cgroup driver, and on a systemd-based distro like Ubuntu 22.04 the systemd driver is the one to use.
Here’s what that part should look like:
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
SystemdCgroup = true
Save and exit the file. Now, we need to restart the containerd service for these changes to take effect:
sudo systemctl restart containerd
And to make sure it starts automatically on boot:
sudo systemctl enable containerd
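Before moving on, it’s worth a quick sanity check that the cgroup change stuck and the service is happy. And if you’re prepping several nodes, a sed one-liner can make the same edit for you – a convenience sketch that assumes the stock config’s single SystemdCgroup = false line, so do verify the result:
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
grep SystemdCgroup /etc/containerd/config.toml
systemctl is-active containerd
The grep should show SystemdCgroup = true, and systemctl is-active should print active.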
Boom! You’ve successfully installed and configured containerd. This is a massive step towards getting your Kubernetes cluster up and running. High five, everyone!
Step 3: Installing Kubernetes Components (kubeadm, kubelet, kubectl)
Alright, guys, we’ve got our nodes prepped and our container runtime sorted. Now it’s time to install the core Kubernetes tools: kubeadm, kubelet, and kubectl. kubeadm is the tool we’ll use to bootstrap our cluster, kubelet is the agent that runs on each node and ensures containers are running, and kubectl is our command-line interface for interacting with the cluster. We need to install these on all nodes.
First, let’s add the official Kubernetes package repositories to your system. This ensures you get the latest stable versions. We’ll start by updating your package index again and installing some prerequisite packages:
sudo apt update
sudo apt install -y apt-transport-https ca-certificates curl
Now, we need to download the public signing key for the Kubernetes package repositories. This key is used to verify the authenticity of the packages:
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
Next, add the Kubernetes APT repository itself. This command adds the repository to your system’s sources list:
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
Now that the repository is added, let’s update your package list one more time to include the Kubernetes packages:
sudo apt update
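If you want to double-check that the new repository is actually being picked up, apt-cache can show you where a package would come from – kubeadm should now list pkgs.k8s.io as its source:
apt-cache policy kubeadm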
Now install the three core packages, and then put them on hold. Holding matters because an unattended apt upgrade could bump kubelet to a different version than the rest of your cluster – Kubernetes upgrades should be done deliberately with kubeadm, not as a side effect of routine patching. So we install, then hold all three:
sudo apt install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
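To confirm everything landed, check the versions – with the v1.29 repository above, all three should report the same 1.29.x release:
kubeadm version
kubelet --version
kubectl version --client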
Important Note: The v1.29 in the repository URL corresponds to the Kubernetes 1.29 minor release. If you need a different version, adjust the URL accordingly (e.g., /v1.28/ for version 1.28). It’s generally a good idea to stick with a specific minor version for stability.
Finally, we need to ensure that kubelet is enabled to start on boot, but we won’t start it just yet. kubeadm will configure and start it during the cluster initialization.
sudo systemctl enable kubelet
And that’s it! You’ve successfully installed kubeadm, kubelet, and kubectl on all your nodes. These are the essential building blocks for your Kubernetes cluster. Nicely done, team!
Step 4: Initializing the Control Plane Node
Alright, guys, this is where the magic happens! We’re about to initialize our control plane node. This node will manage the cluster’s state and make decisions. Remember, you only run this command on the machine designated as your control plane node. Make sure you have a static IP address assigned to your control plane node, or at least an IP that won’t change. We’ll use the control plane node’s IP address in the command.
First, let’s get the IP address of your control plane node. You can usually find this with ip a or hostname -I.
Now, let’s initialize the control plane using kubeadm init. You’ll need to specify the pod network CIDR. A common choice is 10.244.0.0/16 if you plan to use Flannel as your network plugin (which we’ll cover later), or 192.168.0.0/16 for Calico. We’ll use 10.244.0.0/16 for this example:
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
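One assumption baked into that command: kubeadm advertises the API server on whichever interface holds the default route. If your control plane node has more than one network interface (common with VMs that have a NAT adapter plus a host-only network), pin the address explicitly – the IP below is a placeholder for your control plane node’s actual address:
sudo kubeadm init --apiserver-advertise-address=192.168.1.100 --pod-network-cidr=10.244.0.0/16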
Whichever variant you run, the command will take a few minutes to complete. kubeadm will set up the necessary components for the control plane, including the API server, etcd database, scheduler, and controller manager. It will also configure kubelet on this node.
Once kubeadm init finishes successfully, it will output some important information. Pay close attention to this output! It will tell you how to configure kubectl to manage your cluster and provide the kubeadm join command you’ll need to connect your worker nodes to the cluster.
Here’s what you should see (or similar):
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following command as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
To join your worker nodes to the cluster, run (on each worker node):
kubeadm join <control-plane-ip>:6443 --token <token> \
--discovery-token-ca-cert-hash sha256:<hash>
So, let’s configure kubectl for your current user. Run these commands:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Now you should be able to interact with your cluster using kubectl!
Let’s verify that the control plane components are running:
kubectl get pods -n kube-system
You should see pods like etcd, kube-apiserver, kube-controller-manager, kube-scheduler, and coredns. Most of them should be in a Running state. Don’t panic if the coredns pods are stuck in Pending, though – they stay that way until we install a pod network, which is exactly what the next step is about.
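A side note for anyone who went the single-machine route from the prerequisites: kubeadm taints the control plane node so ordinary workloads won’t schedule on it. For a one-node cluster you’ll want to remove that taint (this is the taint name used by recent Kubernetes releases, including 1.29):
kubectl taint nodes --all node-role.kubernetes.io/control-plane-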
Crucially, save that kubeadm join command somewhere safe! You’ll need it very soon to add your worker nodes. If you lose it, you can regenerate a token on the control plane node with sudo kubeadm token create --print-join-command.
Congratulations, your control plane is up and running! You’re almost there, guys!
Step 5: Installing a Pod Network Add-on (Flannel)
Okay, team, your control plane is initialized, but your nodes can’t communicate with each other yet for pod networking. Kubernetes requires a network add-on (a Container Network Interface or CNI plugin) to enable communication between pods across different nodes. A very popular and straightforward choice is Flannel. So, let’s get Flannel installed on your cluster. This command should be run on your control plane node.
Flannel is typically installed using a YAML manifest file. You can download the latest manifest from the official Flannel GitHub repository. We’ll use kubectl apply to deploy it.
First, let’s fetch the Flannel manifest. You can do this by running:
curl -LO https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
This command downloads the YAML file into your current directory. Make sure you’re on your control plane node and have kubectl configured.
Now, apply this manifest to your cluster. Note that there’s no sudo here – kubectl authenticates through the kubeconfig we set up earlier, not through root privileges:
kubectl apply -f kube-flannel.yml
This command tells Kubernetes to create the necessary resources (a DaemonSet that runs a Flannel agent on every node, plus the ConfigMap and RBAC objects it needs) to run Flannel across your cluster. Flannel will set up the overlay network that allows pods to communicate.
After applying the manifest, it might take a minute or two for the Flannel pods to start up. You can check their status with:
kubectl get pods --all-namespaces
You should see a kube-flannel-ds pod running on each of your nodes (once they are joined). The coredns pods should also come up and become ready once Flannel is running.
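Recent Flannel manifests put everything into a dedicated kube-flannel namespace, so you can also scope the check there – the exact namespace depends on the manifest version you downloaded, so adjust if yours differs:
kubectl get pods -n kube-flannel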
Why is this important? Without a CNI plugin like Flannel, your pods won’t be able to communicate with each other, especially if they land on different nodes. This is fundamental for distributed applications. Flannel creates a virtual network that spans all your nodes, making it seem like all your pods are on the same flat network.
So, you’ve successfully installed a pod network! This is a huge milestone. Your cluster is now much closer to being fully functional. High fives all around!
Step 6: Joining Worker Nodes to the Cluster
Alright, team, the final piece of the puzzle: connecting your worker nodes! Remember that kubeadm join command we saved from the control plane initialization? Now’s the time to use it. You’ll run this command on each of your worker nodes.
Make sure you have SSH’d into your worker node and have sudo privileges. The command looks something like this (replace the placeholders with your actual token and hash):
sudo kubeadm join <control-plane-ip>:6443 --token <token> \
--discovery-token-ca-cert-hash sha256:<hash>
Where do you get the <token> and <hash>?
- Control plane IP: This is the IP address of your control plane node.
- Token: This is a temporary authentication token generated during kubeadm init. If you lost it, you can regenerate it on the control plane node by running: sudo kubeadm token create --print-join-command
- Hash: This is the SHA256 hash of the control plane node’s CA certificate. The kubeadm token create --print-join-command command will also output this hash.
Let’s say your control plane IP is 192.168.1.100 and the join command from the kubeadm init output was:
kubeadm join 192.168.1.100:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef
Then, on each worker node, you would run:
sudo kubeadm join 192.168.1.100:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef
Once kubeadm join completes on a worker node, it will configure kubelet on that node and join it to the cluster. You’ll see output confirming the join was successful.
Verification:
Now, head back to your control plane node. Run the following command to see all the nodes in your cluster:
kubectl get nodes
You should see your control plane node listed with the control-plane role and your worker nodes with no role assigned (that’s normal for workers), all with the status Ready. It might take a minute or two for the worker nodes to show up as Ready after they join.
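As a final smoke test, deploy something small and watch where it lands – nginx here is just a stand-in image, and hello is an arbitrary deployment name:
kubectl create deployment hello --image=nginx
kubectl get pods -o wide
The -o wide output includes a NODE column, so you can confirm the pod actually got scheduled onto one of your workers.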
And there you have it, guys! You’ve successfully installed a Kubernetes cluster on Ubuntu 22.04, with a control plane and worker nodes all communicating. This is a massive achievement! You’ve set up the OS, installed the container runtime, deployed Kubernetes components, configured networking, and joined your nodes. You’re now ready to start deploying your applications. Awesome job, everyone!
Conclusion: Your Kubernetes Journey Begins!
So there you have it, folks! You’ve successfully navigated the process of setting up a Kubernetes cluster on Ubuntu 22.04. From preparing your nodes and installing essential components like containerd, kubeadm, kubelet, and kubectl, to initializing the control plane and enabling pod networking with Flannel, you’ve tackled it all. We’ve covered the prerequisites, the nitty-gritty commands, and the crucial verification steps. This is a significant accomplishment, and you should feel really proud of yourself! You now have a foundational Kubernetes environment ready for your containerized applications. This is just the beginning of your journey with Kubernetes. From here, you can explore deploying applications using Deployments and Services, managing configurations with ConfigMaps and Secrets, implementing persistent storage, and much more. The possibilities are vast! Remember, the Kubernetes community is huge and incredibly supportive, so don’t hesitate to dive into the documentation or seek help if you get stuck. Keep experimenting, keep learning, and most importantly, keep building awesome things with your new K8s cluster! Happy containerizing, everyone!