
How to Install and Configure Kubernetes on Rocky Linux 9: Complete Step-by-Step Guide

Introduction to Kubernetes on Rocky Linux 9

Kubernetes has become the de facto standard for container orchestration in 2026, and Rocky Linux 9 provides an enterprise-grade, stable foundation for production Kubernetes clusters. This comprehensive guide walks you through installing and configuring a complete Kubernetes cluster on Rocky Linux 9, from initial server setup to deploying your first application.

Whether you’re building a development environment or deploying production workloads, this tutorial covers everything you need to know about running Kubernetes on Rocky Linux 9.

Why Rocky Linux 9 for Kubernetes?

Rocky Linux 9 is an excellent choice for Kubernetes deployments:

  • Enterprise stability: 100% RHEL-compatible with 10-year support lifecycle
  • Security: SELinux enabled by default, regular security updates
  • Cost-effective: Free alternative to RHEL with no licensing fees
  • Community support: Large enterprise user base and active community
  • Compatibility: Works seamlessly with commercial Kubernetes platforms (OpenShift, Rancher)

Prerequisites

Before starting, ensure you have:

  • Rocky Linux 9: Fresh installation with minimal package set
  • Hardware requirements:
    • Master node: 2 CPU cores, 4GB RAM, 20GB disk minimum
    • Worker nodes: 2 CPU cores, 2GB RAM, 20GB disk minimum
  • Network: Static IP addresses for all nodes, internet connectivity
  • Root or sudo access: Administrative privileges required
  • Firewall ports: See firewall configuration section below

This guide assumes a 3-node cluster: 1 master (control plane) and 2 worker nodes.

Step 1: Initial System Configuration

Perform these steps on ALL nodes (master and workers).

Update System Packages

sudo dnf update -y
sudo dnf install -y vim wget curl

Set Hostnames

Set unique hostnames for each node:

# On master node
sudo hostnamectl set-hostname k8s-master

# On worker node 1
sudo hostnamectl set-hostname k8s-worker1

# On worker node 2
sudo hostnamectl set-hostname k8s-worker2

Configure /etc/hosts

Add entries for all cluster nodes on EACH machine:

sudo vi /etc/hosts

# Add these lines (replace with your actual IPs)
192.168.1.10 k8s-master
192.168.1.11 k8s-worker1
192.168.1.12 k8s-worker2

Disable Swap

Kubernetes requires swap to be disabled:

sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab
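To confirm swap is fully off, both of these should report no active swap:

free -h          # The Swap: row should show 0B
swapon --show    # Prints nothing when no swap is active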

Disable SELinux (Temporary – For Testing)

For simplicity in testing environments, disable SELinux. In production, configure proper SELinux policies:

sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

Note: For production, keep SELinux enabled and configure Kubernetes with proper SELinux contexts.

Step 2: Install Container Runtime (containerd)

Kubernetes requires a container runtime. We’ll use containerd, the industry-standard runtime in 2026.

Load Required Kernel Modules

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

Configure Kernel Parameters

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

# Apply without reboot
sudo sysctl --system

Install containerd

sudo dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo dnf install -y containerd.io

Configure containerd

sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml

# Enable systemd cgroup driver
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml

sudo systemctl restart containerd
sudo systemctl enable containerd
sudo systemctl status containerd
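As a quick sanity check, ctr (bundled with the containerd.io package) should reach the running daemon:

sudo ctr version   # Prints both client and server versions when containerd is healthy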

Step 3: Install Kubernetes Components

Install kubeadm, kubelet, and kubectl on ALL nodes.

Add Kubernetes Repository

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF

Install Kubernetes Packages

sudo dnf install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
sudo systemctl enable kubelet

Note: Don't start kubelet yet. It will start automatically after cluster initialization.
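Confirm the installed versions match on every node before initializing the cluster:

kubeadm version -o short
kubectl version --client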

Step 4: Configure Firewall

Open required ports on all nodes.

Master Node Firewall Rules

sudo firewall-cmd --permanent --add-port=6443/tcp      # Kubernetes API server
sudo firewall-cmd --permanent --add-port=2379-2380/tcp # etcd server client API
sudo firewall-cmd --permanent --add-port=10250/tcp     # Kubelet API
sudo firewall-cmd --permanent --add-port=10259/tcp     # kube-scheduler
sudo firewall-cmd --permanent --add-port=10257/tcp     # kube-controller-manager
sudo firewall-cmd --permanent --add-port=8472/udp      # Flannel VXLAN overlay
sudo firewall-cmd --reload

Worker Node Firewall Rules

sudo firewall-cmd --permanent --add-port=10250/tcp       # Kubelet API
sudo firewall-cmd --permanent --add-port=30000-32767/tcp # NodePort Services
sudo firewall-cmd --permanent --add-port=8472/udp        # Flannel VXLAN overlay
sudo firewall-cmd --reload

Step 5: Initialize Kubernetes Master Node

Run these commands ONLY on the master node.

Initialize the Cluster

sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.1.10

Replace 192.168.1.10 with your master node's actual IP address.

This process takes 2-5 minutes. Upon completion, you'll see output containing:

  • Setup commands for kubectl configuration
  • kubeadm join command - Save this! You'll need it to add worker nodes

Configure kubectl Access

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Verify Master Node Status

kubectl get nodes
kubectl get pods --all-namespaces

The master node will show "NotReady" status until we install a pod network.

Step 6: Install Pod Network (Flannel)

Kubernetes needs a pod network addon for inter-pod communication. We'll use Flannel.

kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

Wait 1-2 minutes, then verify:

kubectl get pods -n kube-flannel
kubectl get nodes

The master node should now show "Ready" status.

Step 7: Join Worker Nodes to Cluster

On EACH worker node, run the kubeadm join command from Step 5 initialization output:

sudo kubeadm join 192.168.1.10:6443 --token abc123.xyz789token \
  --discovery-token-ca-cert-hash sha256:longhashvalue...

Lost your join command? Generate a new token on the master:

sudo kubeadm token create --print-join-command

Verify Cluster Status

On the master node:

kubectl get nodes -o wide

# Should show:
# NAME          STATUS   ROLES           AGE   VERSION
# k8s-master    Ready    control-plane   10m   v1.29.0
# k8s-worker1   Ready    <none>          5m    v1.29.0
# k8s-worker2   Ready    <none>          5m    v1.29.0

Step 8: Deploy Test Application

Let's verify the cluster works by deploying nginx.

Create Deployment

kubectl create deployment nginx --image=nginx --replicas=3

Expose as Service

kubectl expose deployment nginx --port=80 --type=NodePort

Check Deployment Status

kubectl get deployments
kubectl get pods -o wide
kubectl get services

# Get the NodePort
kubectl get svc nginx

Access the Application

If the service shows NodePort as 30080:

curl http://192.168.1.11:30080  # Using worker node IP

You should see the nginx welcome page HTML.
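When you're done testing, clean up the demo resources:

kubectl delete service nginx
kubectl delete deployment nginx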

Step 9: Install Kubernetes Dashboard (Optional)

The Kubernetes Dashboard provides a web UI for cluster management.

Deploy Dashboard

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml

Create Admin User

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
EOF

Get Access Token

kubectl -n kubernetes-dashboard create token admin-user

Save this token - you'll need it to log in to the dashboard.

Access Dashboard

kubectl proxy

Access at: http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/

Paste the token from the previous step to log in.

Step 10: Monitoring and Management Tools

Install Metrics Server

Required for kubectl top commands:

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

# Verify
kubectl top nodes
kubectl top pods
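Note: on kubeadm clusters, the kubelet's self-signed certificates often keep metrics-server from becoming ready. For test clusters, a common workaround is to let it skip kubelet TLS verification:

kubectl patch deployment metrics-server -n kube-system --type=json \
  -p='[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--kubelet-insecure-tls"}]'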

Install Helm Package Manager

curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
helm version
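To verify Helm works end to end, add a chart repository and search it (the Bitnami repo is just one common example):

helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm search repo bitnami/nginx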

Production Best Practices

High Availability Configuration

For production, run multiple master nodes (a minimal sketch follows this list):

  • 3 or 5 master nodes (odd numbers for etcd quorum)
  • Load balancer in front of API servers
  • Separate etcd cluster for large deployments
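Here is a minimal sketch of bootstrapping the first control-plane node behind a load balancer; lb.example.com is a hypothetical DNS name for your API-server load balancer:

# --control-plane-endpoint points at the load balancer, not a single node
# --upload-certs lets additional control-plane nodes join via kubeadm join --control-plane
sudo kubeadm init \
  --control-plane-endpoint "lb.example.com:6443" \
  --upload-certs \
  --pod-network-cidr=10.244.0.0/16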

Security Hardening

  • Enable RBAC: Already enabled by default in Kubernetes 1.29
  • Network policies: Implement pod-to-pod access controls (example after this list)
  • Pod Security Standards: Enforce restricted pod security policies
  • Certificate rotation: Automate certificate renewal
  • Audit logging: Enable API server audit logs
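As a network policy example, this default-deny manifest blocks all ingress traffic to pods in a namespace until explicitly allowed. Keep in mind that Flannel alone does not enforce NetworkPolicy; you need a policy-capable CNI such as Calico or Cilium:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: default
spec:
  podSelector: {}   # Empty selector matches every pod in the namespace
  policyTypes:
  - Ingress         # No ingress rules defined, so all inbound traffic is denied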

Backup and Disaster Recovery

# Backup etcd data
sudo ETCDCTL_API=3 etcdctl snapshot save snapshot.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key

Resource Management

Set resource limits for all pods:

resources:
  requests:
    memory: "64Mi"
    cpu: "250m"
  limits:
    memory: "128Mi"
    cpu: "500m"

Troubleshooting Common Issues

Pods Stuck in Pending State

kubectl describe pod <pod-name>
# Check for: Insufficient CPU/memory, node selector issues, or PVC binding problems

Node NotReady Status

kubectl describe node <node-name>
sudo systemctl status kubelet
sudo journalctl -u kubelet -f

Network Issues

kubectl get pods -n kube-flannel
kubectl logs -n kube-flannel <flannel-pod-name>

Certificate Expiration

sudo kubeadm certs check-expiration
sudo kubeadm certs renew all
# After renewal, restart the control-plane pods so they load the new certificates

Upgrading Kubernetes on Rocky Linux 9

Always upgrade one minor version at a time (1.28 → 1.29 → 1.30).

Upgrade Master Node

# Update kubeadm
sudo dnf upgrade -y kubeadm-1.29.x --disableexcludes=kubernetes

# Verify upgrade plan
sudo kubeadm upgrade plan

# Apply upgrade
sudo kubeadm upgrade apply v1.29.x

# Upgrade kubelet and kubectl
sudo dnf upgrade -y kubelet-1.29.x kubectl-1.29.x --disableexcludes=kubernetes
sudo systemctl daemon-reload
sudo systemctl restart kubelet

Upgrade Worker Nodes

# Drain node (run from the master)
kubectl drain k8s-worker1 --ignore-daemonsets --delete-emptydir-data

# On worker node
sudo dnf upgrade -y kubeadm-1.29.x kubelet-1.29.x kubectl-1.29.x --disableexcludes=kubernetes
sudo kubeadm upgrade node
sudo systemctl restart kubelet

# Uncordon node
kubectl uncordon k8s-worker1

Cost Optimization for Kubernetes

  • Use Rocky Linux instead of RHEL: Save $349-$1,299 per node annually
  • Right-size nodes: Start with smaller instances, scale as needed
  • Implement autoscaling: Cluster Autoscaler and Horizontal Pod Autoscaler
  • Resource requests/limits: Prevent resource waste
  • Spot/preemptible instances: For non-critical workloads

Next Steps

Now that you have a working Kubernetes cluster on Rocky Linux 9:

  1. Deploy real applications: Start with stateless apps before databases
  2. Implement CI/CD: Integrate with Jenkins, GitLab CI, or GitHub Actions
  3. Add persistent storage: Configure NFS, Ceph, or cloud storage classes
  4. Set up monitoring: Install Prometheus and Grafana
  5. Implement logging: Deploy EFK (Elasticsearch, Fluentd, Kibana) stack
  6. Service mesh: Consider Istio or Linkerd for advanced networking

Conclusion

You now have a fully functional Kubernetes cluster running on Rocky Linux 9! This setup provides an enterprise-grade container orchestration platform with the stability of RHEL-compatible systems and zero licensing costs.

Rocky Linux 9's 10-year support lifecycle ensures your Kubernetes infrastructure remains stable and secure for years to come. Whether you're running development workloads or production applications, this foundation gives you the flexibility and reliability needed for modern cloud-native deployments.

What's your next Kubernetes project? Share your experiences deploying Kubernetes on Rocky Linux in the comments below!


About Ramesh Sundararamaiah

Red Hat Certified Architect

Expert in Linux system administration, DevOps automation, and cloud infrastructure. Specializing in Red Hat Enterprise Linux, CentOS, Ubuntu, Docker, Ansible, and enterprise IT solutions.
