Create Infrastructure On-Premise
Overview
This section outlines the infrastructure setup for a data centre that hosts a Rancher-managed RKE2 Kubernetes cluster along with PostgreSQL 15.12, NFS-backed persistent storage, and a MinIO S3-compatible object store. All virtual machines run on Ubuntu 24.04 and are connected exclusively through private networking. A bastion server, equipped with HAProxy, serves as the public entry point for accessing the cluster.
Pre-requisites
The following components are required to complete this installation:
Rancher RKE2
PostgreSQL 15.12
NFS
MinIO
Bastion server
Scope & Architecture
The objective is to provision the following components in an on‑premise environment
High‑availability RKE2 Kubernetes cluster (3 master + 4 worker nodes) managed by Rancher Manager.
PostgreSQL 15.12 database on a dedicated VM.
NFS server and CSI driver to back Kubernetes PersistentVolumeClaims.
MinIO (AIStor) S3‑compatible object storage service.
Bastion / HAProxy node to expose HTTP(S) traffic from the public domain to the private cluster.
RKE2 provides a highly available control plane when deployed with three server nodes and a fixed registration/load‑balancer address in front of them. NFS CSI driver runs as a Kubernetes add‑on to dynamically provision volumes for persistent workloads. MinIO provides S3‑like object storage for backups and application data. All nodes are connected to the same private subnet (for example, 10.0.0.0/24). Only the bastion/HAProxy node has a public IP address. SSH to all nodes is done via the bastion.
Virtual Machine Sizing
This section outlines the operating system and basic assumptions applicable to all virtual machines used in the setup. Every VM runs Ubuntu Server 24.04 LTS to ensure long-term stability and security updates. Hostnames and IP addresses are not fixed in this guide and should be assigned according to the data centre’s naming standards and network layout.
RKE2 Master Nodes: 3 VMs, 4 vCPU, 8 GB RAM each. Control plane + etcd; no public IP.
RKE2 Worker Nodes: 4 VMs, 4 vCPU, 16 GB RAM each. Application workloads; no public IP.
PostgreSQL DB Node: 1 VM, 4 vCPU, 8 GB RAM. PostgreSQL 15.12; single instance on local storage.
Storage Node (NFS + MinIO): 1 VM, 4 vCPU, 8 GB RAM. 1 TB data disk attached (≈800 GB NFS, ≈100 GB MinIO).
Bastion & HAProxy Node: 1 VM, 2 vCPU, 4 GB RAM. Public IP; SSH entry point and HTTP(S) load balancer to cluster.
Networking Requirements
A private subnet and addressing must be defined in SDC (for example, 10.0.0.0/24).
A DNS record should be planned for the public domain (for example, *.example.gov mapped to the HAProxy public IP).
Outbound internet access (or controlled repository mirrors) should be available from all nodes to fetch packages and container images.
NTP should be configured and time synchronised across all nodes.
Passwordless SSH should be set up from the bastion node to all other nodes for operational convenience.
Specific firewall rules must be opened inside the data centre network to allow the following:
RKE2 control plane and registration traffic (6443/tcp, 9345/tcp)
node-to-node traffic (kubelet 10250/tcp, etcd 2379-2380/tcp, CNI VXLAN 8472/udp)
NFS traffic (TCP/UDP 2049)
PostgreSQL 5432/tcp
HTTP(S) traffic between HAProxy and Kubernetes ingress nodes.
On each Ubuntu VM, perform the following baseline steps:
Update packages: sudo apt update && sudo apt -y upgrade
Set hostnames and /etc/hosts entries for all nodes.
Disable swap: sudo swapoff -a and remove swap entries from /etc/fstab.
Install basic tools: sudo apt install -y vim curl wget net-tools jq htop
Steps
A. Cluster Setup
Refer to the Rancher "Setting up Infrastructure for a High Availability RKE2 Kubernetes Cluster" guide for detailed reference, then follow the steps below to set up the RKE2 HA Kubernetes cluster for the data centre.
Prepare Internal Load Balancer Address
RKE2 requires a fixed registration address in front of the server nodes. On the bastion/HAProxy node, configure a private IP (for example, 10.0.0.10) or use the HAProxy private interface IP as the RKE2 server URL. This address forwards TCP 9345 (RKE2 node registration) and TCP 6443 (Kubernetes API) to the three master nodes.
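A minimal sketch of the corresponding control-plane forwarding section of /etc/haproxy/haproxy.cfg is shown below; the master IP addresses (10.0.0.11 to 10.0.0.13) are placeholders, and TCP passthrough is used so that TLS terminates on the RKE2 servers themselves.
# /etc/haproxy/haproxy.cfg (excerpt) - fixed registration address for RKE2
frontend rke2_api
    bind 10.0.0.10:6443
    mode tcp
    default_backend rke2_masters_api
backend rke2_masters_api
    mode tcp
    balance roundrobin
    server master1 10.0.0.11:6443 check
    server master2 10.0.0.12:6443 check
    server master3 10.0.0.13:6443 check
frontend rke2_register
    bind 10.0.0.10:9345
    mode tcp
    default_backend rke2_masters_register
backend rke2_masters_register
    mode tcp
    balance roundrobin
    server master1 10.0.0.11:9345 check
    server master2 10.0.0.12:9345 check
    server master3 10.0.0.13:9345 check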
Install RKE2 Server on Master Nodes
Repeat the following steps on each of the three master nodes.
Install RKE2 server binaries:
curl -sfL https://get.rke2.io | sudo INSTALL_RKE2_TYPE=server sh -
Configure the first master node
Create /etc/rancher/rke2/config.yaml with at least:
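A minimal example is shown below; the token and tls-san values are placeholders to be replaced with the data centre's own values.
# /etc/rancher/rke2/config.yaml on the first master node
token: <cluster-shared-token>    # strong shared secret, reused by joining nodes
tls-san:
  - 10.0.0.10                    # fixed registration / load-balancer address
write-kubeconfig-mode: "0644"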
Start and enable the RKE2 server service:
sudo systemctl enable rke2-server && sudo systemctl start rke2-server
Once the first master is up, copy /var/lib/rancher/rke2/server/node-token. This token will be used by the remaining master and worker nodes.
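On the second and third master nodes, config.yaml additionally points at the fixed registration address before rke2-server is started; a sketch using the example address from above:
# /etc/rancher/rke2/config.yaml on the remaining master nodes
server: https://10.0.0.10:9345    # RKE2 supervisor (registration) port
token: <cluster-shared-token>     # value from /var/lib/rancher/rke2/server/node-token
tls-san:
  - 10.0.0.10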
Install RKE2 Agent on Worker Nodes
On each of the four worker nodes, do the following:
Install RKE2 agent:
curl -sfL https://get.rke2.io | sudo INSTALL_RKE2_TYPE=agent sh -
Configure /etc/rancher/rke2/config.yaml:
server: https://10.0.0.10:9345
token: <cluster-shared-token>
Start the RKE2 agent:
sudo systemctl enable rke2-agent && sudo systemctl start rke2-agent
After all agents join, copy /etc/rancher/rke2/rke2.yaml from any master node to your local machine as kubeconfig and use it to access the cluster with kubectl.
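One way to do this, assuming 10.0.0.10 is the fixed registration address and the file paths shown are placeholders:
# copy the admin kubeconfig from a master node
scp <user>@<master1-ip>:/etc/rancher/rke2/rke2.yaml ~/.kube/rke2-dc.yaml
# the file points at 127.0.0.1; replace it with the registration address
sed -i 's/127.0.0.1/10.0.0.10/' ~/.kube/rke2-dc.yaml
export KUBECONFIG=~/.kube/rke2-dc.yaml
kubectl get nodes -o wide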
B. Deploy NFS Storage Backend
The storage node is set up as an NFS server that provides shared storage to the Kubernetes cluster. Kubernetes uses the NFS CSI driver to automatically create and manage PersistentVolumes (PVs) when applications request storage. These volumes are provisioned through a StorageClass, allowing workloads to claim storage dynamically without manual volume creation.
Install NFS Server On Ubuntu
Perform the steps below on the storage node:
sudo apt update
sudo apt install nfs-kernel-server -y
sudo mkdir -p /mnt/nfs01/NFS
sudo chown -R nobody:nogroup /mnt/nfs01/NFS
sudo chmod 777 /mnt/nfs01/NFS
Edit /etc/exports and add (replace <client-subnet> with your Kubernetes node subnet, e.g. 10.0.0.0/24):
/mnt/nfs01/NFS <client-subnet>(rw,sync,fsid=1,no_subtree_check,insecure,no_root_squash,anonuid=1001,anongid=1001)
Apply the configuration:
sudo exportfs -rav
Enable and restart the NFS service:
sudo systemctl enable nfs-server
sudo systemctl restart nfs-server
Verify exports: sudo exportfs -v
Install NFS CSI Driver
On a machine with kubectl and Helm configured for the RKE2 cluster, perform the following:
helm repo add csi-driver-nfs https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/master/charts
Update the Helm repo:
helm repo update
helm install csi-driver-nfs csi-driver-nfs/csi-driver-nfs --namespace kube-system --version 4.12.0
kubectl get pods -n kube-system -l app.kubernetes.io/name=csi-driver-nfs
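If the install did not create a StorageClass, define one for the NFS share and optionally mark it as the cluster default; the sketch below assumes the storage node private IP is 10.0.0.30 (a placeholder) and uses the export path configured earlier.
# nfs-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-csi
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"   # make this the default StorageClass
provisioner: nfs.csi.k8s.io
parameters:
  server: 10.0.0.30          # NFS server (storage node) private IP, placeholder
  share: /mnt/nfs01/NFS      # export created on the storage node
reclaimPolicy: Delete
volumeBindingMode: Immediate
mountOptions:
  - nfsvers=4.1
Apply it with kubectl apply -f nfs-storageclass.yaml.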
Test Persistent Volume Claim (PVC) and Pod
Create a test PVC and pod (test-nfs-pvc.yaml) to verify end-to-end NFS provisioning:
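A minimal test-nfs-pvc.yaml could look like the sketch below; the PVC size and busybox image are arbitrary choices for the test, and the StorageClass name assumes the nfs-csi class defined above.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-csi
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: test-nfs-pod
spec:
  containers:
    - name: writer
      image: busybox
      command: ["sh", "-c", "echo 'hello from NFS' > /data/hello.txt && sleep 3600"]
      volumeMounts:
        - name: nfs-vol
          mountPath: /data
  volumes:
    - name: nfs-vol
      persistentVolumeClaim:
        claimName: test-nfs-pvc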
Execute the commands below:
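One possible sequence, assuming the test-nfs-pvc.yaml sketch above:
kubectl apply -f test-nfs-pvc.yaml
kubectl get pvc test-nfs-pvc    # STATUS should become Bound
kubectl get pod test-nfs-pod    # should reach Running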
On the NFS server, verify that a directory for the PVC has been created under /mnt/nfs01/NFS and that it contains the hello.txt file written by the pod.
C. Configure MinIO Object Storage
MinIO will be deployed on the storage node using the remaining capacity of the 1 TB disk (approximately 100 GB).
Install and Configure MinIO
Follow the MinIO AIStor Ubuntu installation guide. At a high level, the steps to be followed are outlined below.
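The sketch below shows the general shape of such an installation using the standalone minio server binary; the data directory, credentials and ports are placeholders, and the AIStor guide remains the authoritative reference.
# download and install the MinIO server binary
wget https://dl.min.io/server/minio/release/linux-amd64/minio
chmod +x minio && sudo mv minio /usr/local/bin/
# dedicated data directory on the 1 TB disk (placeholder path, roughly 100 GB reserved)
sudo mkdir -p /mnt/nfs01/minio
sudo useradd -r -s /usr/sbin/nologin minio-user || true
sudo chown -R minio-user:minio-user /mnt/nfs01/minio
# start MinIO (for production, wrap this in a systemd unit as per the guide)
sudo -u minio-user env MINIO_ROOT_USER=<admin-user> MINIO_ROOT_PASSWORD=<strong-password> \
  minio server /mnt/nfs01/minio --address ":9000" --console-address ":9001"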
D. PostgreSQL 15.12 Database Node
A dedicated VM is used for PostgreSQL 15.12. The data directory is placed on a custom-mounted disk to separate database storage from the OS disk.
Install PostgreSQL 15
Update APT repositories by using the command
sudo apt update
Add the PostgreSQL APT repository by using the command
sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main" > /etc/apt/sources.list.d/pgdg.list'
Import the PostgreSQL GPG key:
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | \
sudo tee /etc/apt/trusted.gpg.d/pgdg.asc > /dev/null
Update the package list once again by using the command
sudo apt update
Install PostgreSQL 15 (15.12 or later patch version)
sudo apt install -y postgresql-15 postgresql-client-15
Finally, confirm installation by running the following
sudo systemctl status postgresql
Update PostgreSQL Configuration to Use New Data Directory
Edit the “postgresql.conf” file with the changes suggested below
sudo nano /etc/postgresql/15/main/postgresql.conf
Find the line # data_directory = '/var/lib/postgresql/15/main'
Uncomment and change it to
data_directory = '/mnt/data/postgresql/15/main'
Persist Mount and Allow Remote Access
If /mnt/data is on a dedicated disk or partition, add it to /etc/fstab so it persists after reboot:
sudo nano /etc/fstab
UUID=<UUID of your disk> /mnt/data ext4 defaults 0 2
Use the commands below to get the UUID and mount the disk:
lsblk -f
sudo mount -a
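With /mnt/data mounted, move the existing cluster files to the new data_directory before PostgreSQL is restarted; one possible sequence (paths follow the data_directory set above):
sudo systemctl stop postgresql
sudo mkdir -p /mnt/data/postgresql/15
sudo rsync -av /var/lib/postgresql/15/main /mnt/data/postgresql/15/
sudo chown -R postgres:postgres /mnt/data/postgresql
sudo chmod 700 /mnt/data/postgresql/15/main
sudo systemctl start postgresql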
To allow PostgreSQL access from other nodes (for example, 10.0.0.0/24), edit postgresql.conf as described below:
sudo nano /etc/postgresql/15/main/postgresql.conf
Set the following parameters:
listen_addresses = '*'
password_encryption = 'md5'
Edit pg_hba.conf as shown below:
sudo nano /etc/postgresql/15/main/pg_hba.conf
Add the line: host all all 10.0.0.0/24 md5
Restart PostgreSQL to apply changes
sudo systemctl restart postgresql
Create database roles and users
Follow the steps below to create a PostgreSQL superadmin user, create a database owned by that superadmin, then create a restricted read/write database user and grant the required privileges (both for existing objects and for future objects).
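A sketch of the superadmin and database creation, run with psql as the postgres OS user; role names and passwords are placeholders:
sudo -u postgres psql
CREATE ROLE <superadmin_user> WITH LOGIN SUPERUSER PASSWORD '<superadmin_password>';
CREATE DATABASE <db_name> OWNER <superadmin_user>;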
Create a database user with read/write access only:
CREATE USER <db_rw_user> WITH PASSWORD '<db_rw_password>';
GRANT CONNECT ON DATABASE <db_name> TO <db_rw_user>;
GRANT USAGE ON SCHEMA public TO <db_rw_user>;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO <db_rw_user>;
GRANT CREATE ON SCHEMA public TO <db_rw_user>;
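To cover future objects as well (tables and sequences created later in the public schema by the superadmin), default privileges can be granted; a sketch using the same placeholders:
ALTER DEFAULT PRIVILEGES FOR ROLE <superadmin_user> IN SCHEMA public
  GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO <db_rw_user>;
ALTER DEFAULT PRIVILEGES FOR ROLE <superadmin_user> IN SCHEMA public
  GRANT USAGE, SELECT ON SEQUENCES TO <db_rw_user>;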
E. Configure Bastion and HAProxy
The bastion VM serves as the SSH entry point and also runs HAProxy to expose HTTP(S) traffic from the public internet to the private RKE2 cluster. The configuration is similar to the existing SDC HAProxy setup.
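A minimal sketch of the HTTP(S) forwarding sections of /etc/haproxy/haproxy.cfg is shown below; the worker IP addresses (10.0.0.21 to 10.0.0.24) are placeholders from the example subnet, TCP passthrough is assumed so that TLS terminates at the NGINX ingress controller, and the actual SDC configuration should be followed where it differs.
# /etc/haproxy/haproxy.cfg (excerpt) - public HTTP(S) to the RKE2 ingress nodes
frontend http_in
    bind *:80
    mode tcp
    default_backend ingress_http
backend ingress_http
    mode tcp
    balance roundrobin
    server worker1 10.0.0.21:80 check
    server worker2 10.0.0.22:80 check
    server worker3 10.0.0.23:80 check
    server worker4 10.0.0.24:80 check
frontend https_in
    bind *:443
    mode tcp
    default_backend ingress_https
backend ingress_https
    mode tcp
    balance roundrobin
    server worker1 10.0.0.21:443 check
    server worker2 10.0.0.22:443 check
    server worker3 10.0.0.23:443 check
    server worker4 10.0.0.24:443 check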
Restart HAProxy and configure the DNS
After updating the configuration file, restart HAProxy by executing the command below:
sudo systemctl restart haproxy
Configure public DNS records so that the desired domain (for example, *.example.gov) points to the bastion public IP. The NGINX ingress controller in the RKE2 cluster should be configured with host‑based rules that match these domains.
Validation Checklist
Verify the installation and readiness status of the infrastructure using the checklist below; example spot-check commands follow the list.
All RKE2 master and worker nodes show Ready in kubectl get nodes.
Default StorageClass is set to NFS, and sample PVCs bind successfully.
MinIO endpoint is reachable from within the cluster, and S3 buckets can be created.
PostgreSQL 15.12 is running, and connectivity from Kubernetes pods is verified.
HAProxy forwards HTTP(S) traffic correctly to the Kubernetes ingress endpoints.
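The spot checks below are one way to exercise these items from an admin machine; IP addresses, credentials and image names are placeholders.
kubectl get nodes -o wide
kubectl get storageclass
kubectl get pods -n kube-system -l app.kubernetes.io/name=csi-driver-nfs
# PostgreSQL reachability from inside the cluster (prompts for the password)
kubectl run pg-check --rm -it --restart=Never --image=postgres:15 --command -- \
  psql "host=<postgres-node-ip> user=<db_rw_user> dbname=<db_name>" -c "SELECT version();"
# MinIO liveness endpoint from inside the cluster
kubectl run s3-check --rm -it --restart=Never --image=curlimages/curl --command -- \
  curl -sI http://<storage-node-ip>:9000/minio/health/live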
If all the validations pass, that completes the base infrastructure setup for an on‑premise environment using Rancher RKE2, PostgreSQL, NFS and MinIO.