Deploy NFS Server

Steps to deploy an NFS server on Rancher

Overview

This document provides a detailed guide for deploying an NFS (Network File System) server on Rancher. Follow the steps below to set up and configure the NFS server, and ensure it integrates smoothly with your Rancher environment.

Pre-requisites

Before starting with the deployment and configuration of the NFS (Network File System) server on Rancher, ensure the following prerequisites are met:

  1. Rancher Cluster: Ensure you have an operational Rancher User Management cluster.

  2. Access to a Server: A server (EC2 instance or similar) that will host the NFS server, with root or sudo privileges.

  3. Network Configuration: Ensure the server and the Rancher cluster can communicate over the network. Specifically, ensure the NFS server security group allows traffic on TCP port 2049.

  4. Disk for NFS Share: A dedicated disk available on the server for use as the NFS share.

  5. Installed CLI Tools: Ensure the following CLI tools are installed and accessible:

    • yum (or another package manager if using a different OS)

    • kubectl (configured with access to your Rancher cluster)

  6. Service Account: Ensure the nfs-client-provisioner service account is created in your Rancher cluster.

  7. NFS Utilities: Ensure NFS utilities can be installed on the server.

  8. NFS Common: Ensure the NFS Common Library is installed on the worker nodes of Rancher.

Ensure that all these prerequisites are met before proceeding with the deployment steps.

Steps

The steps below provide a comprehensive guide for deploying and configuring an NFS server on Rancher, ensuring proper integration for seamless operation.

Step 1: Install NFS Utilities

Log in to your server and switch to the root user:

[ec2-user@ip-10-0-14-162 ~]$ sudo -i

Install the necessary NFS utilities:

[root@ip-10-0-14-162 ~]# yum install -y nfs-utils

Verify that the installation completed successfully. Example output (here the package was already installed):

Last metadata expiration check: 0:14:06 ago on Thu May 30 08:24:15 2024.
Package nfs-utils-1:2.5.4-2.rc3.amzn2023.0.3.x86_64 is already installed.
Dependencies resolved.
Nothing to do.
Complete!
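
If you want an explicit confirmation that the package is present, you can query it directly (optional):

[root@ip-10-0-14-162 ~]# rpm -q nfs-utils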

Step 2: Enable and Start NFS Services

Enable and start the rpcbind and nfs-server services:

[root@ip-10-0-14-162 ~]# systemctl enable rpcbind
[root@ip-10-0-14-162 ~]# systemctl enable nfs-server
[root@ip-10-0-14-162 ~]# systemctl start rpcbind
[root@ip-10-0-14-162 ~]# systemctl start nfs-server
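
To confirm both services are running before continuing, you can check their state (optional):

[root@ip-10-0-14-162 ~]# systemctl is-active rpcbind nfs-server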

Step 3: Create the NFS Share Directory

Create the directory that will be shared via NFS:

[root@ip-10-0-14-162 ~]# mkdir -p /nfs-share

Step 4: Prepare the Disk for NFS

Identify the disk to be used for the NFS share and format it with XFS (note that formatting destroys any existing data on the disk):

[root@ip-10-0-14-162 ~]# blkid /dev/sdb
[root@ip-10-0-14-162 ~]# mkfs.xfs /dev/sdb

Example output of formatting:

meta-data=/dev/sdb               isize=512    agcount=16, agsize=3276800 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1    bigtime=1 inobtcount=1
data     =                       bsize=4096   blocks=52428800, imaxpct=25
         =                       sunit=1      swidth=1 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=25600, version=2
         =                       sectsz=512   sunit=1 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

Step 5: Verify and Repair the Filesystem

Check for any hardware errors related to the device:

dmesg | grep sdb
journalctl -xe | grep sdb

If necessary, check and repair the filesystem:

xfs_repair /dev/sdb
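
The export in the next step serves /nfs-share, so the formatted disk needs to be mounted on that directory. A minimal sketch, assuming /dev/sdb and the /nfs-share directory created earlier (using the UUID reported by blkid in /etc/fstab is more robust than the device path):

[root@ip-10-0-14-162 ~]# mount /dev/sdb /nfs-share
[root@ip-10-0-14-162 ~]# echo '/dev/sdb  /nfs-share  xfs  defaults,nofail  0 2' >> /etc/fstab
[root@ip-10-0-14-162 ~]# df -h /nfs-share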

Step 6: Configure NFS Exports

Edit the /etc/exports file to export the /nfs-share directory:

[root@ip-10-0-14-162 ~]# vi /etc/exports

Add the following line:

/nfs-share  *(rw,sync,no_subtree_check,no_root_squash,insecure)

Export the directory:

[root@ip-10-0-14-162 ~]# exportfs -rv

Verify the export:

[root@ip-10-0-14-162 ~]# showmount -e

Example output:

Export list for ip-10-0-14-162.ap-south-1.compute.internal:
/nfs-share *
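
Optionally, you can test the export from any client that can reach the server and has the NFS client utilities installed. Replace 10.0.14.162 with your NFS server's private IP; the mount point below is illustrative:

mkdir -p /mnt/nfs-test
mount -t nfs 10.0.14.162:/nfs-share /mnt/nfs-test
umount /mnt/nfs-test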

Step 7: Update NFS Server Security Group

Update the NFS server security group to allow TCP port 2049.
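
For example, if you manage the security group with the AWS CLI, a rule like the following allows NFS traffic from the cluster's network (the group ID and CIDR below are placeholders; the same rule can also be added through the AWS console):

aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 2049 \
    --cidr 10.0.0.0/16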

Step 8: Apply the NFS-Common Library on Rancher Worker Nodes

Install the nfs-common package on the Rancher worker nodes so they can mount NFS volumes. The following DaemonSet runs the installation on every node:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: install-nfs-common
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: install-nfs-common
  template:
    metadata:
      labels:
        app: install-nfs-common
    spec:
      hostPID: true
      hostNetwork: true
      containers:
      - name: install
        image: busybox:latest
        command: ["/bin/sh", "-c"]
        args:
          - nsenter --mount=/proc/1/ns/mnt -- /bin/sh -c 'apt update && apt install -y nfs-common' && sleep infinity
        securityContext:
          privileged: true
      restartPolicy: Always

This DaemonSet installs the nfs-common package on every Kubernetes worker node; because it uses apt, it works on Ubuntu (Debian-based) worker nodes only.
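
Apply the DaemonSet and wait for it to roll out on every node (the file name below is just an example for the manifest above):

kubectl apply -f install-nfs-common.yaml
kubectl -n kube-system rollout status daemonset/install-nfs-common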

Step 9: Apply the NFS Manifest

Apply the NFS provisioner manifest to your Rancher cluster. Before applying it, update the NFS server private IP (and share path, if different) in the Deployment section at the end of the manifest.

[root@ip-10-0-14-162 ~]# kubectl apply -f nfs.yaml

Contents of nfs.yaml:

apiVersion: v1
kind: Namespace
metadata:
  name: nfs
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: nfs
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: ["coordination.k8s.io"]
    resources: ["leases"]
    verbs: ["get", "list", "watch", "create", "update", "delete"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "delete"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: nfs
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: nfs-test
parameters:
  archiveOnDelete: "false"
reclaimPolicy: Delete
volumeBindingMode: Immediate
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  namespace: nfs
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-client-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: nfs-test
            - name: NFS_SERVER
              value: 10.0.14.162 # Replace with your NFS server IP
            - name: NFS_PATH
              value: /nfs-share # Replace with your NFS share path
      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.0.14.162 # Replace with your NFS server IP
            path: /nfs-share # Replace with your NFS share path
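
After applying the manifest, confirm that the provisioner pod is running:

kubectl -n nfs get pods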

Step 10: Check for the NFS Storage Class Creation

kubectl get storageclass
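
To confirm that dynamic provisioning works end to end, you can create a small test PersistentVolumeClaim against the nfs storage class and check that it binds. The claim name, file name, and size below are illustrative:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-test-claim
spec:
  storageClassName: nfs
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi

Save the manifest as nfs-test-claim.yaml, apply it, and verify the claim reaches the Bound state before deleting it:

kubectl apply -f nfs-test-claim.yaml
kubectl get pvc nfs-test-claim
kubectl delete pvc nfs-test-claim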
