
CI/CD Setup On SDC

Steps to set up CI/CD on SDC

Overview

Kubespray is a composition of Ansible playbooks, inventory, provisioning tools, and domain knowledge for generic OS/Kubernetes cluster configuration management tasks. Kubespray provides:

  • a highly available cluster

  • composable attributes

  • support for most popular Linux distributions

  • continuous-integration tests

Pre-requisites

  1. Fork the repos below to your GitHub Organization account: https://github.com/egovernments/DIGIT-DevOps

Hardware

  • One Bastion machine to run Kubespray

  • One HAProxy machine with a public IP to act as the load balancer (CPU: 2 cores, Memory: 4 GB)

  • One machine to act as the master node (CPU: 2 cores, Memory: 4 GB)

  • One machine to act as the worker node (CPU: 8 cores, Memory: 16 GB)

Software

  1. Kubernetes nodes

    1. Ubuntu 18.04

    2. SSH

    3. Privileged user

Preparing The Nodes

Run the following steps on all nodes.

Install Python

Ansible needs Python to be installed on all the machines.

apt-get update && apt-get install python3-pip -y

Disable Swap

The kubelet will not run with swap enabled, so turn it off on every node using the swapoff commands listed at the end of this page.
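A minimal sketch of this step, with a quick check that it worked (the swapoff and sed commands are the same ones shown in the command listing at the end of this page; they require root, so they are commented out here):

```shell
# Turn off swap immediately and remove the fstab entry so it stays off after
# a reboot (requires root; same commands as in the listing at the end of the page)
# sudo swapoff -a
# sudo sed -i '/ swap /d' /etc/fstab

# Verify: SwapTotal in /proc/meminfo should read 0 once swap is disabled
swap_total=$(awk '/SwapTotal/ {print $2}' /proc/meminfo)
echo "SwapTotal (kB): $swap_total"
```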

Set up SSH using key-based authentication

All the machines should be on the same network, with Ubuntu or CentOS installed.

An SSH key should be generated on the Bastion machine and copied to every server that is part of your inventory.

  • Generate the SSH key: ssh-keygen -t rsa

  • Copy the public key to all nodes.
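The key distribution can be sketched as a small loop; the node IPs below are placeholders taken from this page's example inventory, and the actual ssh-copy-id call is commented out until the hosts are reachable:

```shell
# Node IPs are placeholders -- replace with the hosts in your inventory
NODES="10.67.53.158 10.67.53.159"

# Generate a key pair on the Bastion if one does not exist yet
if command -v ssh-keygen >/dev/null 2>&1 && [ ! -f "$HOME/.ssh/id_rsa" ]; then
  mkdir -p "$HOME/.ssh"
  ssh-keygen -t rsa -N "" -f "$HOME/.ssh/id_rsa" -q
fi

# Push the public key to every node (uncomment once the hosts are reachable)
copied=0
for ip in $NODES; do
  echo "would run: ssh-copy-id root@$ip"
  # ssh-copy-id "root@$ip"
  copied=$((copied + 1))
done
echo "prepared $copied hosts"
```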

Set up the Ansible controller machine and Kubespray

  • Clone the official repository

  • Install dependencies from requirements.txt

  • Create Inventory

Here mycluster is the custom configuration name; replace it with whatever name you would like to assign to the current cluster.

Create inventory using an inventory generator.

Once it runs, you will see an inventory file (hosts.yaml) that looks like the sample shown at the end of this page.

  • Review and change parameters under inventory/mycluster/group_vars

  • Deploy Kubespray with the Ansible playbook, running it as the ubuntu user

    • The --become option is required for tasks such as writing SSL keys in /etc/, installing packages, and interacting with various system daemons.

    • Note: Without --become, the playbook will fail to run!

With the above process, a Kubernetes cluster is created with three masters and four nodes.

The kubeconfig will be generated in the .kube folder. The cluster can then be accessed via this kubeconfig.
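A sketch of wiring up kubectl access from the Bastion, assuming the usual Kubespray layout where the admin kubeconfig lands on the first master at /etc/kubernetes/admin.conf (the master IP is a placeholder from this page):

```shell
# Copy the admin kubeconfig from the first master to the Bastion and point
# kubectl at it (the master IP below is a placeholder)
mkdir -p "$HOME/.kube"
# scp root@10.67.53.158:/etc/kubernetes/admin.conf "$HOME/.kube/config"
export KUBECONFIG="$HOME/.kube/config"
echo "KUBECONFIG=$KUBECONFIG"
# kubectl get nodes   # should list every master and worker
```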

HA-Proxy

  • Install the haproxy package on the machine that is allocated for the proxy

sudo apt-get install haproxy -y

  • Whitelist IPs in the config as per your requirements.

sudo vim /etc/haproxy/haproxy.cfg
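After editing, it is worth syntax-checking the config before reloading, so a typo never takes the load balancer down. A small sketch, assuming haproxy is installed and the default config path is used:

```shell
# Syntax-check the edited config; reload (commented) applies it without
# dropping existing connections
cfg=/etc/haproxy/haproxy.cfg
if command -v haproxy >/dev/null 2>&1 && [ -f "$cfg" ]; then
  haproxy -c -f "$cfg" && echo "config OK"
  # sudo systemctl reload haproxy
else
  echo "haproxy not installed here; skipping check of $cfg"
fi
```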

Volumes

iSCSI volumes will be provided by the SDC team as per the requisition; these can be used for StatefulSets.
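Attaching such a volume can be sketched with iscsiadm; the portal address is the one used in the discovery command at the end of this page, and the login step is a hypothetical follow-up (both require root, so they are commented out):

```shell
# Discover, then log in to, the SDC-provided iSCSI target
portal="10.67.49.8:3260"
echo "discovering targets at $portal"
# sudo iscsiadm -m discovery -t sendtargets -p "$portal"
# sudo iscsiadm -m node --login
```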

CI/CD Build Job Pipeline Setup

Refer to the doc here: https://github.com/egovernments/CIOps

Prerequisites:
  • Golang (version 1.13.X)

  • SOPS

  • GitHub user

  • Docker Hub account

  • Install kubectl on your local machine to interact with the Kubernetes cluster.

  • Install Helm to help package the services along with the configurations, environment, secrets, etc into Kubernetes manifests.

  • iSCSI volumes for persistent volumes (quantity: 2)

    • kaniko-cache-claim: 10 GB

    • Jenkins home: 100 GB

  • Bastion machine with:

    1. Ansible

    2. git

    3. Python
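A quick audit of the tool prerequisites above can be run on any machine before starting the pipeline setup (a minimal sketch; it only checks the PATH, not versions):

```shell
# Print the names of any prerequisite tools that still need to be installed
checked=0
missing=""
for tool in git go sops kubectl helm ansible python3; do
  checked=$((checked + 1))
  command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
done
echo "checked $checked tools; missing:${missing:- none}"
```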

    # Disable swap on every node
    sudo swapoff -a
    sudo sed -i '/ swap /d' /etc/fstab

    # Copy the Bastion's public key to each node
    ssh-copy-id root@<node-ip-address>

    # Clone Kubespray and install its dependencies
    git clone https://github.com/kubernetes-incubator/kubespray.git
    cd kubespray
    sudo pip install -r requirements.txt

    # Create the inventory from the sample and generate hosts.yaml
    cp -rfp inventory/sample inventory/mycluster
    declare -a IPS=(10.67.53.158 10.67.53.159)
    CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}
    all:
      hosts:
        node1:
          ansible_host: 10.67.53.158
          ip: 10.67.53.158
          access_ip: 10.67.53.158
        node2:
          ansible_host: 10.67.53.159
          ip: 10.67.53.159
          access_ip: 10.67.53.159  
      children:
        kube-master:
          hosts:
            node1:
        kube-node:
          hosts:
            node1:
            node2:
        etcd:
          hosts:
            node1:
        k8s-cluster:
          children:
            kube-master:
            kube-node:
        calico-rr:
          hosts: {}
    
    # Review/adjust cluster variables, e.g. the external load balancer
    vim inventory/mycluster/group_vars/all/all.yml
    ...
    ## External LB example config
    ## apiserver_loadbalancer_domain_name: "elb.some.domain"
    apiserver_loadbalancer_domain_name: "10.211.55.101"
    loadbalancer_apiserver:
      address: 10.211.55.101
      port: 443

    # Deploy the cluster with the playbook
    ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=ubuntu cluster.yml
    global
    	log /dev/log	local0
    	log /dev/log	local1 notice
    	chroot /var/lib/haproxy
    	stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
    	stats timeout 30s
    	user haproxy
    	group haproxy
    	daemon
    
    	# Default SSL material locations
           # ca-base /etc/ssl/certs
          #	crt-base /etc/ssl/private
    
    	# Default ciphers to use on SSL-enabled listening sockets.
    	# For more information, see ciphers(1SSL). This list is from:
    	#  https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
    	# An alternative list with additional directives can be obtained from
    	#  https://mozilla.github.io/server-side-tls/ssl-config-generator/?server=haproxy
    	#ssl-default-bind-ciphers 
    	#ssl-default-bind-options no-sslv3
    
    defaults
    	log	global
    	mode	http
    	option	httplog
    	option	dontlognull
    	timeout connect 5000
    	timeout check   5000
    	timeout client  30000
    	timeout server  60000
    
    frontend http-in
        bind *:80
        mode tcp
        default_backend http-servers
        http-request redirect scheme https unless { ssl_fc }
    
    frontend https-in
        bind *:443
        mode tcp
        default_backend https-servers
    
    frontend kube-in
        bind *:8383
        mode tcp
        timeout client 3h
        #Jenkins_CD eGov_ACT eGov_Spectra 
        #acl network_allowed src 35.244.58.192 106.51.69.20 180.151.198.122 103.122.14.159 35.154.77.83 35.154.203.141 125.16.100.118 10.67.53.252  52.71.194.45 132.154.83.214 27.6.189.204 10.67.53.120
        #tcp-request connection reject if !network_allowed
        default_backend kube-servers
    
    frontend kube2-in
        bind *:6363
        mode tcp
        timeout client 3h
        #Jenkins_CD eGov_ACT eGov_Spectra
        acl network_allowed src 35.244.58.192 106.51.69.20 180.151.198.122 103.122.14.159 35.154.77.83 35.154.203.141 125.16.100.118 10.67.53.252  52.71.194.45 132.154.83.214 27.6.189.204 10.67.53.120
        tcp-request connection reject if !network_allowed
        default_backend kube-servers
    
    
    backend http-servers
            mode tcp
            balance roundrobin
            server srv4 10.67.53.159:32080 send-proxy
    
    backend https-servers
            mode tcp
            balance roundrobin
            server srv4 10.67.53.159:32443 send-proxy
    
    backend kube-servers
            mode tcp     
            option log-health-checks
            timeout server 3h
            server master1 10.67.53.158:6443 check check-ssl verify none inter 10000
            balance roundrobin  
    # Discover the iSCSI targets provided by the SDC team
    sudo iscsiadm -m discovery -t sendtargets -p 10.67.49.8:3260