CI/CD Setup

Prerequisites

    1. GitHub Organization account
    2. Fork the below repos to your GitHub Organization account:
       1. https://github.com/egovernments/DIGIT-DevOps
       2. https://github.com/egovernments/CIOps
    3. AWS KMS
    4. Go lang (version 1.13.X)
    5. SOPS
    6. AWS account with admin access to provision the EKS service. You can always subscribe to a free AWS account to learn the basics and try things out, but there is a limit to what is offered for free; for this demo you need a commercial subscription to the EKS service. If you want to try it out for just a day or two, it might cost you about Rs 500 - 1000. (Note: post the demo, for internal folks, eGov will provide 2-3 hrs of time-bound access to eGov's AWS account, based on the request and the available number of slots per day.)
    7. Install kubectl on your local machine; it helps you interact with the Kubernetes cluster.
    8. Install Helm; it helps you package the services along with the configurations, envs, secrets, etc. into Kubernetes manifests.
    9. Install terraform (version 0.14.10) for the Infra-as-code (IaC) to provision cloud resources as code with the desired resource graph; it also helps to destroy the cluster in one go.
    10. Install AWS CLI on your local machine so that you can use AWS CLI commands to provision and manage the cloud resources on your account.
    11. Install AWS IAM Authenticator; it helps you authenticate your connection from your local machine so that you can deploy DIGIT services.
    12. Use the AWS IAM User credentials provided for Terraform (Infra-as-code) to connect with your AWS account and provision the cloud resources.
        1. You'll get a Secret Access Key and an Access Key ID. Save them safely.
        2. Open the terminal and run the following command, assuming you have already installed the AWS CLI and have the credentials saved. (Provide the credentials; you can leave the region and output format blank.)

```
aws configure --profile cicd-infra-account

AWS Access Key ID []:<Your access key>
AWS Secret Access Key []:<Your secret key>
Default region name []: ap-south-1
Default output format []: text
```

        3. The above creates the following file on your machine at ~/.aws/credentials:

```
[cicd-infra-account]
aws_access_key_id=***********
aws_secret_access_key=****************************
```
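Once configured, a quick way to confirm the profile was written is a small check like the one below. `has_profile` and the `CRED_FILE` override are hypothetical helpers for illustration only, assuming the AWS CLI's default `~/.aws/credentials` path:

```shell
#!/bin/sh
# Sketch: verify that `aws configure` wrote the named profile.
# CRED_FILE defaults to the standard AWS credentials path; it can be
# overridden to point the check at another file.
CRED_FILE="${CRED_FILE:-$HOME/.aws/credentials}"

has_profile() {
  # the profile appears as a "[name]" section header in the INI-style file
  [ -f "$CRED_FILE" ] && grep -q "^\[$1\]" "$CRED_FILE"
}

if has_profile cicd-infra-account; then
  echo "profile found"
else
  echo "profile missing"
fi
```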

CI/CD Cluster Setup

Terraform helps you build a graph of all your resources, and parallelizes the creation and modification of any non-dependent resources. Because of this, Terraform builds infrastructure as efficiently as possible, and operators get insight into dependencies in their infrastructure.
Before we provision the cloud resources, we need to understand and be sure about what resources need to be provisioned by terraform to deploy CI/CD.
The following is the resource graph that we are going to provision using terraform in a standard way so that every time and for every env, it'll have the same infra.
    EKS Control Plane (Kubernetes Master)
    Work node group (VMs with the estimated number of vCPUs, Memory)
    EBS Volumes (Persistent Volumes)
    VPCs (Private network)
    Users to access, deploy and read-only

Understand the Resource Graph in Terraform script:

    Ideally, one would write the terraform script from scratch using this doc.
    Here, we have already written the terraform script that provisions production-grade DIGIT infra; it can be customized with the specified configuration.
    Let's clone the DIGIT-DevOps GitHub repo, where the terraform script to provision the EKS cluster is available; below is the structure of the files.
```
git clone --branch release https://github.com/egovernments/DIGIT-DevOps.git
cd DIGIT-DevOps/infra-as-code/terraform


└── modules
    ├── kubernetes
    │   └── aws
    │       ├── eks-cluster
    │       │   ├── main.tf
    │       │   ├── outputs.tf
    │       │   └── variables.tf
    │       ├── network
    │       │   ├── main.tf
    │       │   ├── outputs.tf
    │       │   └── variables.tf
    │       └── workers
    │           ├── main.tf
    │           ├── outputs.tf
    │           └── variables.tf
    └── storage
        └── aws
            ├── main.tf
            ├── outputs.tf
            └── variables.tf
```
Here you will find a main.tf under each of the modules, containing the provisioning definitions for resources like the EKS cluster, storage, etc. All these are modularized and behave as per the customized options provided.
Example:
    VPC Resources:
      VPC
      Subnets
      Internet Gateway
      Route Table
    EKS Cluster Resources:
      IAM Role to allow EKS service to manage other AWS services
      EC2 Security Group to allow networking traffic with EKS cluster
      EKS Cluster
    EKS Worker Nodes Resources:
      IAM role allowing Kubernetes actions to access other AWS services
      EC2 Security Group to allow networking traffic
      Data source to fetch latest EKS worker AMI
      AutoScaling Launch Configuration to configure worker instances
      AutoScaling Group to launch worker instances
    Storage Module
      Configuration in this directory creates an EBS volume and attaches it.
    The following main.tf creates an S3 bucket to store the state of every terraform execution, to keep track of it:

```
DIGIT-DevOps/infra-as-code/terraform/egov-cicd/remote-state
```

    [main.tf](https://github.com/egovernments/DIGIT-DevOps/blob/release/infra-as-code/terraform/egov-cicd/remote-state/main.tf)
```
provider "aws" {
  region = "ap-south-1"
}

# This is a bucket name that you can name as you wish
resource "aws_s3_bucket" "terraform_state" {
  bucket = "try-cicd-workshop-yourname"

  versioning {
    enabled = true
  }

  lifecycle {
    prevent_destroy = true
  }
}

# This is a table name that you can name as you wish
resource "aws_dynamodb_table" "terraform_state_lock" {
  name           = "try-cicd-workshop-yourname"
  read_capacity  = 1
  write_capacity = 1
  hash_key       = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}
```
    1. The following main.tf contains the detailed resource definitions that need to be provisioned; please have a look at it.
       Dir: DIGIT-DevOps/infra-as-code/terraform/egov-cicd
    2. main.tf
```
terraform {
  backend "s3" {
    bucket = "try-cicd-workshop-yourname"
    key    = "terraform"
    region = "ap-south-1"
  }
}

module "network" {
  source             = "../modules/kubernetes/aws/network"
  vpc_cidr_block     = "${var.vpc_cidr_block}"
  cluster_name       = "${var.cluster_name}"
  availability_zones = "${var.network_availability_zones}"
}

module "iam_user_deployer" {
  source = "terraform-aws-modules/iam/aws//modules/iam-user"

  name                          = "${var.cluster_name}-kube-deployer"
  force_destroy                 = true
  create_iam_user_login_profile = false
  create_iam_access_key         = true

  # User "egovterraform" has uploaded his public key here - https://keybase.io/egovterraform/pgp_keys.asc
  pgp_key = "${var.iam_keybase_user}"
}

module "iam_user_admin" {
  source = "terraform-aws-modules/iam/aws//modules/iam-user"

  name                          = "${var.cluster_name}-kube-admin"
  force_destroy                 = true
  create_iam_user_login_profile = false
  create_iam_access_key         = true

  # User "egovterraform" has uploaded his public key here - https://keybase.io/egovterraform/pgp_keys.asc
  pgp_key = "${var.iam_keybase_user}"
}

module "iam_user_user" {
  source = "terraform-aws-modules/iam/aws//modules/iam-user"

  name                          = "${var.cluster_name}-kube-user"
  force_destroy                 = true
  create_iam_user_login_profile = false
  create_iam_access_key         = true

  # User "test" has uploaded his public key here - https://keybase.io/test/pgp_keys.asc
  pgp_key = "${var.iam_keybase_user}"
}

data "aws_eks_cluster" "cluster" {
  name = "${module.eks.cluster_id}"
}

data "aws_eks_cluster_auth" "cluster" {
  name = "${module.eks.cluster_id}"
}

provider "kubernetes" {
  host                   = "${data.aws_eks_cluster.cluster.endpoint}"
  cluster_ca_certificate = "${base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)}"
  token                  = "${data.aws_eks_cluster_auth.cluster.token}"
  load_config_file       = false
  version                = "~> 1.11"
}

module "eks" {
  source          = "terraform-aws-modules/eks/aws"
  cluster_name    = "${var.cluster_name}"
  cluster_version = "${var.kubernetes_version}"
  subnets         = "${concat(module.network.private_subnets, module.network.public_subnets)}"

  tags = "${
    map(
      "kubernetes.io/cluster/${var.cluster_name}", "owned",
      "KubernetesCluster", "${var.cluster_name}"
    )
  }"

  vpc_id = "${module.network.vpc_id}"

  worker_groups_launch_template = [
    {
      name                     = "spot"
      subnets                  = "${slice(module.network.private_subnets, 0, length(var.availability_zones))}"
      override_instance_types  = "${var.override_instance_types}"
      asg_max_size             = "${var.number_of_worker_nodes}"
      asg_desired_capacity     = "${var.number_of_worker_nodes}"
      kubelet_extra_args       = "--node-labels=node.kubernetes.io/lifecycle=spot"
      spot_allocation_strategy = "lowest-price"
      spot_max_price           = "${var.spot_max_price}"
      spot_instance_pools      = 1
      cpu_credits              = "standard"
    },
  ]

  map_users = [
    {
      userarn  = "${module.iam_user_deployer.this_iam_user_arn}"
      username = "${module.iam_user_deployer.this_iam_user_name}"
      groups   = ["system:masters"]
    },
    {
      userarn  = "${module.iam_user_admin.this_iam_user_arn}"
      username = "${module.iam_user_admin.this_iam_user_name}"
      groups   = ["global-readonly", "digit-user"]
    },
    {
      userarn  = "${module.iam_user_user.this_iam_user_arn}"
      username = "${module.iam_user_user.this_iam_user_name}"
      groups   = ["global-readonly"]
    },
  ]
}

module "jenkins" {
  source             = "../modules/storage/aws"
  storage_count      = 1
  environment        = "${var.cluster_name}"
  disk_prefix        = "jenkins-home"
  availability_zones = "${var.availability_zones}"
  storage_sku        = "gp2"
  disk_size_gb       = "20"
}
```
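Note that the `backend "s3"` bucket at the top of this main.tf must be the same bucket that remote-state/main.tf creates, or `terraform init` cannot find its state. A quick consistency check, run from the egov-cicd directory (the `bucket_of` helper is a hypothetical sketch, not part of the repo):

```shell
#!/bin/sh
# Sketch: confirm the bucket created by remote-state/main.tf matches the
# bucket referenced by the backend "s3" block in main.tf.
bucket_of() {
  # pull the first `bucket = "..."` assignment out of a .tf file
  grep -o 'bucket *= *"[^"]*"' "$1" 2>/dev/null | head -n 1 | cut -d '"' -f 2
}

state_bucket=$(bucket_of remote-state/main.tf)
backend_bucket=$(bucket_of main.tf)

if [ -n "$state_bucket" ] && [ "$state_bucket" = "$backend_bucket" ]; then
  echo "buckets match: $state_bucket"
else
  echo "bucket mismatch or files not found" >&2
fi
```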

Custom variables/configurations:

You can define your configurations in variables.tf and provide the env-specific cloud requirements there, so that the same terraform template can be customized per environment.
```
├── egov-cicd
│   ├── main.tf
│   ├── outputs.tf
│   ├── providers.tf
│   ├── remote-state
│   │   └── main.tf
│   └── variables.tf
```
Following are the values that you need to provide in the file below; any left blank will be prompted for during execution.
```
#
# Variables Configuration
#

variable "cluster_name" {
  default = "<Desired Cluster name>" #eg: my-digit-cicd
}

variable "vpc_cidr_block" {
  default = "192.168.0.0/16"
}

variable "network_availability_zones" {
  default = ["ap-south-1b", "ap-south-1a"]
}

variable "availability_zones" {
  default = ["ap-south-1b"]
}

variable "kubernetes_version" {
  default = "1.18"
}

variable "instance_type" {
  default = "t3a.xlarge"
}

variable "override_instance_types" {
  default = ["t3.xlarge", "r5ad.xlarge", "r5a.xlarge", "t3a.xlarge"]
}

variable "number_of_worker_nodes" {
  default = "1"
}

variable "spot_max_price" {
  default = "0.0538"
}

variable "ssh_key_name" {
  default = "egov-cicd"
}

variable "iam_keybase_user" {
  default = "keybase:egovterraform"
}
```
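Since variables.tf only holds defaults, the same values can also be supplied at plan/apply time without editing the file, via `-var` flags or a tfvars file. The fragment below is a hypothetical example (the filename my-env.tfvars and the values are assumptions, not part of the repo):

```hcl
# my-env.tfvars (hypothetical): overrides the defaults in variables.tf
cluster_name           = "my-digit-cicd"
number_of_worker_nodes = "2"
spot_max_price         = "0.06"
```

Pass it with `terraform plan -var-file=my-env.tfvars` (and the same flag on `terraform apply`).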

Important: Create your own keybase key before you run the terraform

    Use this URL https://keybase.io/ to create your own PGP key. This creates both a public and a private key on your machine. Upload the public key to the keybase account that you have just created, give it a name, and make sure you reference it in your terraform. This allows all the sensitive information to be encrypted.
      For example, in eGov's case the keybase user "egovterraform" was created, and its public key was uploaded here - https://keybase.io/egovterraform/pgp_keys.asc
      You can use this portal to decrypt your secret key. To decrypt a PGP message, upload the PGP message, the PGP private key, and the passphrase.

Run terraform

Now that we know what the terraform script does, the resource graph it provisions, and what custom values should be given for your env, let's run the terraform scripts to provision the infra required to deploy DIGIT on AWS.
    1. First cd into the following directories and run the following commands one by one, watching the output closely.
```
cd DIGIT-DevOps/infra-as-code/terraform/egov-cicd/remote-state
terraform init
terraform plan
terraform apply


cd DIGIT-DevOps/infra-as-code/terraform/egov-cicd
terraform init
terraform plan
terraform apply
```
Upon successful execution, the following resources get created, which can be verified by the command "terraform output":
    s3 bucket: to store the terraform state.
    Network: VPC, security groups.
    IAM users auth: admin, deployer, and read-only users created using keybase (see the keybase section above for how the PGP key is created and uploaded, and how to decrypt the secrets).
    EKS cluster: with master(s) & worker node(s).
    Storage(s): for es-master, es-data-v1, es-master-infra, es-data-infra-v1, zookeeper, kafka, kafka-infra.
    Use this link to get the kubeconfig from EKS, so that you can connect to the cluster from your local machine and deploy DIGIT services to it.
```
aws sts get-caller-identity

# Run the below command and give the respective region-code and the cluster name
aws eks --region <region-code> update-kubeconfig --name <cluster_name>
```
    Finally, verify that you are able to connect to the cluster by running the following commands:
```
kubectl config use-context <your cluster name>

kubectl get nodes

NAME                                          STATUS   AGE   VERSION              OS-Image
ip-192-168-xx-1.ap-south-1.compute.internal   Ready    45d   v1.18.10-eks-bac369  Amazon Linux 2
```
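With number_of_worker_nodes set to "1" above, you'd expect exactly one node in the Ready state. A small sketch to count them (the `count_ready` helper is hypothetical, for illustration only):

```shell
#!/bin/sh
# Sketch: count how many nodes report STATUS "Ready" in `kubectl get nodes`.
count_ready() {
  # skip the header row, keep rows whose STATUS column reads "Ready"
  awk 'NR > 1 && $2 == "Ready"' | wc -l
}

# only call kubectl if it is installed
if command -v kubectl >/dev/null 2>&1; then
  kubectl get nodes | count_ready
fi
```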
Voila! All set, and now you can go deploy Jenkins...

2. Jenkins Deployment

Post infra setup (Kubernetes cluster), we start with deploying Jenkins and the kaniko-cache-warmer.

Prerequisites:

Prepare a <ci.yaml> master config file and a <ci-secrets.yaml>; you can name these files as you wish. They will hold the following configurations.
```
cd DIGIT-DevOps/deploy-as-code
```
```
kubectl config use-context <your cluster name>
go run main.go deploy -c -e ci 'jenkins,kaniko-cache-warmer,nginx-ingress'
```
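Since the deployer targets whatever cluster kubectl currently points at, a defensive wrapper like the sketch below can guard against deploying Jenkins to the wrong cluster. The `deploy_to` function is hypothetical, not part of the repo:

```shell
#!/bin/sh
# Sketch (hypothetical guard): only run the deployer when kubectl is pointed
# at the expected context.
deploy_to() {
  expected="$1"
  if [ "$(kubectl config current-context 2>/dev/null)" = "$expected" ]; then
    go run main.go deploy -c -e ci 'jenkins,kaniko-cache-warmer,nginx-ingress'
  else
    echo "wrong kubectl context, expected: $expected" >&2
    return 1
  fi
}

deploy_to "<your cluster name>" || true  # refuses unless the context matches
```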
You have now launched Jenkins. You can access it through the sub-domain that you configured in ci.yaml.