This tutorial walks you through setting up CI/CD.
Terraform helps you build a graph of all your resources and parallelizes the creation or modification of any non-dependent resources. This lets Terraform build infrastructure as efficiently as possible while giving operators clear insight into the dependencies within the infrastructure.
Fork the repos below to your GitHub Organization account
Golang (version 1.13.x)
AWS account with admin access to provision the EKS service. Try subscribing to a free AWS account to learn the basics; there is a limit on what is offered for free. This demo requires a commercial subscription to the EKS service. The cost for a one- or two-day trial might range between Rs 500-1000. (Note: after the demo, eGov will provide internal folks with 2-3 hours of time-bound access to eGov's AWS account, based on the request and the available number of slots per day.)
Install kubectl on your local machine to interact with the Kubernetes cluster.
Install Helm to help package the services along with the configurations, environment, secrets, etc. into Kubernetes manifests.
Install Terraform (version 0.14.10) for the Infra-as-code (IaC) to provision cloud resources as code with the desired resource graph. It also helps destroy the cluster in one go.
Install AWS CLI on your local machine so that you can use AWS CLI commands to provision and manage the cloud resources on your account.
Install AWS IAM Authenticator to help authenticate your connection from your local machine so that you can deploy DIGIT services. (A quick check of these installations is sketched below.)
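A quick sanity check that the local toolchain is in place might look like the sketch below (these are the standard version commands for each tool; the versions shown in comments are simply the ones this guide assumes):

```sh
# Confirm each prerequisite tool is installed and on a suitable version
go version                      # expected: go1.13.x
kubectl version --client        # kubectl client
helm version                    # Helm 3.x
terraform version               # expected: Terraform v0.14.10
aws --version                   # AWS CLI
aws-iam-authenticator version   # AWS IAM Authenticator
```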
Use the AWS IAM User credentials provided for the Terraform (Infra-as-code) to connect to the AWS account and provision the cloud resources.
You will receive a Secret Access Key and Access Key ID. Save the keys.
Open the terminal and run the command given below once the AWS CLI is installed. Provide the credentials when prompted, leave the region and output format blank, and the credentials get saved locally.
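The command in question is the standard AWS CLI configure command; when prompted, paste the keys you saved earlier and leave the last two prompts blank:

```sh
aws configure
# AWS Access Key ID [None]:     <your Access Key ID>
# AWS Secret Access Key [None]: <your Secret Access Key>
# Default region name [None]:   (leave blank)
# Default output format [None]: (leave blank)
```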
The above creates the credentials file on your machine at ~/.aws/credentials.
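Its contents look roughly like this (the values are placeholders for your own keys):

```
[default]
aws_access_key_id = <your Access Key ID>
aws_secret_access_key = <your Secret Access Key>
```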
Before we provision the cloud resources, we need to understand and be sure about which resources Terraform has to provision to deploy CI/CD.
The following is the resource graph that we are going to provision using Terraform in a standard way, so that the infra is the same every time and for every environment.
EKS Control Plane (Kubernetes master)
Worker node group (VMs with the estimated number of vCPUs and memory)
EBS Volumes (Persistent volumes)
VPCs (Private networks)
Users for access: admin, deployer and read-only
Ideally, one would write the terraform script from scratch using this doc.
Here, we have already written the Terraform script that provisions the production-grade DIGIT infra; it can be customized with the specified configuration.
Clone the DIGIT-DevOps GitHub repo. The terraform script to provision the EKS cluster is available in this repo. The structure of the files is given below.
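Cloning and moving into the Terraform directory can be done roughly as follows (the URL below is the public egovernments repo; use your fork's URL if you forked it earlier):

```sh
git clone https://github.com/egovernments/DIGIT-DevOps.git
cd DIGIT-DevOps/Infra-as-code/terraform
```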
Here, you will find the main.tf under each of the modules that has the provisioning definition for resources like the EKS cluster, storage, etc. All of these are modularized and behave according to the customized options provided.
Example:
VPC Resources -
VPC
Subnets
Internet Gateway
Route Table
EKS Cluster Resources -
IAM Role to allow EKS service to manage other AWS services
EC2 Security Group to allow networking traffic with the EKS cluster
EKS Cluster
EKS Worker Nodes Resources -
IAM role allowing Kubernetes actions to access other AWS services
EC2 Security Group to allow networking traffic
Data source to fetch the latest EKS worker AMI
AutoScaling launch configuration to configure worker instances
AutoScaling group to launch worker instances
Storage Module -
The configuration in this directory creates EBS volumes and attaches them to the worker nodes.
The following main.tf will create an S3 bucket to store all the states of the execution, to keep track of them.
The following main.tf contains the detailed resource definitions that need to be provisioned.
Dir: DIGIT-DevOps/Infra-as-code/terraform/egov-cicd
Define your configurations in variables.tf and provide the environment-specific cloud requirements. The same terraform template can be used to customize the configurations.
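Purely for illustration, a variable block in variables.tf looks like the sketch below; the variable names and defaults here are placeholders, so edit the variables actually defined in the egov-cicd module rather than adding new ones:

```hcl
# Placeholder names - customize the variables that already exist in variables.tf
variable "cluster_name" {
  description = "Name of the EKS cluster to be created"
  default     = "my-egov-cicd"
}

variable "region" {
  description = "AWS region in which the cloud resources are provisioned"
  default     = "ap-south-1"
}
```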
The following are the values that you need to provide in these files. The blank ones will prompt for input during execution.
We have covered what the Terraform script does, the resource graph it provisions and what custom values should be given for the selected environment.
Now, run the Terraform scripts to provision the infra required to deploy DIGIT on AWS.
Use the 'cd' command to change to the following directory and run the following commands. Check the output.
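The standard Terraform workflow for that directory looks like this (review the plan output before applying):

```sh
cd DIGIT-DevOps/Infra-as-code/terraform/egov-cicd

terraform init     # initialise the backend and download providers
terraform plan     # preview the resources to be created
terraform apply    # provision the resources; type 'yes' when prompted
```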
After successful execution, the following resources get created and can be verified by the command "terraform output".
s3 bucket: to store terraform state
Network: VPC, security groups
IAM user auth: using keybase to create the admin, deployer and read-only users
Use https://keybase.io/ to create your own PGP key. This creates both a public and a private key on your machine. Upload the public key into the keybase account that you have just created, give it a name and make sure you reference that name in your Terraform configuration. This allows you to encrypt sensitive information.
Example: create a keybase user (this is "egovterraform" in the case of eGov) and upload the public key here: https://keybase.io/egovterraform/pgp_keys.asc
Use this portal to decrypt the secret key: to decrypt the PGP message, upload the PGP message, the PGP private key and the passphrase. (A CLI alternative is sketched after this list.)
EKS cluster: with master(s) & worker node(s).
Storage(s): for es-master, es-data-v1, es-master-infra, es-data-infra-v1, zookeeper, kafka, kafka-infra.
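If you prefer the CLI to the decryption portal mentioned above, the IAM secret produced by Terraform's keybase integration can usually be decoded as sketched below (the output name is an assumption; run `terraform output` to see the real one):

```sh
# "iam_deployer_secret" is a hypothetical output name - check `terraform output`
terraform output -raw iam_deployer_secret | base64 --decode | keybase pgp decrypt
```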
Use this link to fetch the kubeconfig file from EKS. This enables you to connect to the cluster from your local machine and deploy DIGIT services to it.
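One common way to fetch it is via the AWS CLI (the cluster name and region are placeholders for the ones you configured):

```sh
aws eks update-kubeconfig --name <cluster_name> --region <region>
```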
Finally, verify that you are able to connect to the cluster by running the command below:
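For example (standard kubectl commands; your context and node names will differ):

```sh
kubectl config current-context   # should point to the new EKS cluster
kubectl get nodes                # the worker nodes should show up as Ready
```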
Voila! All set, and now you can Deploy Jenkins.
After the infra setup (Kubernetes cluster), we start with deploying Jenkins and the kaniko-cache-warmer.
Sub domain to expose CI/CD URL
GitHub Oauth App (this provides you with the clientId, clientSecret)
Under "Authorization callback URL", enter the URL below (replace <domain_name> with your domain): https://<domain_name>/securityRealm/finishLogin
Generate a new ssh key for the above user; this provides the ssh public and private keys (a sample command is sketched after this list)
Add the earlier created ssh public key to GitHub user account
Add ssh private key to the gitReadSshPrivateKey
With the previously created GitHub user, generate a personal read-only access token
Docker hub account details (username and password)
SSL certificate for the sub-domain
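A sample key generation for the GitHub user mentioned in the list above (the file name and comment are placeholders):

```sh
# Produces cicd_github_key (private) and cicd_github_key.pub (public) in the current directory
ssh-keygen -t rsa -b 4096 -C "ci-user@<your_org>" -f cicd_github_key
```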
Prepare a <ci.yaml> master config file and a <ci-secrets.yaml>. Name these files as desired. They hold the following configurations:
credentials, secrets (you need to encrypt them using sops and create a ci-secrets.yaml separately)
Add subdomain name in ci.yaml
Check and add your project-specific ci-secrets.yaml details (like the GitHub OAuth app clientId, clientSecret, gitReadSshPrivateKey, gitReadAccessToken, dockerConfigJson, dockerUsername and dockerPassword)
To create a Jenkins namespace mark this flag true
Add your environment-specific kubeconfigs under kubConfigs like https://github.com/egovernments/DIGIT-DevOps/blob/release/config-as-code/environments/ci-demo-secrets.yaml#L50
The kubeConfigs environment name and the deploymentJobs name in ci.yaml should be the same
Update the CIOps and DIGIT-DevOps repo ssh url with the forked repo's ssh url.
Make sure the earlier created GitHub user has read-only access to the forked DIGIT-DevOps and CIOps repos.
SSL certificate for the sub-domain.
Update the DOCKER_NAMESPACE with your docker hub organization name.
Update the repo name "egovio" with your docker hub organization name in buildPipeline.groovy
Remove the below env:
Jenkins is launched. You can access it through the sub-domain configured in ci.yaml.
Since there are many DIGIT services and the development code is part of various git repos, one needs to understand the concept of cicd-as-service which is open-sourced. This page guides you through the process of creating a CI/CD pipeline.
The initial steps for integrating any new service/app to the CI/CD are discussed below.
Once the desired service is ready for integration: decide the service name, the type of service, and whether DB migration is required or not. When you commit the source code of the service to the git repository, the following file should be added with the relevant details mentioned below:
build-config.yml – it is present under the build directory in the repository
This file contains the below details used for creating the automated Jenkins pipeline job for the newly created service.
While integrating a new service/app, the above content needs to be added to the build-config.yml file of that app repository. For example: to onboard a new service called egov-test, the build-config.yml should be added as mentioned below.
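As an illustrative sketch only (cross-check the field names and folder path against the entries already present in the build-config.yml of your fork):

```yaml
# Hypothetical entry for a new service called egov-test
config:
  - name: builds/DIGIT-OSS/core-services/egov-test
    build:
      - work-dir: core-services/egov-test
        image-name: egov-test
```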
If a job requires multiple images to be created (e.g. for DB migration), then it should be added as below:
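Again as a sketch, assuming the DB migration Dockerfile lives in a db sub-directory of the service (adjust the work-dir to wherever the migration scripts actually reside):

```yaml
# Hypothetical entry producing two images: the service and its DB migration
config:
  - name: builds/DIGIT-OSS/core-services/egov-test
    build:
      - work-dir: core-services/egov-test
        image-name: egov-test
      - work-dir: core-services/egov-test/src/main/resources/db
        image-name: egov-test-db
```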
Note - If a new repository is created then the build-config.yml is created under the build folder and the config values are added to it.
The git repository URL is then added to the Job Builder parameters
When the Jenkins job => job builder is executed, the CI pipeline gets created automatically based on the above details in build-config.yml. For example, the egov-test job is created in the builds/DIGIT-OSS/core-services folder in Jenkins, since the build-config is edited under core-services, and it should be on the "master" branch. Once the pipeline job is created, it can be executed for any feature branch with build parameters, specifying the branch to be built (master or feature branch).
As a result of the pipeline execution, the respective app/service docker image is built and pushed to the Docker repository.
On the repo, provide read-only access to the GitHub user (created during the CI/CD deployment).
The Jenkins CI pipeline is configured and managed 'as code'.
Job Builder – Job Builder is a generic Jenkins job which creates the Jenkins pipeline automatically; that pipeline is then used to build the application, create its docker image and push the image to the Docker repository. The Job Builder job requires the git repository URL as a parameter. It clones the respective git repository, reads the build/build-config.yml file of that repository and uses it to create the service build job.
Check and add your repo ssh URL in ci.yaml
If the git repository ssh URL is available, build the Job-Builder Job.
If the git repository ssh URL is not available there, check and add it.
The services are deployed and managed on a Kubernetes cluster in cloud platforms like AWS, Azure, GCP, OpenStack, etc. Here, we use helm charts to manage and generate the Kubernetes manifest files and use them for further deployment to the respective Kubernetes cluster. Each service is created as a chart, which has the below-mentioned files.
Note: The steps below are only for the introduction and implementation of new services.
To deploy a new service, you need to create a new helm chart for it (refer to the above example). The chart should be created under the charts/helm directory in the DIGIT-DevOps repository.
If you are going to introduce a new module with the help of multiple services, we suggest you create a new directory with your module name.
Example:
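As a rough orientation, a service chart follows the standard Helm layout sketched below; the module and service names are placeholders, and the actual DIGIT charts may also reuse shared templates, so refer to the existing chart structure linked below:

```
<module-name>/
└── <service-name>/
    ├── Chart.yaml        # chart metadata (name, version, description)
    ├── values.yaml       # image repository/tag, replicas, env overrides
    └── templates/        # Kubernetes manifest templates rendered by helm
        ├── deployment.yaml
        └── service.yaml
```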
You can refer to the existing helm chart structure here
This chart can also be modified further based on user requirements.
The deployment of manifests to the Kubernetes cluster is made very simple and easy. There are Jenkins jobs for each state, and they are environment-specific. We need to provide the image name or the service name to the respective Jenkins deployment job.
The deployment Jenkins job internally performs the following operations:
Reads the image name or the service name given and finds the chart that is specific to it.
Generates the Kubernetes manifests files from the chart using the helm template engine.
Executes the deployment manifests with the specified docker image(s) on the Kubernetes cluster (see the sketch after this list).
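Conceptually, what the deployment job does is equivalent to rendering the chart and applying the result, roughly as in the hedged sketch below (the release name, chart path and values key are placeholders; the real job drives this through its own scripts):

```sh
# Render the chart for one service with the desired image tag and apply the manifests
helm template egov-test ./charts/helm/<module-name>/egov-test \
  --set image.tag=<docker-image-tag> \
  | kubectl apply -f -
```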