Deploy CRS
This page covers the steps to set up the DIGIT Citizen Complaint Resolution System (CRS) in a production environment.
Overview
This section walks you through the end-to-end process of configuring and deploying DIGIT (CRS – Citizen Complaint Resolution System) in a production environment—whether on-premise or on any commercial cloud platform. The installer can be run on a laptop or in a virtual machine, but deploying DIGIT directly on a laptop is not recommended.
DIGIT deployment consists of three key phases:
Infrastructure Provisioning - Creating the Kubernetes cluster and provisioning all required resources.
DIGIT Deployment - Deploying all DIGIT microservices onto the provisioned infrastructure.
Configuration - Applying environment-specific configurations and preparing the services for production use.
While infrastructure provisioning varies across cloud providers and on-prem environments, the DIGIT deployment process remains consistent once the infrastructure is ready.
Before you begin…
Make sure a domain name is ready, and that you have admin access to the FQDN you plan to use for the DIGIT deployment. Example - mydigit.org
Ensure SSL certificates are available for the domain
SMS gateway URL and credentials (if you plan to integrate with an SMS provider)
Logo, banner, header, footer and other assets for the UI (Optional. Can be configured later as well)
Ensure a GitHub account (or other equivalent tool) is set up under an organisation umbrella.
Teams are created (optional)
Admin and other users have the correct level of access to the GitHub repository.
Ensure you have the requisite access, credentials and permissions to deploy.
Steps
Infra Provisioning
For infrastructure provisioning, follow the steps in the document here for the specified cloud provider. The provisioning process generates key artefacts, which are displayed on the console and stored in an outputs.tf file.
Before proceeding to Step 2: DIGIT Deployment, ensure that you have access to the outputs.tf file created during the provisioning step.
Checkpoints:
Verify successful execution: Ensure the console shows no errors and that all provisioning steps have completed successfully. Resolve any issues before moving forward.
Confirm the presence of outputs.tf: Check that the outputs.tf file is available under the <infra-provisioning-output-directory> and that all required variables are populated. This file will be required in Step 2.
Download the Kubernetes configuration file: Make sure you have downloaded the Kubernetes config (kubeconfig) and have access to it for interacting with the cluster.
Modify Configuration for DIGIT CRS Deployment
Before starting Step 2, ensure the following:
Step 1 (Infrastructure Provisioning) has been completed successfully, with no errors in the console.
You have the following files:
outputs.tf (from infra provisioning)
kubeconfig (for cluster access)
Helm installed: Helmfile uses Helm internally, so make sure Helm is installed and configured.
2.1: Fork the DIGIT CRS Repository
Fork the CRS repository into your GitHub organisation and name it appropriately.
2.2: Clone the Fork
Clone your forked repository onto your local machine.
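For example, a minimal sketch assuming the fork is named DIGIT-CRS under your GitHub organisation (adjust the repository name and organisation to match your fork):
$ git clone git@github.com:<your-org>/DIGIT-CRS.git
$ cd DIGIT-CRS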
2.3: Open the DevOps Configuration Folder
This folder contains all Helmfiles, chart values, and environment configurations.
2.4: Configure Environment Values
Locate the env.yaml file and:
Make a copy named production.yaml.
Replace all placeholder values (DB hostname, bucket names, domain, etc.). Many of these values come from the outputs.tf file.
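For example, assuming env.yaml sits alongside env-secrets.yaml under deploy-as-code/charts/environments (adjust the path if your fork differs):
$ cd deploy-as-code/charts/environments
$ cp env.yaml production.yaml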
Configure the items given below:
2.4.1 Configure the domain
This FQDN will be used to access the DIGIT instance once all setup is complete.
Where:
What: Public DNS name pointing to the ingress LoadBalancer.
Example: crs-demo.mydomain.org
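As a minimal sketch of the corresponding entry in production.yaml (the key path shown here is an assumption; follow the placeholder already present in your env.yaml):
global:
  domain: crs-demo.mydomain.org   # public DNS name pointing to the ingress LoadBalancer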
For on-premise setup only (not required for cloud):
Ensure the domain is mapped to a public IP.
Set up managed TLS certificates for secure communication (recommended for production servers).
DNS: Map the CRS domain → LoadBalancer external IP.
Network/Firewall: allow HTTPS to the LoadBalancer IP; allow egress to DB/Kafka/ES/SMTP/SMS/etc.
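Once the DNS record is in place, you can verify the mapping from any machine:
$ nslookup crs-demo.mydomain.org   # should resolve to the LoadBalancer external IP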
2.4.2 Configure database
Locate the database host name from the outputs.tf file. Any database name without special characters or underscores can be entered below, and it will be created as part of the service deployment.
Where:
What: PostgreSQL server and main application database.
Examples:
Host:
egov-demo.postgres.database.azure.com
pgr-demo-db.ap-south-1.rds.amazonaws.com
Database:
egovdemo
pgrdemodb
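A minimal sketch of what this typically looks like in production.yaml (key names are assumptions; follow the placeholders already present in your env.yaml):
global:
  db-host: pgr-demo-db.ap-south-1.rds.amazonaws.com   # from outputs.tf
  db-name: pgrdemodb                                  # created as part of the service deployment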
2.4.3 Configure Tenant
The tenant is the account ID that will host the applications and configurations. For more info on tenancy, please refer to <TBD>. Tenant IDs in DIGIT can be any string. An example would be "et" for Ethiopia or "in" for India.
Where:
Example Tenant IDs: et, in
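As a sketch, assuming the tenant is set through a key in the environment file (the key name below is hypothetical; use the placeholder present in your env.yaml):
global:
  tenant-id: "et"   # hypothetical key name; replace with your own tenant ID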
2.4.4 Configure Filestore / S3
This sets up the S3 bucket, blob storage, or NFS backing the filestore service, which must be configured so that all files are stored securely. The bucket has already been created as part of infra provisioning, so locate the filestore_s3_bucket value from the output console and update it here.
Filestore Bucket - fixed-bucketname: <filestore_s3_bucket>
For on-premise deployment, follow the Minio Setup.
2.4.5 Configure UI Assets Bucket
Create a bucket to store static UI assets such as banners, logos, and configuration JS, and ensure the correct permissions are configured for it.
Download this global configuration and modify the following key values:
Replace <tenant_id> with the tenant ID defined in Step 2.4.3 – Tenant Configuration. This is a critical step, as the tenant ID configured in this file is used to fetch localisation and master data required for rendering the login page correctly.
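For example, assuming the downloaded global configuration file is named globalconfigs.js (a hypothetical name) and the UI assets bucket lives on S3:
$ sed -i 's/<tenant_id>/et/g' globalconfigs.js         # substitute your tenant ID from Step 2.4.3
$ aws s3 cp globalconfigs.js s3://<ui-assets-bucket>/  # use the equivalent upload tool on Azure or MinIO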
2.4.6 Configure Git Indexer & Persister
Provide the link to the forked repository (Step 2.1) and the branch from which configurations should be pulled.
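A sketch of the kind of values involved (the section and key names below are assumptions; locate the indexer and persister sections in your env.yaml and fill in the matching fields):
egov-persister:
  git-sync:
    repo: "git@github.com:<your-org>/DIGIT-CRS.git"   # forked repository from Step 2.1
    branch: "main"                                    # branch holding your configurations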
2.4.7 Authenticate OAuth/GitHub (Optional)
This is required for authorising access to the monitoring tools & dashboards. In the env.yaml file, search for the “oauth2-proxy:” section. Replace the values below:
Examples:
Org: egovernments, my-org
Team: platform-admins, devops
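A sketch of the values to update (the key names below are assumptions based on standard oauth2-proxy options; match them to the keys already present under the oauth2-proxy: section of your env.yaml):
oauth2-proxy:
  github-org: "my-org"      # GitHub organisation whose members may sign in
  github-team: "devops"     # team(s) within the organisation granted access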
2.4.8 Configure Storage Class (On-Premise only)
Examples
gp3
standard
managed-premium
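On an existing cluster, you can list the storage classes available before picking one:
$ kubectl get storageclass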
Configure Secure Secrets Management
Every deployment has database, GitHub, and other credentials that must be configured and managed securely. The steps below describe how to set up secure secrets management.
Create a copy of the env-secrets.yaml file (deploy-as-code/charts/environments/env-secrets.yaml) and name it <environment>-secrets.yaml. Example: production.yaml will have a corresponding production-secrets.yaml.
Encrypt credentials and secrets using SOPS and keep them in the separate <environment>-secrets.yaml file.
SOPS expects an encryption key to encrypt/decrypt the specified plaintext and keep the details secure. The options for generating the encryption key are:
Option 1: Generate PGP keys https://fedingo.com/how-to-generate-pgp-key-in-ubuntu
Option 2: Create AWS KMS keys when you want to use the AWS cloud provider.
Option 3: Create Azure Key Vault when you want to use the MS Azure cloud provider.
Once you generate your encryption key, create a .sops.yaml configuration file under the /helm directory of the cloned repo to define the keys used for specific files. Refer to the SOPS documentation for more details.
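For example, a minimal .sops.yaml using a PGP key (replace the fingerprint with your own; adapt the creation rule if you use AWS KMS or Azure Key Vault instead):
creation_rules:
  - path_regex: .*-secrets\.yaml$
    pgp: "<YOUR-PGP-KEY-FINGERPRINT>"
With this in place, the secrets file can be encrypted in place:
$ sops --encrypt --in-place production-secrets.yaml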
Configure the following secrets in the <environment>-secrets.yaml file:
3.1 Database passwords
3.2 Git Sync - Config Repository Access
Generate SSH key pairs using the method below:
Generate a new SSH key pair for GitHub authentication.
Add the public SSH key to the relevant GitHub user account to grant read access to the configuration repository.
Update the private SSH key in the deployment secrets file: deploy-as-code/charts/environments/<environment>-secrets.yaml
Paste the generated private SSH key under the git-sync section in the file.
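For example (the key file name and comment below are illustrative):
$ ssh-keygen -t ed25519 -C "egov-bot@<your-org>" -f ~/.ssh/digit-crs-git-sync
$ cat ~/.ssh/digit-crs-git-sync.pub   # add this public key to the GitHub account; the matching private key goes into <environment>-secrets.yaml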
Best Practices
Use a dedicated bot user (e.g., egov-bot)
Grant read-only access
Add key under GitHub → SSH Keys
known_hosts should remain unchanged.
3.3 Encryption Secrets - Mandatory Change
The encryption service encrypts/decrypts data using encryption keys, which are configured below. Change these values before the first deployment. For more information, refer to this document.
DIGIT CRS Deployment
Once all configurations are complete, run the command below to install DIGIT CRS. The command must be run from the devops folder.
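As a sketch mirroring the destroy command shown in the Destroy Cluster section (the subcommand may be apply or sync depending on your Helmfile setup):
$ helmfile -f digit-helmfile.yaml -e {env_filename} apply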
{env_filename} - the name of your environment file; if you created a copy of env.yaml (for example, production.yaml), replace the placeholder with that name.
This will deploy DIGIT CRS using Helmfile in a controlled and auditable manner.
Domain name and CNAME mapping
At the end of the infra provisioning step, the Kubernetes configuration file will be available for download. Download the configuration file and ensure it is active. Run the command below to get the LoadBalancer address from the EKS cluster:
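A generic way to find it, assuming the ingress controller is exposed as a LoadBalancer service (the namespace may differ in your setup):
$ kubectl get svc --all-namespaces | grep LoadBalancer   # the EXTERNAL-IP column shows the LoadBalancer address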
Example output: ae210873da6f.ap-south-1.elb.amazonaws.com
Add this as a CNAME record in your domain provider settings. At the end of this step, you should be able to access the DIGIT CRS UI - the citizen and employee portals - by navigating to:
https://<domain-name>/digit-ui/citizen
Troubleshooting Deployment
6.1: Check Pod Status
After deployment, first verify that all pods in the target namespace are running correctly. List all pods in the namespace:
kubectl get pods -n <namespace>
Ensure that the STATUS column shows only:
Running
Completed
There should be no pods in the following states:
CrashLoopBackOff
Init:Error
Init:ConfigMapKeyMissing
Example: kubectl get pods -n egov
If any pod is not running as expected, refer to the FAQ / Troubleshooting section before proceeding further.
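To investigate a failing pod, the standard kubectl commands apply:
$ kubectl describe pod <pod-name> -n <namespace>   # check events such as image pull or config errors
$ kubectl logs <pod-name> -n <namespace>           # check the container logs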
6.2 Check if basic configuration data is loaded
The default-data-handler service is responsible for seeding essential system data during initialisation. This includes:
Core and common MDMS schemas and master data for DIGIT CRS
Default English localisation data
If any dependent services (such as MDMS, Localisation, or Boundary) are not running at the time of initialisation, the default data will not be loaded.
Once the required services are available, the default-data-handler service must be restarted to trigger data seeding again.
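Assuming the deployment is named after the service and runs in the egov namespace:
$ kubectl rollout restart deployment default-data-handler -n egov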
6.3 Restart Gateway Service
In some cases, the gateway service may start before all other services are fully available. When this happens, the gateway may not register newly started services, resulting in 404 errors when accessing their APIs.
To resolve this issue, restart the gateway service:
kubectl rollout restart deployment gateway -n egov
This ensures all downstream services are correctly registered and accessible.
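You can watch the restart complete before retrying the APIs:
$ kubectl rollout status deployment gateway -n egov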
Set up CRS Notebook
Refer to the CRS Data Setup section for detailed steps.
Destroy Cluster
Use the command below to destroy the deployed DIGIT CRS environment (this removes the deployed Helm releases from the cluster).
$ helmfile -f digit-helmfile.yaml -e pgr-sdc-prd destroy
Support & Best Practices
Always version control env.yaml and env-secrets.yaml
Never commit plain-text secrets to public repositories; always encrypt them before pushing to a Git repository.
Test deployments in a non-prod environment first
Maintain separate values files for dev/staging/prod