As you wrap up your work with DIGIT, a smooth and error-free cleanup of the resources is crucial. Monitor the output of the GitHub Actions workflow during the destruction process and watch for error messages or other signs of trouble. A success message in the GitHub Actions window confirms that the job has completed and the infrastructure has been destroyed.
When you are ready to remove DIGIT and clean up the resources it created, run the terraform_infra_destruction job. This job dismantles all the resources that were set up, leaving the environment clean. We hope your experience with DIGIT was positive and that this guide makes the uninstallation process straightforward.
To initiate the destruction of a Terraform-managed infrastructure, follow these steps:
Navigate to Actions.
Click DIGIT-Install workflow.
Select Run workflow.
When prompted, type 'destroy'. This action starts the terraform_infra_destruction job.
You can observe the progress of the destruction job in the actions window.
DIGIT Health Campaign Management Demo
This guide provides step-by-step instructions to clone and run the DIGIT Health Campaign Management (HCM) App locally on your machine. The app is a Flutter application developed for health campaigns.
Before you begin, ensure that you have the following installed on your PC:
Flutter version 3.16.5
Flutter setup for Linux and Android - Flutter documentation
Flutter setup for Windows and Android - Flutter documentation
An Android device
Open a terminal and run the following commands:
Clone the repository and check out the installation-demo-setup branch, as shown below.
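A likely sequence, assuming the standard DIGIT HCM field worker app repository (replace the URL with your own fork if you created one):
# clone the app repository (URL assumed) and switch to the demo branch
git clone https://github.com/egovernments/health-campaign-field-worker-app.git
cd health-campaign-field-worker-app
git checkout installation-demo-setup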
Open the project in your preferred IDE (Android Studio, Visual Studio Code). Make sure that your IDE is configured with the Flutter and Dart plugins.
Create a .env file inside the apps/health_campaign_field_worker_app folder.
Copy the contents below into your newly created .env file.
After setting up the .env file, navigate to the apps/health_campaign_field_worker_app folder in the terminal.
Run the following command to generate the APK:
flutter build apk --release --no-tree-shake-icons
After successfully running the above command, the APK will be generated in the path
apps/health_campaign_field_worker_app/build/app/outputs/flutter-apk/app-release.apk
Install the generated APK on your preferred Android device.
This guide provides step-by-step instructions for installing DIGIT using GitHub Actions in an AWS environment.
GitHub account - sign up
kubectl installed on your system - installation guide
AWS account - sign up
AWS CLI installed locally - installation guide
Postman - installation guide and import data guide
A domain host (such as GoDaddy) to map your server to a domain name
Prepare AWS IAM User
Create an IAM User in your AWS account - official document
Generate ACCESS_KEY and SECRET_KEY for the IAM user - AWS document
Assign administrator access to the IAM user for necessary permissions.
Set up the AWS profile locally by running the following commands:
aws configure --profile {profilename}
Fill in the values when prompted:
AWS_ACCESS_KEY_ID: <GENERATED_ACCESS_KEY>
AWS_SECRET_ACCESS_KEY: <GENERATED_SECRET_KEY>
AWS_DEFAULT_REGION: ap-south-1
export AWS_PROFILE={profilename}
Note: The AWS account should have S3 bucket access for the Filestore service to work.
Fork the following repositories with all the branches into your organisation account on GitHub:
Master data (the master data repository is not needed, since mdms-v2 is used by default with data seeded)
Uncheck the 'Copy the master branch only' box, as shown below:
Go to the forked health-campaign-devops repository:
Navigate to the repository settings.
Go to Secrets and Variables.
Click on the Actions option under Secrets and Variables.
On the new page, choose the 'New repository secret' option under Repository secrets and add the keys mentioned below:
AWS_ACCESS_KEY_ID: <GENERATED_ACCESS_KEY>
AWS_SECRET_ACCESS_KEY: <GENERATED_SECRET_KEY>
AWS_DEFAULT_REGION: ap-south-1
AWS_REGION: ap-south-1
Replace the AWS key and secret with the actual values in the environment secrets file.
Navigate to the release-githubactions branch in the forked DevOps repository.
Enable GitHub Actions.
Click on Actions, then click on "I understand my workflows, go ahead and enable them":
The following steps can be done either directly in the browser or on your local system if you are familiar with Git.
Before following any of the steps, switch to the release-githubactions branch.
Steps to edit the git repository in the browser - Git guide
Steps to edit in the local system if you are familiar with Git basics:
git clone {forked DevOps repo link}
Follow the steps below and make the changes.
Then commit and push to the release-githubactions branch.
NOTE: Complete all changes at once then commit and push the code to remote to trigger the installation.
Note: Make these repository/branch changes before installation. Changing the configuration repository link in the DevOps repository after installation without understanding the impact may break application functionality.
Navigate to egov-demo.yaml (config-as-code/environments/egov-demo.yaml).
Under egov-persister, change the gitSync link from the health-campaign-config repository to your forked config repository and set the branch to DEMO.
Under egov-indexer, change the gitSync link from the health-campaign-config repository to your forked config repository and set the branch to DEMO.
Navigate to infra-as-code/terraform/sample-aws.
Open input.yaml and enter details such as domain_name, cluster_name, bucket_name, and db_name.
Generate SSH key pair.
How to Generate SSH Key Pair - choose one of the following methods to generate an SSH key pair:
Method a: Use an online website. (Note: This is not recommended for production setups, only for demo purposes): https://8gwifi.org/sshfunctions.jsp
Method b: Use OpenSSL commands:
openssl genpkey -algorithm RSA -out private_key.pem
openssl rsa -pubout -in private_key.pem -out public_key.pem
To view the keys, run the following commands or open the files in any text editor:
vi private_key.pem
vi public_key.pem
Once generated, navigate to config-as-code/environments.
Open egov-demo-secrets.yaml
Replace ssh_private_key (note: make sure the private key is indented as given).
Add the public_key to your GitHub account - Git guide
Once all details are entered, push these changes to the remote GitHub repository. Open the Actions tab in your GitHub account to view the workflow. You should see that the workflow has started, and the pipelines are completed successfully.
Connect to the Kubernetes cluster from your local machine by using the following command:
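A sketch of the command, assuming the Terraform scripts provisioned an EKS cluster and the AWS profile set up earlier is being used (substitute your own cluster_name, region, and profile name):
# update the local kubeconfig to point at the new EKS cluster
aws eks update-kubeconfig --name {cluster_name} --region ap-south-1 --profile {profilename}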
Get the CNAME of the nginx-ingress-controller
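One way to fetch it, assuming the ingress controller is exposed as a LoadBalancer service (the namespace and service name may differ in your setup):
# list services across namespaces and look for the ingress controller's external CNAME
kubectl get svc -A | grep -i ingress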
The output of this will be something like this:
ae210873da6ff4c03bde2ad22e18fe04-233d3411.ap-south-1.elb.amazonaws.com
Add the displayed CNAME to your domain provider against your domain name. e.g. GoDaddy domain provider - https://www.godaddy.com/en-in/help/add-a-cname-record-19236
Installation steps for DIGIT HCM
The DIGIT HCM installation comprises five steps to create a new production-ready server that can be scaled on demand. The installation process is currently supported only for AWS. However, support for other cloud platforms such as Azure and GCP will be available in the future.
Step 1: Execute GitHub Action for installation: Execute a GitHub action for the installation process.
Step 2: Execute the System Data Setup: Execute this setup to load system-required data.
Step 3: Execute the Project Data Setup: Execute the minimum setup data required to run a campaign.
Step 4: Generating the APK for the Server: Generate the APK pointing to the server as mentioned above.
Step 5: Kibana Dashboard Setup: To create and configure health campaign dashboards in a different space within Kibana.
After completing the HCM Installation, if you want to uninstall/destroy the server, follow the steps mentioned in Server Cleanup.
This document outlines the steps required to create and configure health campaign dashboards in a different space within Kibana.
Knowledge of creating dashboards in Kibana. - Dashboard and visualizations
Transformer and indexer services are up and running to enrich data for KPI creation and push data to elastic search.
Retrieve the Kibana Credentials Secret:
kubectl get secret elasticsearch-master-credentials -n es-cluster -o yaml
Decode the credentials:
To decode the username and password, follow these steps:
Copy the base64-encoded values for username and password from the command output.
Paste the encoded values into the input box on a base64 decoding website.
Click on the "Decode" button to get the plain-text username and password.
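Alternatively, the values can be decoded directly in the terminal (a sketch; replace the placeholder with the copied value):
# decode a base64-encoded value printed by the previous kubectl command
echo '<ENCODED_VALUE>' | base64 --decode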
URL: {{HOST NAME}}/kibana
Replace the {{HOST NAME}} with your domain URL.
Check the Kibana version through the UI in the 'Help' section.
By default, users will have access to the default space.
To create a new space or edit, open the main menu, then click Stack Management → Spaces for an overview of your spaces. This view provides actions to create, edit, and delete spaces.
Switch to or create a new space where the dashboards will be configured.
We have existing data views and dashboards from the product environment that you can import and use. To import existing data views, follow the steps below:
Navigate to the “Stack Management” in the sidebar.
Click on “Saved Objects” under Kibana.
Import the data views file here. Data views contain the queries and charts needed to fetch data from the indexes. The file to be imported is given below:
If you need to create your own data views, follow this guide: Creating Data Views in Kibana.
All dashboards and data views will be imported here and can be viewed under “Saved Objects”.
Before you run the DIGIT HCM product, you need to set up the basic system data such as boundaries of the geography and the master data. In this document, we will load the base data for the server.
Download the seed_data_dump.sql file into your local system and save it in a folder/directory.
Get the database pod name by executing the following command and copy the NAME from the output. Refer to the screenshot below:
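A likely way to list it, assuming the database runs as a pod inside the cluster (adjust the namespace and search term to your setup):
# find the database pod and copy its NAME from the output
kubectl get pods -A | grep -i postgres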
We also need db_name and db_username. These values were configured earlier in the forked repository's infra-as-code/terraform/sample-aws/input.yaml file.
Navigate to infra-as-code/terraform/sample-aws, open input.yaml, and note down the db_username and db_name values configured earlier.
Get the {DB_HOST} value by executing the command below, then find and copy the "db-host" value from the output. Refer to the screenshot below for selecting and copying "db-host". This value will be used in the next commands.
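One way to locate it, assuming the value is stored in a ConfigMap as in the default DIGIT setup (a sketch, not necessarily the exact command from the original guide):
# search all ConfigMaps for the db-host entry
kubectl get configmaps -A -o yaml | grep -i db-host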
Now we have all the data required to run the command below which will load the data into the database.
Go to the folder/directory where the dump file was downloaded and open the terminal in that folder, or use the 'cd' command to change to that directory:
Replace {DATABASE_POD_NAME}, {DB_HOST}, {DB_USERNAME}, and {DB_NAME} in the command given below and run it:
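A sketch of the load command, assuming psql is available inside the database pod and the dump file is in the current directory (add -n <namespace> if the pod is not in the default namespace; the password for {DB_USERNAME} may be prompted):
# stream the dump into psql running inside the database pod
kubectl exec -i {DATABASE_POD_NAME} -- psql -h {DB_HOST} -U {DB_USERNAME} -d {DB_NAME} < seed_data_dump.sql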
After running the above command, the output should look like the example shown below:
Run the command below to delete and restart all the services:
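A likely form of the command, assuming the core services run in the egov namespace (adjust if yours differs); deleting the pods makes their deployments recreate them:
# delete all pods in the services namespace so they restart with the newly loaded data
kubectl delete pods --all -n egov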
Run the command below to check if all pods/services are running. If not, wait for some time and check again:
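For example, across all namespaces:
# list all pods and check the READY and STATUS columns
kubectl get pods -A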
Check if the egov-user service is up and running by using the following command:
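One way to check, assuming the service runs in the egov namespace:
# check the egov-user pod status
kubectl get pods -n egov | grep egov-user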
If the egov-user service is running with Ready 1/1, then connect to it by port forwarding:
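A sketch of the port-forward, assuming the pod name from the previous command and that egov-user listens on port 8080 (adjust the pod name, namespace, and port to your setup):
# forward local port 8080 to the egov-user pod
kubectl port-forward -n egov {egov-user-pod-name} 8080:8080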
Import the below curl in Postman or execute it in another terminal window:
Replace the username, password, and tenantId with the proper values (keep tenantId as 'mz' if the master data was loaded into the DB unchanged).
This step involves the execution of the Postman collection for the minimum setup data required to run a campaign from the field worker app.
Before you start loading the project data, execute the below command to restart the Zuul gateway.
Check if all the services are up and running by using the following command:
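As before, one way to check across all namespaces:
kubectl get pods -A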
If all the services are running with Ready 1/1, then restart the Zuul service by using the command given below:
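A likely way to restart it, assuming the gateway deployment is named zuul and runs in the egov namespace (names may differ in your setup):
# restart the Zuul gateway deployment
kubectl rollout restart deployment zuul -n egov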
All file examples in this document refer to the default branches in the health campaign DevOps and configuration repositories. If you have replaced these repositories with your fork or clone, refer to your own repositories instead.
Create an environment variable file and add the below variables in Postman:
Click on New, and Environment, then add the following variables:
URL - domain_name provided in infra-as-code/terraform/sample-aws/input.yaml
e.g. https://{domain_name}
tenantId - mz
apiUserName and apiPassword - newly created superuser credentials
startDate and endDate in epoch format - epoch converter
boundaryCode - use the default value (VFTw0jbRf1y) if Master data is unchanged
Import the seed data script
This collection includes all the scripts to create users, projects, staff, and product variants.
Import the HCM setup script in Postman - import guide
Choose the new environment created in the environment tab:
Once the environment is selected, click on the imported HCM setup collection and click Run:
Localisation is required to load and show data in different available languages. You can add localisation for English, French, and Portuguese languages by following the instructions given below. If only one specific language is required, download that language-specific collection and run it.
Create a port forward to the localisation pod by executing the below command:
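A sketch of the port-forward, assuming the localisation service pod runs in the egov namespace and listens on port 8080 (adjust the pod name, namespace, and port to your setup):
# forward local port 8080 to the localisation pod
kubectl port-forward -n egov {localisation-pod-name} 8080:8080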
Replace the URL variable in the Postman environment with http://localhost:8080.
Define locale code for localeEnglish, localeFrench and localePortuguese in the environment file.
localeEnglish - en_MZ
localeFrench - fr_MZ
localePortuguese - pt_MZ
Download the below file for all localisations (English, French, and Portuguese) data to be loaded at once:
Import the downloaded file into Postman and select all language localisation collections and click on run. See the screenshot below for reference:
To add localisations specific to the English language only, download the file given below:
Import the downloaded file into Postman, select the English collection, and click on run. See the screenshot below for reference:
To add localisations specific to the French language only, download the file given below:
Import the downloaded file into Postman, select the French collection, and click on run. See the screenshot below for reference:
To add localisations specific to the Portuguese language only, download the file given below:
Import the downloaded file into Postman, select the Portuguese collection, and click on run. See the screenshot below for reference:
Refer to the HCM Console User manual to set up a Campaign through HCM Console.
Help Section:
Ensure that the Filestore Service is running properly. If it's not, refer to this link for troubleshooting.
Create new products and use them when configuring delivery rules. Avoid using default values in the quick setup flow. Delete the default resources, click on "Add Resources," and then select the items you want to deliver.
To understand how to create dashboards in Kibana, refer to this guide: Create a Dashboard of Panels.
To edit existing dashboards, open the desired dashboard.
Click on the ‘Edit’ button.
For each chart you want to edit, click on “Edit Visualization”.
Check the chart type and data view from which data is getting fetched as shown in the image below:
Click on “Edit Lens” to get an overview of the available and selected fields in the respective data view.
You can go to the respective data view under Stack Management -> Saved Objects, update the indexes, and add a timestamp filter for the respective chart.
View/Edit Queries used to get the metric data
After going back to the “Edit Visualization”, you can see the metrics that are shown in the chart here as shown in the image below:
Click on the desired metric, for which you want to view/edit the query.
View Data Sources
To see which index a data view is pulling the data from, check the respective data views in Stack Management -> Data Views.
If you need to create your own data views, follow this guide: Creating Data Views in Kibana.
For example, if you click on “Edit Visualization” on a chart, you will see that the table chart is getting data from the DV-PT-PJT data view.
You can then view the DV-PT-PJT data, check the indices from where the data is getting pulled, and add a timestamp filter according to your requirements.