Setting up localisation strings
This guide walks through inserting basic localisation strings for core DIGIT modules after installation. Localisation is currently a separate post-install step: localisation data is entered in bulk via REST API calls, and a Postman collection is available to facilitate this process.
The releasekit repository contains all the localisation strings separated per module.
Base localisation strings are provided in the baseline folder. Localization is done per module per release. New strings in each release are contained in the respective release version folder. Depending on what modules have been installed, the localization strings have to be collated and then seeded using Postman Scripts.
For example, if DIGIT v2.7 with the PGR module has been installed, the localization strings for the PGR module have to be collated in the following order in JSON:
Baseline localization strings
v2.3
v2.4
v2.5
v2.6
v2.7
For convenience, a consolidated JSON file per module is created with each release under the consolidated folder. To add the messages, copy the JSON string of one module, paste it into the body of the upsert request, and hit upsert. Repeat this for each module.
Download the Postman collection. Set up an environment in Postman and add the following variables:
authToken
tenantId
Log in to DIGIT as a citizen user from the browser. To get the auth token, right-click on the page and go to Inspect > Network, then check the request payload under RequestInfo. Here you will find a field named authToken containing the token string. Paste it into the value field of the authToken variable in Postman and click Save.
Add the required localisation messages for each module from the releasekit consolidated folder to the Postman request body, then run the Insert Localization script.
Run each module separately; otherwise, the server will throw a 40x error.
The modules to set up depend on what has been installed as part of DIGIT. For DIGIT Core, localisation must be set up for the user module.
Search endpoint: domain/localization/messages/v1/_search
Upsert endpoint: domain/localization/messages/v1/_upsert
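For reference, the upsert request body generally follows the shape sketched below; the tenantId, module code, locale and message shown here are placeholders, and the actual message arrays should be copied from the releasekit consolidated files:

{
  "RequestInfo": { "authToken": "{{authToken}}" },
  "tenantId": "pb",
  "messages": [
    {
      "code": "CS_COMMON_SUBMIT",
      "message": "Submit",
      "module": "rainmaker-pgr",
      "locale": "en_IN"
    }
  ]
}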
Adding data to MDMS-V2
MDMS v2 exposes APIs for defining schemas, searching schemas, and adding master data against the defined schemas. The data is now stored in PostgreSQL tables rather than in GitHub. MDMS v2 also exposes the v1 search API, which fetches data from the database in the same format as the MDMS v1 search API to maintain backward compatibility.
Create schema - MDMS v2 allows you to create your schema with all the validations and properties supported by JSON Schema Draft 07, a version of the JSON Schema specification that provides a standardized way to validate the format and content of JSON data.
Search schema - MDMS v2 contains an API that allows users to search schemas using criteria like the tenantId, schema code, and unique identifiers.
Create data - MDMS v2 allows users to create data as per the defined schemas.
Search data - MDMS v2 provides two search APIs - v1 and v2. The v1 search API is fully backwards compatible.
Update data - MDMS v2 allows users to update the master data fields.
Fallback functionality - Both search APIs have implemented fallback where if data is not found for a particular tenant, the services fall back to the parent tenant(s) and return the response. If data is not found even for the parent tenant, an empty response is sent to the user.
Create schema - below is a sample schema definition for your reference -
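For illustration, a schema for a hypothetical TradeSubType master could look like the sketch below; the schema title, fields and the referenced TradeType schema code are assumptions made for this example, and the x-ref-schema entry uses the field path and schema code attributes described after the sketch:

{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "title": "TradeSubType",
  "description": "Schema for the trade sub type master data",
  "type": "object",
  "properties": {
    "code": { "type": "string" },
    "name": { "type": "string" },
    "tradeTypeCode": { "type": "string" },
    "active": { "type": "boolean" }
  },
  "required": ["code", "name", "tradeTypeCode"],
  "x-unique": ["code"],
  "x-ref-schema": [
    {
      "fieldPath": "tradeTypeCode",
      "schemaCode": "TradeLicense.TradeType"
    }
  ]
}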
To create a basic schema definition, define the following keywords:
$schema: specifies which draft of the JSON Schema standard the schema adheres to.
title and description: state the intent of the schema. These keywords don’t add any constraints to the data being validated.
type: defines the first constraint on the JSON data.
Now, properties can be added under the schema definition. In JSON Schema terms, properties is a validation keyword. When you define properties, you create an object where each property represents a key in the JSON data that’s being validated. You can also specify the required properties described in the object.
Additionally, there are two keys that are not part of the standard JSON Schema attributes:
x-unique: specifies the schema fields from which a unique identifier for each master data record is created.
x-ref-schema: specifies referenced data. This is useful in scenarios where the parent-child relationship needs to be established in the master data. For example, Trade Type can be a parent master data to Trade Sub Type. In the example provided, the "field path" indicates the JsonPath of the attribute in the master data that holds the unique identifier of the referenced parent. The "schema code" specifies the schema against which the referenced master data must be validated for existence.
Create data: MDMS v2 enables users to create data as per the defined schemas. Below is an example of data for the schema defined above:
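Continuing the illustrative TradeSubType sketch above (the codes and names here are placeholders), a data record could look like:

{
  "code": "GAS.LPG",
  "name": "LPG Trading",
  "tradeTypeCode": "GAS",
  "active": true
}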
This JSON data adheres to the defined schema structure and can be used as a reference when creating data within MDMS v2.
Define the schema for the master that you want to promote to MDMS v2.
Make sure that the schema includes a unique field. This field can also be made composite if needed, ensuring that the data added against that schema remains unique.
If the data does not have the scope for having unique identifiers, for example, complex masters like - https://github.com/egovernments/health-campaign-mdms/blob/QA/data/default/health/project-task-configuration.json - consider adding a redundant field which can serve as the unique identifier.
Hit the following API endpoint to create schema in the system - /mdms-v2/schema/v1/_create
Verify the created schema by searching for it using the following API endpoint - /mdms-v2/schema/v1/_search
Create the data for the specified schema using the following API endpoint - /mdms-v2/v2/_create/{schemaCode}
Verify the created data by searching for it using the following API endpoint - /mdms-v2/v2/_search
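As an illustration of the v2 search call, the request body typically carries a criteria object along the lines below; the field names and envelope are assumptions and should be verified against the MDMS v2 API contract, and the schema code and identifier continue the placeholder example above:

{
  "RequestInfo": { "authToken": "{{authToken}}" },
  "MdmsCriteria": {
    "tenantId": "pb",
    "schemaCode": "TradeLicense.TradeSubType",
    "uniqueIdentifiers": ["GAS.LPG"]
  }
}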
Seeding data in DIGIT post installation
After the installation of DIGIT, follow the guides below to set up data.
Steps to bootstrap DIGIT
Post-deployment, the application can now be accessed from the configured domain. This page provides the bootstrapping steps.
To try out the employee login, let us create a sample tenant, city and user, and assign the LME employee role through the seed script.
1. Port-forward the egov-user service running in the Kubernetes cluster to your localhost. This provides access to the egov-user service and allows users to interact with the API directly.
2. Seed the sample data
Ensure Postman is installed to run the following seed data API. If not, install it on your local machine.
Import the following Postman collection into Postman and run it. It contains the seed data that sets up sample test users and localisation data.
Execute the below commands to test your local machine's Kubernetes operations through kubectl.
You have successfully completed the DIGIT infrastructure and deployment setup and installed the DIGIT PGR module.
Use the below link in the browser -
Use the below credentials to log in to the complaints section:
Username: GRO
Password: eGov@4321
City: CITYA
By now the DIGIT setup on the cloud is complete. Open the URL you configured in your env.yaml (e.g. https://mysetup.digit.org) and create a grievance to ensure the deployed PGR module is working fine. Refer to the product documentation below for the steps.
Credentials:
Citizen: You can use your default mobile number (9999999999) to sign in using the default Mobile OTP 123456.
Employee: Username: GRO and password: eGov@4321
After creating a grievance and assigning it to the LME, capture a screenshot and share it to confirm your setup is working fine. After validating the PGR functionality, share the API response of the following request so that the correctness of the DIGIT PGR deployment can be assessed.
All done, we have successfully created infra on the cloud, deployed DIGIT, bootstrapped DIGIT, performed a transaction on PGR and finally destroyed the cluster.
Creating users on DIGIT post installation
This doc provides the steps to create and add users on the DIGIT platform post-installation.
Run port-forwarding of the egov-user service from the Kubernetes cluster to your localhost. This gives you access to the egov-user service, bypassing Zuul auth, so you can interact with the User API directly.
kubectl port-forward svc/egov-user 8080:8080 -n egov
You will see below text in the Terminal:
Forwarding from 127.0.0.1:8080 -> 8080
Forwarding from [::1]:8080 -> 8080
Ensure you have installed the Postman utility to run the following scripts. If not, install it on your local machine. Import the collection into Postman. Create an environment variable for "tenantId" and set its value to your tenant.
The collection creates four types of users and also provides a way to refresh the auth token:
Super User
System User
Citizen
Anonymous User
Run all the scripts in order.
The refresh auth token script logs into the server and refreshes the token. A sample script is provided for the employee user. Similar scripts can be copied for the citizen, super user, etc. for convenience.
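For orientation, the user object in such a create request generally carries at least the fields sketched below; the user name, role code and other values are placeholders, and the exact request envelope should be taken from the imported collection:

{
  "user": {
    "userName": "SUPERUSER1",
    "name": "Sample Super User",
    "password": "eGov@4321",
    "type": "EMPLOYEE",
    "mobileNumber": "9999999999",
    "tenantId": "{{tenantId}}",
    "roles": [
      { "code": "SUPERUSER", "name": "Super User", "tenantId": "{{tenantId}}" }
    ]
  }
}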
Setting up boundary hierarchies for tenants
The location module serves the boundary hierarchies for a tenant. Location is defined separately for every tenant (mostly ULBs) in DIGIT. Each tenant has a unique tenantId.
The tenantId key can be a combination of state, state.city, or state.city.ulb. The hierarchyType can be one of Revenue, Admin or Election. Multiple hierarchy types can be defined for the same city tenant.
Location data is stored in GitHub as part of MDMS. The egov-location folder needs to be created inside the city tenant folder (see the Amritsar example), and the boundary data is stored in the file <tenant>/egov-location/boundary-data.json.
Enter the boundary hierarchy data in this file in the appropriate branch of your forked MDMS repository. For example, if you want the changes to be visible in the Dev environment, add the data in the DEV branch of the MDMS repository.
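As an illustration of the nested structure (the field names here are assumptions based on typical boundary data and should be verified against the Amritsar sample; the codes match the localisation example below), boundary-data.json could look like:

{
  "tenantId": "pb.amritsar",
  "TenantBoundary": [
    {
      "hierarchyType": { "code": "REVENUE", "name": "REVENUE" },
      "boundary": {
        "code": "PB.AMRITSAR",
        "name": "Amritsar",
        "label": "City",
        "children": [
          {
            "code": "Z1",
            "name": "Zone 1",
            "label": "Zone",
            "children": [
              {
                "code": "B1",
                "name": "Block 1",
                "label": "Block",
                "children": [
                  {
                    "code": "SUN04",
                    "name": "Ajitnagar area 1",
                    "label": "Locality"
                  }
                ]
              }
            ]
          }
        ]
      }
    }
  ]
}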
Follow the Git flow process to ensure high-quality data. Make sure pull requests are raised, reviewed and merged, especially where multiple people are working on the same branch.
Restart the MDMS service in your environment once the new data is available in the desired branch.
The localisation code should follow the format: tenantId_moduleName_hierarchyType_cityCode_zoneCode_blockCode_areaCode
For example, to find the localisation code for "Ajitnagar area 1", assuming tenantId as "pb" and hierarchyType as "Revenue":
moduleName: EGOV-LOCATION
cityCode: PB.AMRITSAR
zoneCode: Z1
blockCode: B1
areaCode: SUN04
So finally the localisation label code for "Ajitnagar area 1" would be PB_EGOV-LOCATION_REVENUE_PB.AMRITSAR_Z1_B1_SUN04 (visible in the UI if no localisation is present).
The entire hierarchy can be defined in a nested way as children. A sample of location data for the Amritsar ULB is available in the MDMS repository.
Users also need to upsert localisation codes for any new boundary data in MDMS; otherwise, the auto-generated label codes are shown instead of the boundary names in the UI. Check the localisation guide above for more information on how to perform localisation.
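For instance, the message for the code derived above could be upserted with an entry along these lines (the module name and locale are placeholders):

{
  "code": "PB_EGOV-LOCATION_REVENUE_PB.AMRITSAR_Z1_B1_SUN04",
  "message": "Ajitnagar area 1",
  "module": "rainmaker-common",
  "locale": "en_IN"
}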