Configure role-based user access and map actions to roles
DIGIT is an API-based platform where each API denotes a DIGIT resource. The primary job of Access Control Service (ACS) is to authorise end-users based on their roles and provide access to the DIGIT platform resources. Access control functionality is essentially based on the following points:
Actions: Actions are events performed by a user. This can be an API endpoint or a front-end event. Actions are defined as an MDMS master.
Roles: Roles are assigned to users. A single user can hold multiple roles. Roles are defined in MDMS masters.
Role-Action: Role-action mappings link Actions to Roles. Based on the role-action mapping, the Access Control Service identifies the actions applicable to a given role.
Before you proceed with the configuration, make sure the following pre-requisites are met -
Java 17
MDMS service is up and running
Serve the applicable actions for a user based on user roles.
On each action performed by a user, access control looks at the user's roles and validates actions that map with the role.
Support tenant-level role action - For instance, an employee from Amritsar can have the role of APPROVER for other ULBs like Jalandhar and hence will be authorised to act as APPROVER in Jalandhar.
Deploy the latest version of the Access Control Service
Note: This video will give you an idea of how to deploy any DIGIT service. Further, you can find the latest builds for each service in our latest release document here.
Deploy the service to fetch the Role-Action mappings.
Note: This video will give you an idea of how to deploy any DIGIT service. Further, you can find the latest builds for each service in our latest release document here.
Define the roles:
Add the actions (URL)
Add the role action mapping
The details about the fields in the configuration can be found in the Swagger contract
Any service that requires authorisation can leverage the functionalities provided by the access control service.
To add a new service to the platform, simply update its role action mapping in the master data. The Access Control Service will handle authorisation each time the microservice API is called.
To integrate with the Access Control Service, the role-action mapping has to be configured (added) in the MDMS service.
The service needs to call the /actions/_authorize API of the Access Control Service to check authorisation for any request.
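For illustration, below is a minimal sketch of such an authorisation call using Spring's RestTemplate. The host, any context-path prefix, and the payload field names are assumptions; take the exact request schema from the Swagger contract referenced above.

```java
import java.util.List;
import java.util.Map;
import org.springframework.web.client.RestTemplate;

public class AuthorizeClient {

    // Host and payload field names below are illustrative assumptions;
    // refer to the Swagger contract for the exact request schema.
    private static final String ACCESS_CONTROL_HOST = "http://access-control:8080";

    private final RestTemplate restTemplate = new RestTemplate();

    public boolean isAuthorized(Map<String, Object> requestInfo, String uri, List<String> roleCodes) {
        Map<String, Object> authorizeRequest = Map.of(
                "RequestInfo", requestInfo,
                "roles", roleCodes,   // roles held by the logged-in user
                "uri", uri            // the action (API endpoint) being accessed
        );
        try {
            restTemplate.postForEntity(ACCESS_CONTROL_HOST + "/actions/_authorize",
                    authorizeRequest, Map.class);
            return true;              // 2xx response: the role-action mapping allows the call
        } catch (Exception e) {
            return false;             // non-2xx (e.g. 403): the action is not mapped to the user's roles
        }
    }
}
```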
Title | Link |
---|---|
/actions/_authorize | |
Boundary service provides APIs to create Boundary entities, define their hierarchies, and establish relationships within those hierarchies. You can search for boundary entities, hierarchy definitions, and boundary relationships. However, you can only update boundary entities and relationships.
Prior knowledge of Java/J2EE.
Prior knowledge of Spring Boot.
Prior knowledge of REST APIs and related concepts like path parameters, headers, JSON etc.
Prior knowledge of Git.
Advanced knowledge of operating JSON data would be an added advantage to understanding the service.
Create Boundary entity: It introduces functionality to define your boundary entity with all validations and properties supported by GeoJSON. Currently, only the geometry types Polygon and Point are supported.
Search Boundary entity: It has APIs to search boundaries based on the tenantid & codes, both being mandatory.
Update Boundary entity: This allows updating the geometry details of a boundary entity.
Create Boundary Hierarchy-definition: It allows defining boundary hierarchy definitions against a particular tenantId and hierarchyType which then can be referenced while creating boundary relationships.
Search Boundary Hierarchy-definition: boundary-service supports searching for hierarchy definitions based on tenantId and HierarchyType where tenantId is mandatory. In case the hierarchyType is not provided, it returns all hierarchy definitions for the given tenantId.
Create Boundary Relationship: It supports defining relationships between existing boundary entities according to the established hierarchy. It requires the tenantId, code, hierarchyType, boundaryType, and parent fields. The tenantId and code combine to uniquely determine a boundary entity, while tenantId and hierarchyType combine to define the hierarchy used in establishing the relationship between the boundary entity and its parent. It verifies if the parent relationship is already established before creating a new one. It also checks if the specified boundaryType is a direct descendant of the parent boundaryType according to the hierarchy definition.
Search Boundary Relationship: This functionality supports searching the boundary relationships based on the given params -
tenantId
hierarchyType
boundaryType
codes
includeChildren
includeParents
where tenantId and hierarchyType are mandatory and the rest are optional.
Update Boundary Relationship: This allows updating the parent boundary relationship within the same level as per the hierarchy.
/boundary/_create - Takes RequestInfo and Boundary in the request body, where Boundary has all the attributes that define the boundary.
/boundary/_search - Takes RequestInfo in the request body and the search criteria fields (refer to the functionality above for the exact fields) as params, and returns the boundaries matching the provided search criteria.
/boundary/_update - Takes RequestInfo and Boundary in the request body, where Boundary has all the information that needs to be updated, and returns the updated boundary.
/boundary-hierarchy-definition/_create - Takes RequestInfo and the boundary hierarchy definition in the request body, where the BoundaryHierarchy object has all the information for the hierarchy definition being created.
/boundary-hierarchy-definition/_search - Takes RequestInfo and BoundaryTypeHierarchySearchCriteria in the request body and returns the boundary hierarchy definitions matching the provided search criteria.
/boundary-relationships/_create - Takes RequestInfo and BoundaryRelationship in the request body, where BoundaryRelationship has all the information required to define a relationship between two boundaries.
/boundary-relationships/_search - Takes RequestInfo in the request body; the search criteria fields (refer to the functionality above for the exact fields) are passed as params, and the matching boundary relationships are returned in the response.
/boundary-relationships/_update - Takes RequestInfo and BoundaryRelationship in the request body, updates the fields given in BoundaryRelationship, and returns the updated data.
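As an illustration of the relationship search described above, the sketch below posts RequestInfo in the body and passes the mandatory tenantId and hierarchyType (plus an optional flag) as query params. The host, tenant values and any context-path prefix are assumptions.

```java
import java.util.Map;
import org.springframework.web.client.RestTemplate;

public class BoundaryRelationshipSearch {

    // Host and tenant values are illustrative; the endpoint and mandatory
    // params (tenantId, hierarchyType) follow the functionality described above.
    private static final String BOUNDARY_HOST = "http://boundary-service:8080";

    public static void main(String[] args) {
        String url = BOUNDARY_HOST
                + "/boundary-relationships/_search"
                + "?tenantId=pb.amritsar&hierarchyType=ADMIN&includeChildren=true";

        // RequestInfo is sent in the body; the search criteria go as query params.
        Map<String, Object> body = Map.of("RequestInfo", Map.of("apiId", "Rainmaker"));

        Object response = new RestTemplate().postForObject(url, body, Map.class);
        System.out.println(response);
    }
}
```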
DIGIT building blocks (as in LEGO pieces)
MDMS-client
Services-common
Rate limiting in gateways is a crucial configuration to manage traffic and ensure service availability. By implementing a rate limiter, we can control the number of requests a client can make to the server within a specified time frame. This protects the underlying services from being overwhelmed by excessive traffic, whether malicious or accidental.
The configuration typically involves -
Replenish Rate: The rate at which tokens are added to the bucket. For example, if the replenish rate is 2 tokens per second, two tokens are added to the bucket every second.
Burst Capacity: The maximum number of tokens that the bucket can hold. This allows for short bursts of traffic.
KeyResolver: A KeyResolver is an interface used to determine a key for rate-limiting purposes.
NOTE: We currently provide two options for keyResolver, and if neither of them is specified, Spring Cloud will default to PrincipalNameKeyResolver, which retrieves the Principal from the ServerWebExchange and calls Principal.getName().
ipKeyResolver: Resolves the key based on the IP address of the request.
userKeyResolver: Resolves the key based on the user UUID of the request.
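For reference, here is a minimal sketch of how a custom KeyResolver bean can be declared in Spring Cloud Gateway. The bean name is illustrative, and the gateway's built-in ipKeyResolver/userKeyResolver implementations may differ in detail.

```java
import org.springframework.cloud.gateway.filter.ratelimit.KeyResolver;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import reactor.core.publisher.Mono;

@Configuration
public class RateLimitKeyResolverConfig {

    // Resolves the rate-limiting key from the client's IP address, in the spirit of
    // the ipKeyResolver option mentioned above. Behind a proxy the remote address
    // may need to be taken from a forwarded header instead.
    @Bean
    public KeyResolver ipKeyResolver() {
        return exchange -> Mono.just(
                exchange.getRequest().getRemoteAddress().getAddress().getHostAddress());
    }
}
```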
Let's say we have a rate limiter configured with:
replenishRate: 2 tokens per second
burstCapacity: 5 tokens
This means:
2 tokens are added to the bucket every second.
The bucket can hold a maximum of 5 tokens.
Scenario: A user makes requests at different intervals.
Initial State: The bucket has 5 tokens (full capacity).
First Request: The user makes a request and consumes 1 token. 4 tokens remain.
Second Request: The user makes another request and consumes 1 more token. 3 tokens remain.
Third Request: The user waits 1 second (2 tokens added) and then makes a request. The bucket has 4 tokens (3 remaining + 2 added - 1 consumed).
Let's consider a scenario where a user makes multiple requests in quick succession.
Configuration:
replenishRate: 1 token per second
burstCapacity: 3 tokens
Scenario: A user makes 4 requests in rapid succession.
Initial State: The bucket has 3 tokens (full capacity).
First Request: Consumes 1 token. 2 tokens remain.
Second Request: Consumes 1 token. 1 token remains.
Third Request: Consumes 1 token. 0 tokens remain.
Fourth Request: There are no tokens left, so the request is denied. The user must wait for more tokens to be added.
After 1 second, 1 token is added to the bucket. The user can make another request.
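The arithmetic above can be mimicked with a tiny token-bucket model. This is only an illustration of the replenishRate/burstCapacity behaviour, not the gateway's actual implementation.

```java
public class TokenBucketDemo {

    // Illustrative token-bucket model matching the scenario above:
    // replenishRate = 1 token/second, burstCapacity = 3 tokens.
    static final int REPLENISH_RATE = 1;
    static final int BURST_CAPACITY = 3;

    static double tokens = BURST_CAPACITY;   // bucket starts full

    static boolean tryConsume() {
        if (tokens >= 1) {
            tokens -= 1;
            return true;                      // request allowed
        }
        return false;                         // request rejected (rate limited)
    }

    static void refill(double elapsedSeconds) {
        tokens = Math.min(BURST_CAPACITY, tokens + REPLENISH_RATE * elapsedSeconds);
    }

    public static void main(String[] args) {
        // Four requests in rapid succession: the first three pass, the fourth is denied.
        for (int i = 1; i <= 4; i++) {
            System.out.println("Request " + i + (tryConsume() ? " allowed" : " denied"));
        }
        refill(1);                            // wait 1 second -> 1 token added
        System.out.println("Request 5 " + (tryConsume() ? "allowed" : "denied"));
    }
}
```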
Here’s a practical example using Spring Cloud Gateway with Redis Rate Limiting.
Configuration: In Routes.properties you can set rate limiting as
Explanation:
Replenish Rate: 5 tokens per second.
Burst Capacity: 10 tokens.
Behavior:
A user can make up to 10 requests instantly (burst capacity).
After consuming the burst capacity, the user can make 5 requests per second (replenish rate).
API Service Rate Limiting
An API service wants to ensure clients do not overwhelm the server with too many requests. They set up rate limits as follows:
Replenish Rate: 100 tokens per minute.
Burst Capacity: 200 tokens.
Scenario:
A client can make 200 requests instantly.
After the burst capacity is exhausted, the client can make 100 requests per minute.
If any client tries to make more requests than allowed, they receive a response indicating they are being rate-limited.
Prevents Abuse: Limits the number of requests to prevent abuse or malicious attacks (e.g., DDoS attacks).
Fair Usage: Ensures fair usage among all users by preventing a single user from consuming all the resources.
Load Management: Helps manage server load and maintain performance by controlling the rate of incoming requests.
Improved User Experience: Prevents server overload, ensuring a smoother and more reliable experience for all users.
Rate limiting is crucial for traffic management, fair usage, and server resource protection. By setting parameters like replenishRate and burstCapacity, you can regulate request flow and manage traffic spikes efficiently. In Spring Cloud Gateway, the Redis Rate Limiter filter offers a robust solution for implementing rate limiting on your routes.
The objective of the audit service is listed below -
To provide a one-stop framework for signing data i.e. creating an immutable data entry to track activities of an entity. Whenever an entity is created/updated/deleted the operation is captured in the data logs and is digitally signed to protect it from tampering.
Prior knowledge of Java/J2EE
Prior knowledge of SpringBoot
Prior knowledge of PostgreSQL
Prior knowledge of REST APIs and related concepts like path parameters, headers, JSON etc.
The audit service will be parsing all the persister configs so that it can process data received by the persister and create audit logs out of it.
Step 1: Add the following metrics to the existing persister configs -
Step 2: If a custom implementation of the ConfigurableSignAndVerify interface is present, provide the signing algorithm implementation name as part of the audit.log.signing.algorithm property. For example, if the signing algorithm is HMAC, the property will be set as follows -
Step 3: Set the egov.persist.yml.repo.path property to the location of the persister configs.
Step 4: Run the audit-service application along with the persister service.
Definitions
Config file - A YAML (xyz.yml) file which contains persister configuration for running audit service.
API - A REST endpoint to post audit logs data.
When audit-service create API is hit, it will validate request size, keyValueMap and operationType.
Upon successful validation, it will choose the configured signer and sign entity data.
Once the audit logs are signed and ready, it will send them to the audit-create topic.
Persister will listen on this topic and persist the audit logs.
Add the required keys for enabling audit service in persister configs.
Deploy the latest version of the Audit service and Persister service.
Add Role-Action mapping for APIs.
The audit service is used to push signed data for tracking each and every create/modify/delete operation done on database entities.
Can be used to have tamper-proof audit logs for all database transactions.
Replaying events in chronological order will lead to the current state of the entity in the database.
To integrate, the host of the audit-service module should be overwritten in the helm chart.
audit-service/log/v1/_create should be added as the create endpoint for the config added.
audit-service/log/v1/_search should be added as the search endpoint for the config added.
1. URI: The format of the API to be used to create audit logs using the audit service is as follows: audit-service/log/v1/_create
Body: The body consists of 2 parts: RequestInfo and AuditLogs.
Sample Request Body -
2. URI: The format of the API to be used to search audit logs using audit-service is as follows: audit-service/log/v1/_search
Body: The body consists of RequestInfo; the search criteria are passed as query params.
Sample curl for search -
Postman Collection - Audit Service Postman Collection
Play around with the API's : DIGIT-Playground
This document highlights the changes needed in a module to support the user privacy feature.
Prior Knowledge of Java/J2EE.
Prior Knowledge of Spring Boot.
MDMS Service
Encryption Service
Prior Knowledge of React Js / Javascript
Upgrade services-common library version.
Upgrade tracer library version.
After upgrading the above library versions, there is a chance that an issue may occur with RequestInfo creation because of a change in variable type. Please go through all Java files, including JUnit test cases, before pushing the changes.
The above library versions need to be upgraded in the dependent modules as well. For example: if the water service calls the property service and the property service calls the user service for owner details, then the versions of the above libraries need to be upgraded in the property service pom file as well.
At the service or calculator-service level, where the demand generation process takes place, the payer details must be passed in plain form. These user details must be fetched through the SYSTEM user.
Create a system user in the particular environment and make sure that the system user has the role INTERNAL_MICROSERVICE_ROLE. Use the curl mentioned below:
Do not create the system user with INTERNAL_MICROSERVICE_ROLE if it already exists. Use the UUID of that user to get plain result data.
Mention the UUID of the system user in the environment file and in the application.properties file of the service. Example:
Environment file:
egov-internal-microservice-user-uuid: b5b2ac70-d347-4339-98f0-5349ce25f99f
Application properties file:
egov.internal.microservice.user.uuid=b5b2ac70-d347-4339-98f0-5349ce25f99f
Create a method/function which calls the user search API to get the user details in plain form. In that call, pass the userInfo of the user with INTERNAL_MICROSERVICE_ROLE. For reference, follow the code snippet below.
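The sketch below is a hedged reference only; the user-service host, endpoint path and payload field names are assumptions and should be checked against your environment and the user service contract.

```java
import java.util.List;
import java.util.Map;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Service;
import org.springframework.web.client.RestTemplate;

@Service
public class PlainUserService {

    // UUID of the system user having INTERNAL_MICROSERVICE_ROLE, read from application.properties.
    @Value("${egov.internal.microservice.user.uuid}")
    private String internalMicroserviceUserUuid;

    // Host and endpoint path are assumptions; use the user-service host configured in your environment.
    @Value("${egov.user.host:http://egov-user:8080}")
    private String userHost;

    private final RestTemplate restTemplate = new RestTemplate();

    public Map<?, ?> searchUserInPlainForm(String ownerUuid, String tenantId) {
        // RequestInfo carries the userInfo of the system user so that the
        // user service returns the details in plain (decrypted) form.
        Map<String, Object> requestInfo = Map.of(
                "apiId", "internal",
                "userInfo", Map.of(
                        "uuid", internalMicroserviceUserUuid,
                        "roles", List.of(Map.of("code", "INTERNAL_MICROSERVICE_ROLE", "tenantId", tenantId))));

        Map<String, Object> searchRequest = Map.of(
                "RequestInfo", requestInfo,
                "tenantId", tenantId,
                "uuid", List.of(ownerUuid));

        return restTemplate.postForObject(userHost + "/user/_search", searchRequest, Map.class);
    }
}
```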
For a report where PII data is present and decryption is required, the report definition must have UUID values from the user table in the query result.
The Source column section of the report definition must mention the UUID column, and for that entry the showColumn flag should be set to false.
The base SQL query in the report config should be written in such a way that the query returns the user UUID.
In the config, for name: uuid, replace uuid with the name of the column containing the user's UUID in the query's output.
Example:
In the report config where PII data is present and decryption is required, add the new field decryptionPathId to the report config. The Security Policy MDMS file must have the model configuration for that particular report, and the value of the key model in the Security Policy MDMS file must be the same as the decryptionPathId in the report.
Example:-
For a searcher result where PII data is present and decryption is required, the searcher definition must have UUID values from the user table in the query result. The base SQL query in the searcher config should be written in such a way that the query returns the user UUID.
In the searcher result, where data comes from the user table and needs to be returned in decrypted form, the field decryptionPathId must be present in the searcher config. The Security Policy MDMS file must have the model configuration for that particular searcher config, and the value of the key model in the Security Policy MDMS file must be the same as the decryptionPathId in the searcher config.
For detailed information on the Security Policy MDMS file, refer to this document.
To Do
The enc-client library is a supplementary java library provided to support encryption-related functionalities so that every service does not need to pre-process the request before calling the encryption service.
MDMS Service
Encryption Service
Kafka
The MDMS configurations explained below are fetched by this library at boot time. So after you make changes in the MDMS repo and restart the MDMS service, you would also need to RESTART THE SERVICE which has imported the enc-client library. For example, the report service is using the enc-client library so after making configuration changes to Security Policy pertaining to any report, you will have to restart the report service.
Encrypt a JSON Object - The encryptJson function of the library takes any Java Object as an input and returns an object which has encrypted values of the selected fields. The fields to be encrypted are selected based on an MDMS configuration.
This function requires the following parameters:
Java/JSON object - The object whose fields will get encrypted.
Model - It is used to identify the MDMS configuration to be used to select fields of the provided object.
Tenant Id - The encryption key will be selected based on the passed tenantId.
Encrypt a Value - The encryptValue function of the library can be used to encrypt single values. This method also requires a tenantId parameter.
Decrypt a JSON Object - The decryptJson function of the library takes any Java Object as an input and returns an object that has plain/masked or no values of the encrypted fields. The fields are identified based on the MDMS configuration. The returned value (plain/masked/null) of each attribute depends on the user's role and whether it is a PlainAccess request or a normal request. These configurations are part of the MDMS.
This function requires the following parameters:
Java/JSON object - The object containing the encrypted values that are to be decrypted.
Model - It is used to select a configuration from the list of all available MDMS configurations.
Purpose - It is a string parameter that passes the reason of the decrypt request. It is used for Audit purposes.
RequestInfo - The requestInfo parameter serves multiple purposes:
User Role - A list of user roles are extracted from the requestInfo parameter.
PlainAccess Request - If the request is an explicit plain access request, it is to be passed as part of the requestInfo. It will contain the fields that the user is requesting for decryption and the id of the record.
While decrypting a Java object, this method also audits the request.
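For illustration, the sketch below mirrors the parameters described above through a small local interface; the real enc-client class names and method signatures may differ, so treat every name here as an assumption.

```java
import java.util.Map;

public class EncClientUsageSketch {

    // This interface only mirrors the parameters described above
    // (object, model, tenantId / object, model, purpose, requestInfo).
    interface EncClient {
        Object encryptJson(Object json, String model, String tenantId);
        Object decryptJson(Object json, String model, String purpose, Map<String, Object> requestInfo);
    }

    static Object saveAndReturn(EncClient encClient, Map<String, Object> user,
                                Map<String, Object> requestInfo, String tenantId) {
        // Encrypt the fields selected by the (assumed) "User" Security Policy before persisting.
        Object encrypted = encClient.encryptJson(user, "User", tenantId);

        // ... persist 'encrypted' to the database here ...

        // Decrypt before responding; the result is plain or masked depending on the
        // caller's roles and whether this is a plain-access request.
        return encClient.decryptJson(encrypted, "User", "UserSearch", requestInfo);
    }
}
```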
All the configurations related to the enc-client library are stored in MDMS. These master data are stored in the DataSecurity module. It has two types of configurations:
Masking Patterns
Security Policy
The masking patterns for different types of attributes(mobile number, name, etc.) are configurable in MDMS. It contains the following attributes:
patternId - It is the unique pattern identifier. This id is referred to in the SecurityPolicy MDMS.
pattern - This defines the actual pattern according to which the value will be masked.
The Security Policy master data contains the policy used to encrypt and decrypt JSON objects. Each of the Security Policy contains the following details:
model - This is the unique identifier of the policy.
uniqueIdentifier - The field defined here should uniquely identify records passed to the decryptJson function.
attributes - This defines a list of fields from the JSON object that needs to be secured.
roleBasedDecryptionPolicy - This defines attribute-level role-based policy. It will define visibility for each attribute.
The visibility is an enum with the following options:
PLAIN - Show text in plain form.
MASKED - The returned text will contain masked data. The masking pattern will be applied as defined in the Masking Patterns master data.
NONE - The returned text will not contain any data. It would contain string like “Confidential Information”.
It defines what level of visibility the decryptJson function should return for each attribute.
The Attribute defines a list of attributes of the model that are to be secured. The attribute is defined by the following parameters:
name - This uniquely identifies the attribute out of the list of attributes for a given model.
jsonPath - It is the json path of the attribute from the root of the model. This jsonPath is NOT the same as the Jayway JsonPath library. This uses / and * to define the json paths.
patternId - It refers to the pattern to be used for masking which is defined in the Masking Patterns master.
defaultVisibility - It is an enum configuring the default level of visibility of that attribute. If the visibility is not defined for a given role, then this defaultVisibility will apply.
This parameter is used to define the unique identifier of that model. It is used for the purpose of auditing the access logs. (This attribute’s jsonPath should be at the root level of the model.)
It defines attribute-level access policies for a list of roles. It consists of the following parameters:
roles - It defines a list of role codes for which the policy will get applied. Please make sure not to duplicate role codes anywhere in the other policy. Otherwise, any one of the policies will get chosen for that role code.
attributeAccessList - It defines a list of attributes for which the visibility differs from the default for those roles.
There are two levels of visibility:
First level Visibility - It applies to normal search requests. The search response could have multiple records.
Second level Visibility - It is applied only when a user explicitly requests for plain access of a single record with a list of fields required in plain.
Second level visibility can be requested by passing plainAccessRequest in the RequestInfo.
Any user will be able to get plain access to the secured data (citizen's PII) by requesting through the plainAccessRequest parameter. It takes the following parameters:
recordId - It is the unique identifier of the record that is requested for plain access.
fields - It defines a list of attributes that are requested for plain access.
Every decrypt request is audited. Based on the uniqueIdentifier defined as part of the Security Policy, it lists out the identifiers of the records that were decrypted as part of the request.
Each Audit Object contains the following attributes:
The Indexer Service operates independently and is responsible for all indexing tasks on the DIGIT platform. It processes records from specific Kafka topics and utilizes the corresponding index configuration defined in YAML files by each module.
Objectives:
Efficiently read and process records from Kafka topics.
Retrieve and apply appropriate index configurations from YAML files.
To provide a one-stop framework for indexing the data to Elasticsearch.
To create provisions for indexing live data, reindexing from one index to the other and indexing legacy data from the data store.
Before you proceed with the configuration, make sure the following pre-requisites are met -
Prior knowledge of Java/J2EE
Prior knowledge of SpringBoot
Prior knowledge of Elasticsearch
Prior knowledge of REST APIs and related concepts like path parameters, headers, JSON etc.
Prior knowledge of Kafka and related concepts like Producer, Consumer, Topic etc.
Performs three major tasks namely: LiveIndex, Reindex and LegacyIndex.
LiveIndex: Task of indexing the live transaction data on the platform. This keeps the es data in sync with the DB.
Reindex: Task of indexing data from one index to the other. ES already provides this feature, the indexer does the same but with data transformation.
LegacyIndex: Task of indexing legacy data from the tables to ES.
Provides flexibility to index the entire object, a part of the object or an entirely different custom object all using one input JSON from modules.
Provides features for customizing index JSON by field mapping, field masking, data enrichment through external APIs and data denormalization using MDMS.
One-stop shop for all the es index requirements with easy-to-write and easy-to-maintain configuration files.
Designed as a consumer to save API overhead. The consumer configs are written from scratch for complete control over consumer behaviour.
Step 1: Write the configuration as per your requirement. The structure of the config file is explained later in the same doc.
Step 3: Provide the absolute path of the checked-in file to the DevOps team. They will add it to the file-read path of egov-indexer by updating the environment manifest file, ensuring it is read at the time of the application's startup.
Step 4: Run the egov-indexer app. Since it is a consumer, it starts listening to the configured topics and indexes the data.
a) POST /{key}/_index
Receive data and index. There should be a mapping with the topic as {key} in index config files.
b) POST /_reindex
This is used to migrate data from one index to another index
c) POST /_legacyindex
This is to run the LegacyIndex job to index data from DB. In the request body, the URL of the service which would be called by the indexer service to pick data must be mentioned.
For legacy indexing and for LiveIndex of collection-service records, kafka-connect is used to handle part of pushing records to Elasticsearch. For more details, please refer to the document mentioned in the document list.
The objective of this service is to create a common point to manage all the email notifications being sent out of the platform. The notification email service consumes email requests from the Kafka notification topic and processes them to send them to a third-party service. Modules like PT, TL, PGR etc make use of this service to send messages through the Kafka Queue.
Before you proceed with the documentation, make sure the following pre-requisites are met -
Prior knowledge of Java/J2EE
Prior knowledge of SpringBoot
Prior knowledge of Third party API integration
Prior knowledge of REST APIs and related concepts like path parameters, headers, JSON etc
Prior knowledge of Kafka and related concepts like Producer, Consumer, Topic etc.
Provide a common platform to send email notifications to the user
Support localised email.
The egov-notification-mail service is a consumer which listens to the egov.core.notification.email topic, reads the message, and generates an email using the SMTP protocol. The service needs the sender's email configured. If the email id is not configured, the service fetches the email id by internally calling the egov-user service. Once the email is generated, the content is localised by the egov-localization service, after which the notification is sent to the email id.
Deploy the latest version of the notification email service.
Make sure the consumer topic name for email service is added in deployment configs
The email notification service is used to send out email notifications for all miscellaneous/ad-hoc services which citizens avail from ULBs.
Can perform service-specific business logic without impacting the other module.
In the future, if we want to expose the application to citizens then it can be done easily.
To integrate, the client service should send email requests to email notification consumer topics.
An eGov core application which handles uploading different kinds of files to the server, including images and other document types.
Prior Knowledge of Java/J2EE.
Prior Knowledge of Spring Boot.
Prior Knowledge of REST APIs and related concepts like path parameters, headers, JSON etc.
Prior knowledge of AWS and Azure.
The filestore application takes in a request object which contains an image/document or any other kind of file and stores it on disk/AWS/Azure depending upon the configuration; additional implementations can be written for the app interface to interact with any other remote storage.
The file to be uploaded is taken in as a multipart file, saved to the storage, and a UUID is returned as a unique identifier for that resource, which can be used to fetch the document later.
In case of images, the application creates three additional copies of the file (large, medium and small) for use as thumbnails or low-quality images in mobile applications.
The search API takes the UUID and tenantId as mandatory URL params and a few optional parameters, and returns the pre-signed URLs of the files from the server. In case of images, a single string containing multiple URLs separated by commas is returned, representing the different sizes of the stored image.
The application is present among the core group of applications available in the eGov-services git repository. It is a Spring Boot application but needs the Lombok extension added to your IDE to load it. Once the application is up and running, API requests can be posted to the URL and file IDs can be generated.
NOTE: In case of IntelliJ, the plugin can be installed directly; for Eclipse, the Lombok jar location has to be added in the eclipse.ini file in this format: -javaagent:lombok.jar.
For the API information, please refer to the Swagger YAML.
NOTE: The application needs at least one type of storage available for it to store the files - file storage, AWS S3 or Azure. More storage types can be added by extending the application interface.
IMPORTANT: For any of the file storage options to work, some application properties need to be configured.
DiskStorage:
The mount path of the disk should be provided in the following variable to save files to the disk: file.storage.mount.path=path.
The following are the variables that need to be populated based on the AWS/Azure account you are integrating with.
How to enable Minio SDC:
isS3Enabled = true(Should be true)
aws.secretkey = {minio_secretkey}
aws.key = {minio_accesskey}
fixed.bucketname = egov-rainmaker(Minio bucket name)
minio.source = minio
How to enable AWS S3:
isS3Enabled = true(Should be true)
aws.secretkey = {s3_secretkey}
aws.key = {s3_accesskey}
fixed.bucketname = egov-rainmaker(S3 bucket name)
minio.source = minio
AZURE:
isAzureStorageEnabled - informing the application whether Azure is available or not
azure.defaultEndpointsProtocol - type of protocol https
azure.accountName - name of the user account
azure.accountKey - secret key of the user account
NFS :
isnfsstorageenabled-informing the application whether NFS is available or not <True/False>
file.storage.mount.path - <NFS location, example /filestore>
source.disk - diskStorage - name of storage
disk.storage.host.url=<Main Domain URL>
Allowed formats to be uploaded: the allowed formats are given as a set of strings inside curly brackets, for example {"jpg", "png"}. Make sure to follow the same format.
allowed.formats.map: {jpg:{'image/jpg','image/jpeg'},jpeg:{'image/jpeg','image/jpg'},png:{'image/png'},pdf:{'application/pdf'},odt:{'application/vnd.oasis.opendocument.text'},ods:{'application/vnd.oasis.opendocument.spreadsheet'},docx:{'application/x-tika-msoffice','application/x-tika-ooxml','application/vnd.oasis.opendocument.text'},doc:{'application/x-tika-msoffice','application/x-tika-ooxml','application/vnd.oasis.opendocument.text'},dxf:{'text/plain'},csv:{'text/plain'},txt:{'text/plain'},xlsx:{'application/x-tika-ooxml','application/x-tika-msoffice'},xls:{'application/x-tika-ooxml','application/x-tika-msoffice'}}
The key in the map is the visible extension of the file type; the values on the right in curly braces are the respective Tika types of the file. These values can be found on the Tika website or by passing the file through Tika functions.
Upload POST API to save the files in the server
Search Files GET API to retrieve files based only on id and tenantid
Search URLs GET API to retrieve pre-signed URLs for a given array of ids
Deploy the latest version of Filestore Service.
Add role-action mapping for APIs.
The filestore service is used to upload and store documents which citizens add while availing of services from ULBs.
Can perform file upload independently without having to add fileupload specific logic in each module.
To integrate, the host of the filestore module should be overwritten in the helm chart.
/filestore/v1/files should be added as the endpoint for uploading files in the system.
/filestore/v1/files/url should be added as the search endpoint. This method handles all requests to search existing files depending on different search criteria.
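Below is a hedged example of uploading a file through the upload endpoint. The host and multipart form-field names (file, tenantId, module) are assumptions and should be confirmed against the filestore Swagger YAML.

```java
import org.springframework.core.io.FileSystemResource;
import org.springframework.http.HttpEntity;
import org.springframework.http.HttpHeaders;
import org.springframework.http.MediaType;
import org.springframework.util.LinkedMultiValueMap;
import org.springframework.util.MultiValueMap;
import org.springframework.web.client.RestTemplate;

public class FilestoreUploadExample {

    // Host and form-field names are assumptions; confirm them against the filestore Swagger YAML.
    private static final String FILESTORE_HOST = "http://egov-filestore:8080";

    public static void main(String[] args) {
        MultiValueMap<String, Object> form = new LinkedMultiValueMap<>();
        form.add("file", new FileSystemResource("/tmp/sample.pdf"));
        form.add("tenantId", "pb.amritsar");
        form.add("module", "PT");

        HttpHeaders headers = new HttpHeaders();
        headers.setContentType(MediaType.MULTIPART_FORM_DATA);

        // The response contains the fileStoreId (UUID) which is later used with
        // /filestore/v1/files/url to fetch pre-signed URLs.
        Object response = new RestTemplate().postForObject(
                FILESTORE_HOST + "/filestore/v1/files", new HttpEntity<>(form, headers), Object.class);
        System.out.println(response);
    }
}
```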
Encryption Service is used to secure sensitive data that is being stored in the database. The encryption service uses envelope encryption.
Before you proceed with the documentation, make sure the following pre-requisites are met -
Java 17.
Kafka server is up and running.
Encryption Service offers the following features:
Encrypt - The service will encrypt the data based on given input parameters and data to be encrypted. The encrypted data will be mandatorily of type string.
Decrypt - The decryption will happen solely based on the input data (any extra parameters are not required). The encrypted data will have the identity of the key used at the time of encryption, and the same key will be used for decryption.
Sign - Encryption Service can hash and sign the data, which can be used as a unique identifier of the data. This can also be used for searching a given value from a datastore.
Verify - Based on the input sign and the claim, it can verify if the given sign is correct for the provided claim.
Rotate Key - Encryption Service supports changing the key used for encryption. The old key will still remain with the service and will be used to decrypt old data. All the new data will be encrypted with the new key.
The following properties in the application.properties file of egov-enc-service are configurable.
The Encryption service is used to encrypt sensitive data that needs to be stored in the database. For each tenant, a different data encryption key (DEK) is used. The DEK is encrypted using a Key Encryption Key (KEK). Currently, there are two implementations available for encrypting the data encryption keys: the first uses the AWS KMS service and the second uses a master password. For any custom implementation, the MasterKeyProvider interface in the service should be extended. Based on the master.password.provider flag in application.properties, you can choose which implementation of the MasterKeyProvider interface to use.
Can perform encryption without having to re-write encryption logic every time in every service.
To integrate, the host of encryption-services module should be overwritten in the helm chart.
/crypto/v1/_encrypt should be added as the endpoint for encrypting input data in the system.
/crypto/v1/_decrypt should be added as the decryption endpoint.
/crypto/v1/_sign should be added as the endpoint for providing a signature for a given value.
/crypto/v1/_verify should be added as the endpoint for verifying whether the signature for the provided value is correct.
/crypto/v1/_rotatekey should be added as the endpoint to deactivate the keys and generate new keys for a given tenant.
a) POST /crypto/v1/_encrypt
Encrypts the given input value/s OR values of the object.
b) POST /crypto/v1/_decrypt
Decrypts the given input value/s OR values of the object.
c) POST /crypto/v1/_sign
Provide signature for a given value.
d) POST /crypto/v1/_verify
Check if the signature is correct for the provided value.
e) POST /crypto/v1/_rotatekey
Deactivate the keys for the given tenant and generate new keys. It will deactivate both symmetric and asymmetric keys for the provided tenant.
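For illustration, here is a hedged sketch of calling the encrypt endpoint. The host and the request field names are assumptions and should be confirmed against the encryption service API contract.

```java
import java.util.List;
import java.util.Map;
import org.springframework.web.client.RestTemplate;

public class EncryptCallExample {

    // Host and request field names are assumptions; refer to the encryption
    // service API contract for the exact schema.
    private static final String ENC_HOST = "http://egov-enc-service:8080";

    public static void main(String[] args) {
        Map<String, Object> body = Map.of("encryptionRequests", List.of(Map.of(
                "tenantId", "pb",                 // the DEK is picked per tenant
                "type", "Normal",                 // illustrative type value
                "value", Map.of("mobileNumber", "9999999999"))));

        Object response = new RestTemplate()
                .postForObject(ENC_HOST + "/crypto/v1/_encrypt", body, Object.class);
        System.out.println(response);             // encrypted values come back as strings
    }
}
```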
The signed audit service provides a one-stop framework for signing data i.e. creating an immutable data entry to track activities of an entity. Whenever an entity is created/updated/deleted the operation is captured in the data logs and is digitally signed to protect it from tampering.
Infra configuration -
Number of concurrent users - 15000
Duration of bombarding service with requests ~ 17 minutes
Number of signed audit pod(s) - 1
Deployment With Spring Cloud
We are updating our core services to remove outdated dependencies and ensure long-term support for the DIGIT platform. The current API gateway uses Netflix Zuul, which has dependencies that will soon be obsolete. To address this, we are building a new gateway using Spring Cloud Gateway.
What is Spring Cloud Gateway and how is it different from Zuul?
Spring Cloud Gateway and Zuul both function as API gateways but differ in architecture and design. Spring Cloud Gateway is ideal for modern, reactive applications, while Zuul is better suited for traditional, blocking I/O environments. The choice between them depends on your specific needs.
Navigating the new Gateway codebase
The new Gateway codebase is well-organized, with each module containing similar files and names. This makes it easy to understand the tasks each part performs. Below is a snapshot of the current directory structure.
Config: It contains the configuration-related files for example Application Properties etc.
Constants: It contains the constants referenced frequently within the codebase. Add any constant string literal here and then access it via this file.
Filters: This folder is the heart of the API gateway since it contains all the PRE, POST, and ERROR filters. For those new to filters: Filters intercept incoming and outgoing requests, allowing developers to apply various functionalities such as authentication, authorisation, rate limiting, logging, and transformation to these requests and responses.
Model: Contains the POJOs required in the gateway.
Producer: Contains the code related to pushing data onto Kafka.
Ratelimiters: Contains the files for initialising the relevant beans for custom rate limiting.
Utils: Contains the helper functions which can be reused across the project.
The above paragraphs provide a basic overview of the gateway's functionality and project structure.
When a request is received, the gateway checks if it matches any predefined routes. If a match is found, the request goes through a series of filters, each performing specific validation or enrichment tasks. The exact order of these filters is discussed later.
The gateway also ensures that restricted routes have proper authentication and authorization. Some APIs can be whitelisted as open or mixed-mode endpoints, allowing them to bypass authentication or authorization.
Upon receiving a request, the gateway first looks for a matching route definition. If a match is found, it starts executing the pre-filters in the specified order.
Pre-Filter
RequestStartTimerFilter: Sets request start time
CorrelationIdFilter: Generate and set a correlationId in each request to help track it in the downstream service
AuthPreCheckFilter: Checks whether authorisation has to be performed
PreHookFilter: Sends a pre-hook request
RbacPreFilter: Checks if Authentication has to be performed or not
AuthFilter: Authenticate the request
RbacFilter: Authorise the request
RequestEnrichmentFilter: Enrich the request with userInfo & correlationId
Error-Filter
This filter handles all errors raised either during request processing or by the downstream service.
There are two ways to configure Rate Limits in Gateway
Default Rate Limiting
Service Level Rate Limiting
Default rate limiting sets a standard limit on the number of requests that can be made to the gateway within a specified time frame. This limit applies to all services unless specific rate limits are configured at the service level.
Add these properties in Values.YAML of Gateway helm file and then configure the values as per the use case. Read Configuring Gateway Rate Limiting for more information about these properties.
Service level rate limiting allows you to set specific rate limits for individual services. This means each service can have its request limits, tailored to its unique needs and usage patterns, providing more granular control over traffic management.
If you want to define rate limiting for each service differently you can do so by defining these properties in the Values.YAML of the respective service. Read Configuring Gateway Rate Limiting for more information about these properties.
Note: We currently provide two options for keyResolver, and if neither of them is specified, Spring Cloud will default to PrincipalNameKeyResolver, which retrieves the Principal from the ServerWebExchange and calls Principal.getName().
ipKeyResolver: Resolves the key based on the IP address of the request.
userKeyResolver: Resolves the key based on the user UUID of the request.
To enable gateway routes, a service must activate the gateway flag in the Helm chart. Based on this flag, a Go script runs in the Init container, automatically generating the necessary properties for all services using the gateway.
NOTE: Restart the Gateway after making changes in service Values.YAML so that it can pick up the changes.
This document provides information on migrating location data from an older JSON-based configuration to a newly developed boundary service. A boundary-migration service has been created to facilitate this process. Users can migrate the data to the new boundary service by calling a single API.
Below is the step-by-step procedure for the same:-
Import the boundary-migration service into an IDE (preferably IntelliJ) and start the service after building it.
Copy the boundary-data JSON that needs to be migrated and add RequestInfo inside it as shown below.
Open Postman and create a new HTTP request with the migration endpoint as the path and the above JSON as the request body.
PS: The boundary-migration service is assumed to run on port 8081. If it is running on a different port, check the active port and update the path accordingly.
Send the request and if the endpoint path and request body are correct the response would be 200 OK.
Below is a reference CURL:
User Service stores the PII data in the database in encrypted form, so whichever service reads that data directly from the database has to decrypt it before responding to the user. As of now, to the best of our knowledge, the following services read user data from the database:
User Service
Report Service
Searcher Service
Enc-Client Library is a supplementary library with features like masking, auditing, etc. The above services call the functions from the library to decrypt the data read from the database.
To avoid unnecessary repetition of code that generates IDs, and to have central control over the logic so that the maintenance burden on developers is reduced, the idea is to create a config-based application which can be used without writing even a single line of code.
Prior knowledge of Java/J2EE.
Prior knowledge of Spring Boot and Flyway.
Prior knowledge of REST APIs and related concepts like path parameters, headers, JSON etc.
The application exposes a REST API that takes in requests and provides the IDs in the response in the requested format. The format can combine current date information, a random number, and a sequence-generated number. An ID can be generated by providing a request with any of the above-mentioned information.
The Id Amritsar-PT-2019/09/12-000454-99 contains
Amritsar - which is the name of the city
PT - a fixed string representing the module code(PROPERTY TAX)
2019/09/12 - date
000454 - sequence generated number
99 - random number
The ID generated in the above-mentioned example needs the following format:
[city]-PT-[cy:yyyy/mm/dd]-[SEQ_SEQUENCE_NAME]-[d{4}]
Everything in the square brackets will be replaced with the appropriate values by the app.
By default, the IDGen service will now read its configuration from MDMS. DB Configuration requires access to the DB, so the new preferred method for the configuration is MDMS. The configuration needs to be stored in common-masters/IdFormat.json in MDMS
It is recommended to have the IdFormat as a state-level master. To disable the configuration to be read from DB instead, the environment variable IDFORMAT_FROM_MDMS should be set to false.
ID-FORMAT-REPLACEABLES:
[FY:] - represents the financial year, the string will be replaced by the value of starting year and the last two numbers of the ending year separated by a hyphen. For instance: 2018-19 in case of the financial year 2018 to 2019.
[cy:] - any string that starts with cy will be considered as the date format. The value after cy: is the format using which the output will be generated.
[d{5}] - d represents the random number generator; the length of the random number can be specified in curly brackets next to d. If the value is not provided, then a random length of 2 will be used.
[city] - The string city will be replaced by the city code provided by the respective ULB in location services.
[SEQ_*] - Strings starting with SEQ will be considered as sequence names, which will be queried to get the next sequence number. If your sequence does not start with the namespace containing "SEQ", the application will not consider it a sequence. If the sequence is absent from the DB, an error will be thrown.
[tenantid] - replaces the placeholder with the tenantid passed in the request object.
[tenant_id] - replaces the placeholder with the tenantid passed in the request object. Replaces all `.` with `_`
[TENANT_ID] - replaces the placeholder with the tenantid passed in the request object. Replaces all `.` with `_`, and changes the case to upper case.
When you use both idName and format in a request, IDGen first checks whether a format exists for the given idName; if not, the format provided in the request is used.
If you want a state-level sequence then you need to use a fixed sequence name
But if you want a ULB level sequence, the sequence name should be dynamic based on the tenantid as given in the below example.
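For illustration (the sequence names here are hypothetical), a state-level format might use a fixed sequence such as [SEQ_EG_PT_NUM], while a ULB-level format can embed the tenant, for example [SEQ_EG_PT_NUM_[TENANT_ID]], so that a separate sequence is created and incremented per ULB.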
The sequences referenced by the SEQ_* replaceables used in ID generation are by default expected to already exist in the DB. This behaviour can be changed and controlled using two environment variables while deploying the service:
AUTOCREATE_NEW_SEQ: Default to false. When set to true, this will auto-create sequences when the format has been derived using provided idName. Since the idName format comes from DB or MDMS, it is a trusted value and this value should be set to true. This will make sure that no DB configuration needs to be done as long as MDMS has been configured. It is recommended that each service using idgen should generate an ID using idName instead of just using passing the format directly. This will make sure that no DB configuration needs to be done for creating sequences.
AUTOCREATE_REQUEST_SEQ: Default to false. When set to true, this will auto-create sequences when the format has been derived using the format parameter from the request. This is recommended to keep as false, as anyone with access to idgen can create any number of sequences in DB and overload the DB. Though during the initial setup of an environment, this variable can be kept as true to create all the sequences when the initial flows are run from the UI to generate the sequences. And afterwards, the flags should be disabled.
Add MDMS configs required for ID Gen service and restart the MDMS service.
Deploy the latest version of the ID generation service.
Add role-action mapping for APIs.
The ID Gen service is used to generate unique ID numbers for all miscellaneous/ad-hoc services which citizens avail from ULBs.
Can perform service-specific business logic without impacting the other module.
Provides the capability of generating the unique identifier of the entity calling ID Gen service.
To integrate, the host of idgen-services module should be overwritten in the helm chart.
/egov-idgen/id/_generate should be added as the endpoint for generating ID numbers in the system.
Here is a link to a sample master data.
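For illustration, here is a hedged sketch of calling the ID generation endpoint. The host, the idName value and the request field names are assumptions; the format string reuses the example format shown earlier on this page.

```java
import java.util.List;
import java.util.Map;
import org.springframework.web.client.RestTemplate;

public class IdGenClientExample {

    // Host and request field names (idRequests, idName, format) are assumptions;
    // confirm them against the ID generation service API contract.
    private static final String IDGEN_HOST = "http://egov-idgen:8080";

    public static void main(String[] args) {
        Map<String, Object> body = Map.of(
                "RequestInfo", Map.of("apiId", "Rainmaker"),
                "idRequests", List.of(Map.of(
                        "tenantId", "pb.amritsar",
                        "idName", "pt.assessmentnumber",   // hypothetical idName resolved from MDMS
                        "format", "[city]-PT-[cy:yyyy/mm/dd]-[SEQ_SEQUENCE_NAME]-[d{4}]"))); // fallback format

        Object response = new RestTemplate()
                .postForObject(IDGEN_HOST + "/egov-idgen/id/_generate", body, Object.class);
        System.out.println(response);   // returns the generated ID(s)
    }
}
```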
To know more about regular expressions, refer to the articles below. To test regular expressions, refer to the link below.
Step 2: Check in the config file to a remote location, preferably GitHub. Currently, we check the files into this folder for dev.
Click here to access the details.
Play around with the APIs:
Go to the Swagger editor and click on File -> Import URL.
Then add the raw URL of the API doc in the pop-up.
In case the URL is unavailable, please go to the egov-services git repo and find the YAML for egov-filestore.
minio.url = .backbone:9000(Minio server end point)
minio.url =
Property | Default Value | Remarks |
---|---|---|
Deploy the latest version of the Encryption Service.
Note: This video will give you an idea of how to deploy any DIGIT service. Further, you can find the latest builds for each service in our latest release document here.
Add role-action mapping for the APIs.
When any of these services reads the data from the database, it will be in encrypted form. Before responding to a request, they call the enc-client library to convert the data to plain/masked form. The data returned as part of the response should only be in plain/masked form; it should not contain any encrypted values. Detailed guidelines on how to write the necessary configurations are provided in the document.
An eGov core application which provides locale-specific components and translating text for the eGov group of applications.
Prior knowledge of Java/J2EE.
Prior knowledge of Spring Boot.
Prior knowledge of REST APIs and related concepts like path parameters, headers, JSON etc.
Prior knowledge of Redis and Postgres.
The localization application stores the locale data in the form of key and value, along with the module, tenantId and locale. The module defines which eGov application owns the locale data, the tenantId identifies the tenant, and the locale is the specific locale for which the data is being added.
The request can be posted through the post API with the above-mentioned variables in the request body.
Once posted the same data can be searched based on the module, locale and tenantId as keys.
The data posted to the localization service is permanently stored in the database and loaded into the Redis cache for easy access and every time new data is added to the application the Redis cache will be refreshed.
Deploy the latest version of the Localization Service.
Add role-action mapping for APIs.
The localization service is used to store key-value pairs of metadata in different languages for all miscellaneous/ad-hoc services which citizens avail from ULBs.
Can perform service-specific business logic without impacting the other module.
Provides the capability of having multiple languages in the module.
To integrate, the host of the localization-services module should be overwritten in the helm chart.
/localization/messages/v1/_upsert should be added as the create endpoint for creating localization key-value pairs in the system.
/localization/messages/v1/_search should be added as the search endpoint. This method handles all requests to search existing records depending on different search criteria.
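For illustration, here is a hedged sketch of upserting a localisation message; the host and field names are assumptions to be confirmed against the localisation API contract, and the key reuses the header example from this documentation.

```java
import java.util.List;
import java.util.Map;
import org.springframework.web.client.RestTemplate;

public class LocalizationUpsertExample {

    // Host and field names are assumptions; confirm against the localisation API contract.
    private static final String LOCALIZATION_HOST = "http://egov-localization:8080";

    public static void main(String[] args) {
        Map<String, Object> body = Map.of(
                "RequestInfo", Map.of("apiId", "Rainmaker"),
                "tenantId", "pb",
                "messages", List.of(Map.of(
                        "code", "TL_NEW_TRADE_DETAILS_TRADE_UNIT_HEADER",
                        "message", "Trade Unit",
                        "module", "rainmaker-tl",   // hypothetical module name
                        "locale", "en_IN")));

        new RestTemplate().postForObject(
                LOCALIZATION_HOST + "/localization/messages/v1/_upsert", body, Object.class);
    }
}
```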
https://www.getpostman.com/collections/a140e7426ab4419ed5b5
The Internal Gateway is a simplified Zuul service which provides easy integration of services running in different namespaces of a multi-state instance; the clients need not know all the details of the microservices and their namespaces in the K8s setup.
Before you proceed with the documentation, make sure the following pre-requisites are met -
Java 17
API Gateway
Provides an easier API interface between services running in different tenants(namespaces) where direct access between microservices is blocked by default.
Allows refactoring microservices independently without forcing the clients to refactor integrating logic with other tenants.
Route filter - a single route filter enables routing based on the tenantId from the HTTP header of the incoming requests.
For each service, the below-mentioned property has to be added in internal-gateway.json
Follow the steps below to adopt the new MDMS -
Define the schema for the master that you want to promote to MDMS v2.
Ensure that the schema has a unique field (a unique field can also be composite) to enforce data integrity.
In case the data does not have unique identifiers e.g. complex masters like the one mentioned here, consider adding a redundant field which can serve as the unique identifier.
Use the following API endpoint to create a schema - /mdms-v2/schema/v1/_create
Search and verify the created schema using the following API endpoint - /mdms-v2/schema/v1/_search
Once the schema is in place, add the data using the following API endpoint - /mdms-v2/v2/_create/{schemaCode}
Verify the data by using the following API endpoint - /mdms-v2/v2/_search
| asd@#$@$!132123 | Master password for encryption/ decryption. It can be any string. |
| qweasdzx | A salt is random data that is used as an additional input to a one-way function that hashes data, a password or passphrase. It needs to be an alphanumeric string of length 8. |
| qweasdzxqwea | An initialization vector is a fixed-size input to a cryptographic primitive. It needs to be an alphanumeric string of length 12. |
| 256 | Default size of Symmetric key. |
| 1024 | Default size of Asymmetric key. |
| 12 | Default size of Initial vector. |
| software | Name of the implementation to be used for encrypting DEKs |
NA | AWS access key to access the KMS service (Note: this field is required only if |
NA | AWS secret to access the KMS service (Note: this field is required only if |
NA | AWS region to access the KMS service (Note: this field is required only if |
NA | Id of the KMS key to be used for encrypting the DEK (Note: this field is required only if |
API Swagger Documentation
The Indexer uses a config file per module to store all the configurations pertaining to that module. The Indexer reads multiple such files at start-up to support indexing for all the configured modules. In the config, we define the source and destination Elasticsearch index names, custom mappings for data transformation, and mappings for data enrichment.
Below is the sample configuration for indexing TL application creation data into elastic search.
The table below lists the key configuration variables.
DIGIT supports multiple languages. To enable this feature begin with setting up the base product localisation. The multilingual UI support makes it easier for users to understand the DIGIT operations.
Before you proceed with the configuration, make sure the following pre-requisites are met -
Before starting the localisation setup, one should know React and the eGov framework.
Before setting up localisation, make sure that all the keys are pushed to the Create API and prepare the values that need to be added to the localisation keys for the particular languages being added to the product.
Make sure you know where to add the localisation in the code.
After localisation, users can view DIGIT screens in their preferred language. Completing the application is simple as the DIGIT UI allows easy language selection.
Once the key is added to the code as per requirement, the deployment can be done in the same way as the code is deployed.
Select a label that needs to be localised from the Product code. Here is the example code for a header before setting up Localisation.
As seen above, the code supports only the English language. To set up localisation for that header, we need to change the code in the following manner.
When comparing the code before and after the Localisation setup, we can see that the following code has been added.
{
labelName: "Trade Unit ",
labelKey: "TL_NEW_TRADE_DETAILS_TRADE_UNIT_HEADER"
},
The values can be added against the key in two ways: either via the newly developed localisation screen or by calling the localisation create API (for example, through Postman).
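As an illustration (not the exact product payload), a localisation key-value pair can be upserted through the localisation service's create API with a request body along the following lines; the module name and locale shown are assumptions:

```
{
  "RequestInfo": {},
  "tenantId": "pb",
  "messages": [
    {
      "code": "TL_NEW_TRADE_DETAILS_TRADE_UNIT_HEADER",
      "message": "Trade Unit",
      "module": "rainmaker-tl",
      "locale": "hi_IN"
    }
  ]
}
```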
The notification service notifies users through SMS and email about their actions on DIGIT, acknowledging that the action has been completed successfully.
Examples: actions like property create, TL create, etc.
To send an SMS, two services are involved: the service on which the user is taking the action, and the SMS service.
To send an email, two services are involved: the service on which the user is taking the action, and the email service.
Before you proceed with the configuration, make sure the following pre-requisites are met -
Prior Knowledge of Spring boot.
Prior Knowledge of Kafka.
Prior Knowledge of localization service.
For a specific user action, the user gets an SMS and email as an acknowledgement.
Users can get SMS and email based on the localization language.
To trigger a notification for a specific action, the service has to listen to a particular topic. Each time a record arrives on that topic, the consumer knows the action has been taken and can trigger a notification for it.
Example: to trigger a notification for property creation, the property service's NotificationConsumer class should listen to the topic egov.pt.assessment.create.topic. Each time a record arrives on this topic, the NotificationConsumer knows that a property create action has been performed and can trigger a notification for it.
When a record arrives on the topic, the service first fetches all the required data, such as user name, property ID, mobile number and tenant ID, from the record.
The service then fetches the message content from the localisation service and replaces the placeholders with the actual data.
Finally, it puts the record on the SMS topic to which the SMS service listens.
The email service also listens to the same topic as the SMS service.
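For illustration, the record pushed onto the SMS topic is typically a small JSON object of the following shape (field names are indicative and may differ by version):

```
{
  "mobileNumber": "9999999999",
  "message": "Dear <user>, your property <propertyId> has been created successfully.",
  "category": "NOTIFICATION"
}
```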
Configure service to enable fetch and share of location details
A core application that provides location details of the tenant for which the services are being provided.
Before you proceed with the documentation, make sure the following pre-requisites are met -
Java 17
PostgreSQL server is running and the DB is created
Follow this guide to see how to set up and create a DB in PostgreSQL.
Working Knowledge of egov-mdms service to add location data in master data.
egov-mdms service is running and all the required MDMS masters are loaded in it
The location information is also known as the boundary data of the ULB.
Boundary data can be of different hierarchies - the ADMIN or ELECTION hierarchy defined by the administrators, and the REVENUE hierarchy defined by the revenue department.
The election hierarchy divides locations into several types such as zone, election ward, block, street and locality. The revenue hierarchy divides locations into zone, ward, block and locality.
The model which defines localities such as zone, ward, etc. is a boundary object, which contains information like name, latitude, longitude, and parent or children boundaries if any. The boundaries are nested in a hierarchy: for instance, a zone contains wards, a ward contains blocks, and a block contains localities. The order in which the boundaries are nested differs across tenants.
Add/Update the MDMS master file which contains boundary data of ULBs.
Add Role-Action mapping for the egov-location APIs.
Deploy/Redeploy the latest version of the egov-mdms service.
Fill the above environment variables in the egov-location with proper values.
Deploy the latest version of the egov-location service.
The boundary data has been moved to MDMS from the master tables in DB. The location service fetches the JSON from MDMS and parses it to the structure of the boundary object as mentioned above. A sample master would look like below.
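Since the embedded sample is not reproduced here, the following trimmed sketch shows the general shape of such a TenantBoundary master, using the attributes described in this section (all values are placeholders):

```
{
  "tenantId": "pb.amritsar",
  "moduleName": "egov-location",
  "TenantBoundary": [
    {
      "hierarchyType": { "code": "ADMIN", "name": "ADMIN" },
      "boundary": {
        "id": "1",
        "boundaryNum": 1,
        "name": "Amritsar",
        "localname": "Amritsar",
        "longitude": "74.87",
        "latitude": "31.63",
        "label": "City",
        "code": "pb.amritsar",
        "children": [
          {
            "id": "2",
            "boundaryNum": 1,
            "name": "Zone 1",
            "label": "Zone",
            "code": "Z1",
            "children": []
          }
        ]
      }
    }
  ]
}
```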
The egov-location APIs can be used by any module which needs to store the location details of the tenant.
Get the boundary details based on boundary type and hierarchy type within the tenant boundary structure.
Get the geographical boundaries by providing appropriate GeoJson.
Get the tenant list in the given latitude and longitude.
To integrate, the host of egov-location should be overwritten in the helm chart.
/boundarys/_search should be added as the search endpoint for searching boundary details based on tenant Id, Boundary Type, Hierarchy Type etc.
/geography/_search should be added as the search endpoint. This method handles all requests related to geographical boundaries by providing appropriate GeoJson and other associated data based on tenantId or lat/long etc.
/tenant/_search should be added as the search endpoint. This method tries to resolve a given lat, long to a corresponding tenant, provided there exists a mapping between the reverse geocoded city to the tenant.
The MDMS tenant boundary master file should be loaded in the MDMS service.
Please refer to the Swagger API contract for the location service to understand the structure of APIs and to have a visualisation of all internal APIs.
MDMS v2 provides APIs for defining schemas, searching schemas, and adding master data against these defined schemas. All data is now stored in PostgreSQL tables instead of GitHub. MDMS v2 currently also includes v1 search API for fetching data from the database in the same format as MDMS v1 search API to ensure backward compatibility.
Prior knowledge of Java/J2EE.
Prior knowledge of Spring Boot.
Prior knowledge of REST APIs and related concepts like path parameters, headers, JSON etc.
Prior knowledge of Git.
Advanced knowledge of operating JSON data would be an added advantage to understanding the service.
Create schema: MDMS v2 introduces functionality to define your schema with all validations and properties supported by JSON schema draft 07. Below is a sample schema definition for your reference -
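Since the original embedded sample is not reproduced here, the following hedged sketch of a schema definition illustrates the keywords discussed below (the master name, fields and x-ref-schema key names are hypothetical):

```
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "title": "TradeSubType",
  "description": "Schema for the Trade Sub Type master",
  "type": "object",
  "properties": {
    "code": { "type": "string" },
    "name": { "type": "string" },
    "tradeTypeCode": { "type": "string" },
    "active": { "type": "boolean" }
  },
  "required": ["code", "name", "tradeTypeCode"],
  "x-unique": ["code"],
  "x-ref-schema": [
    {
      "fieldPath": "tradeTypeCode",
      "schemaCode": "TradeLicense.TradeType"
    }
  ]
}
```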
To create a basic schema definition, use the following keywords:
$schema: specifies which draft of the JSON Schema standard the schema adheres to.
title and description: state the intent of the schema. These keywords don’t add any constraints to the data being validated.
type: defines the first constraint on the JSON data.
Additionally, we have two keys which are not part of standard JSON schema attributes -
x-unique: specifies the fields in the schema that are used to construct a unique identifier for each master data record.
x-ref-schema: specifies referenced data. This is useful in scenarios where the parent-child relationship needs to be established in master data. For example, Trade Type can be a parent master data to Trade Sub Type. In the example above, the field path represents the JsonPath of the attribute in the master data which contains the unique identifier of the parent which is being referenced. Schema code represents the schema under which the referenced master data needs to be validated for existence.
Search schema: MDMS v2 provides an API to search schemas based on tenantId, schema code, and unique identifier.
Create data: MDMS v2 enables data creation according to the defined schema. Below is an example of data based on the mentioned schema:
Search data: MDMS v2 exposes two search APIs - v1 and v2 search where v1 search API is completely backward compatible.
Update data: MDMS v2 allows the updation of master data fields.
Fallback functionality: Both the search APIs have fallback implemented where if data is not found for a particular tenant, the services fall back to the parent tenant(s) and return the response. If data is not found even for the parent tenant, an empty response is sent to the user.
/mdms-v2/schema/v1/_create - Takes RequestInfo and SchemaDefinition in the request body. SchemaDefinition has all attributes which define the schema.
/mdms-v2/schema/v1/_search - Takes RequestInfo and SchemaDefSearchCriteria in the request body and returns schemas based on the provided search criteria.
/mdms-v2/v2/_create/{schemaCode} - Takes RequestInfo and Mdms in the request body where the MDMS object has all the information for the master being created and it takes schemaCode as path param to identify the schema against which data is being created.
/mdms-v2/v2/_search - Takes RequestInfo and MdmsCriteria in the request body to return master data based on the provided search criteria. It also has a fallback functionality where if data is not found for the tenant which is sent, the services fall back to the parent tenant(s) to look for the data and return it.
/mdms-v2/v2/_update/{schemaCode} - Takes RequestInfo and Mdms in the request body where the MDMS object has all the information for the master being updated and it takes schemaCode as path param to identify the schema against which data is being updated.
/mdms-v2/v1/_search - This is a backwards-compatible API which takes RequestInfo and MdmsCriteria in the request body to return master data based on the provided search criteria and returns the response in MDMS v1 format. It also has fallback functionality where if data is not found for the tenant which is sent, the services fall back to the parent tenant(s) to look for the data and return it.
MDMS stands for Master Data Management Service. MDMS is one of the applications in the eGov DIGIT core group of services. This service aims to reduce the time spent by developers on writing codes to store and fetch master data (primary data needed for module functionality ) which doesn’t have any business logic associated with them.
Before you proceed with the configuration, make sure the following pre-requisites are met -
Prior knowledge of Java/J2EE.
Prior knowledge of Spring Boot.
Prior knowledge of REST APIs and related concepts like path parameters, headers, JSON, etc.
Prior knowledge of Git.
Advanced knowledge of how to operate JSON data would be an added advantage to understanding the service.
The MDMS service reads the data from a set of JSON files from a pre-specified location.
It can either be an online location (readable JSON files from online) or offline (JSON files stored in local memory).
The JSON files are in a prescribed format and the data is stored in a map, where the tenantId of the file serves as the key and a map of master data details serves as the value.
Once the data is stored in the map the same can be retrieved by making an API request to the MDMS service. Filters can be applied in the request to retrieve data based on the existing fields of JSON.
For deploying the changes in MDMS data, the service needs to be restarted.
The changes in MDMS data could be adding new data, updating existing data, or deleting it.
The config JSON files to be written should follow the listed rules
The config files should have JSON extension
The file should mention the tenantId, module name, and master name first before defining the data
Example config JSON for “Billing Service”
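The embedded example is not reproduced here; the following trimmed sketch shows the general shape of such a file (the master name and its fields are illustrative, not the complete product data):

```
{
  "tenantId": "pb",
  "moduleName": "BillingService",
  "BusinessService": [
    {
      "businessService": "PT",
      "code": "PT",
      "collectionModesNotAllowed": ["DD"],
      "partPaymentAllowed": true,
      "isAdvanceAllowed": false
    }
  ]
}
```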
Master Data Management Service is a core service made available on the DIGIT platform. It encapsulates the functionality surrounding Master Data Management. The service creates, updates and fetches Master Data pertaining to different modules. This eliminates the overhead of embedding the storage and retrieval logic of Master Data into other modules. The functionality is exposed via REST API.
Prior Knowledge of Java/J2EE, Spring Boot, and advanced knowledge of operating JSON data would be an added advantage to understanding the service.
The MDM service reads the data from a set of JSON files from a pre-specified location. It can either be an online location (readable JSON files from online) or offline (JSON files stored in local memory). The JSON files should conform to a prescribed format. The data is stored in a map and tenantID of the file serves as the key.
Once the data is stored in the map the same can be retrieved by making an API request to the MDM service. Filters can be applied in the request to retrieve data based on the existing fields of JSON.
The Spring Boot application needs the Lombok extension added to your IDE to load it. Once the application is up and running, API requests can be posted to the URL.
The config JSON files to be written should follow the listed rules
The config files should have JSON extension.
The file should mention the tenantId, module name, and master name first before defining the data.
The Master Name in the above sample will be substituted by the actual name of the master data. The array succeeding it will contain the actual data.
Example config JSON for “Billing Service”
APIs are available to create, update and fetch master data pertaining to different modules. Refer to the segment below for quick details.
BasePath:/mdms/v1/[API endpoint] Method
POST /_create
Creates or Updates Master Data on GitHub as JSON files
MDMSCreateRequest
Request Info + MasterDetail — Details of the master data to be created or updated on GitHub.
MasterDetail
MdmsCreateResponse
Response Info
Method
POST /_search
This method fetches a list of masters for a specified module and tenantId.
MDMSCriteriaReq (mdms request) -
Request Info + MdmsCriteria — Details of module and master which need to be searched using MDMS.
MdmsCriteria
MdmsResponse
Response Info + Mdms
MDMS
Common Request/Response/Error Structures:
RequestInfo should be used to carry meta information about the requests to the server as described in the fields below. All DIGIT APIs will use requestinfo as a part of the request body to carry this meta information. Some of this information will be returned back from the server as part of the ResponseInfo in the response body to ensure correlation.
ResponseInfo should be used to carry metadata information about the response from the server. apiId, ver, and msgId in ResponseInfo should always correspond to the same values in the respective request's RequestInfo.
ErrorRes
All DIGIT APIs will return ErrorRes in case of failure which will carry ResponseInfo as metadata and Error object as an actual representation of the error. When the request processing status in the ResponseInfo is ‘FAILED’ the HTTP status code 400 is returned.
Tenant represents a body in a system. In the municipal system, a state and its ULBs (Urban local bodies) are tenants. ULB represents a city or a town in a state. Tenant configuration is done in MDMS.
Before proceeding with the configuration, make sure the following pre-requisites are met -
Knowledge of JSON and how to write JSON files is required.
Knowledge of MDMS is required.
User with permission to edit the git repository where MDMS data is configured.
On the login page, a city name has to be selected. Tenants added in MDMS appear in the city drop-down of the login page.
In reports and on the employee inbox page, ULB-related details are displayed from the ULB data added in MDMS.
Modules i.e., TL, PT, MCS can be enabled based on the requirement of the tenant.
After adding the new tenant, the MDMS service needs to be restarted to read the newly added data.
Tenants are added in tenant.json. In MDMS, the tenant.json file under the tenant folder holds the details of the state and the ULBs to be added in that state.
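A trimmed, illustrative entry (not the complete product configuration; field values are placeholders) looks roughly like this:

```
{
  "tenantId": "uk",
  "moduleName": "tenant",
  "tenants": [
    {
      "code": "uk.citya",
      "name": "City A",
      "description": "City A, Uttarakhand",
      "logoId": "https://s3.ap-south-1.amazonaws.com/uk-egov-assets/uk.citya/logo.png",
      "type": "CITY",
      "city": {
        "name": "City A",
        "ulbGrade": "NP",
        "code": "citya",
        "districtCode": "05",
        "districtName": "District A"
      }
    }
  ]
}
```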
To enable tenants, the above data should be pushed to the tenant.json file. Here "ULB Grade" and City "Code" are important fields. ULB Grade can have a set of allowed values that determines the ULB type, such as Municipal Corporation (Nagar Nigam) or Municipality (municipal council, municipal board, municipal committee) (Nagar Parishad). The City "Code" has to be unique for each tenant. This city-specific code is used in all transactions and must not be changed; if it is changed, the data of previous transactions is lost.
Naming Convention for Tenants Code
“Code”: “uk.citya” follows the pattern StateTenantId.ULBTenantName.
"logoId": "https://s3.ap-south-1.amazonaws.com/uk-egov-assets/uk.citya/logo.png", Here the last section of the path should be "/<tenantId>/logo.png". If we use anything else, the logo will not be displayed on the UI. <tenantId> is the tenant code ie “uk.citya”.
Localization should be pushed for ULB grade and ULB name. The format is given below.
Localization for ULB Grade
Localization for ULB Name
Format of the localisation code for the tenant name: <MDMS_State_Tenant_Folder_Name>_<Tenants_File_Name>_<Tenant_Code> (replace dots with underscores).
Boundary data should be added for the new tenant.
Configuring master data for a new module requires creating a new module in the master config file and adding the master data. For better organisation, create all the master data files belonging to the module in the same folder. Keeping them in the same folder is not mandatory; the grouping is driven by the moduleName in the master data file.
Before you proceed with the configuration, make sure the following pre-requisites are met -
User with permission to edit the git repository where MDMS data is configured.
These data can be used to validate the incoming data.
After adding the new module data, the MDMS service needs to be restarted to read the newly added data.
The master config file is structured as below. Each key in the master config is a module and each key in the module is a master.
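A hedged sketch of that structure, with a hypothetical module and masters (the per-master keys shown, such as masterDataJsonPath and isStateLevel, are indicative and vary by setup):

```
{
  "BillingService": {
    "BusinessService": {
      "masterDataJsonPath": "$.BusinessService.*",
      "isStateLevel": true
    },
    "TaxHeadMaster": {
      "masterDataJsonPath": "$.TaxHeadMaster.*"
    }
  }
}
```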
The new module can be added below the existing modules in the master config file.
OTP Service is a core service that is available on the DIGIT platform. The service is used to authenticate the user in the platform. The functionality is exposed via REST API.
Before you proceed with the documentation, make sure the following pre-requisites are met -
Java 17
egov-otp is called internally by the user-otp service, which fetches the mobile number and feeds it to egov-otp to generate the n-digit OTP.
The below properties define the OTP configurations -
a) egov.otp.length: Number of digits in the OTP.
b) egov.otp.ttl: Controls the validity time frame of the OTP. The default value is 900 seconds. Another OTP generated within this time frame is also allowed.
c) egov.otp.encrypt: Controls if the OTP is encrypted and stored in the table.
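Putting these together, a minimal application.properties sketch might look like the following (the length and encrypt values shown are assumptions, not prescribed defaults):

```
egov.otp.length=6
egov.otp.ttl=900
egov.otp.encrypt=true
```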
Deploy the latest version of egov-otp service.
Add role-action mapping for APIs.
The egov-otp service is used to authenticate the user in the platform.
Can perform user authentication without impacting the other module.
In the future, this application can be used in a standalone manner in any other platforms that require a user authentication system.
To integrate, the host of egov-otp module should be overwritten in the helm chart.
/otp/v1/_create should be added as the create endpoint. The create OTP configuration API is an internal call from the v1/_send endpoint; this endpoint is present in the user-otp service and removes the need for explicit calls.
/otp/v1/_validate should be added as the validate endpoint. It validates the OTP with respect to the mobile number.
/otp/v1/_search should be added as the search endpoint. This API searches the mobile number and OTP using the uuid; the uuid maps to the OTP reference number.
BasePath
/egov-otp/v1
Egov-otp service APIs - contains create, validate and search endpoints
a) POST /otp/v1/_create - Creates the OTP configuration. This API is an internal call from the v1/_send endpoint present in the user-otp service, which removes the need for explicit calls.
b) POST /otp/v1/_validate - Validates the OTP with respect to the mobile number.
c) POST /otp/v1/_search - Searches the mobile number and OTP using the uuid; the uuid maps to the OTP reference number.
MDMS supports the configuration of data at different levels. While we enable a state there can be data that is common to all the ULBs of the state and data specific to each ULB. The data further can be configured at each module level as state-specific or ULB’s specific.
Before you proceed with the configuration, make sure the following pre-requisites are met -
Prior Knowledge of Java/J2EE.
Prior Knowledge of Spring Boot.
Prior Knowledge of REST APIs and related concepts like path parameters, headers, JSON, etc.
Prior knowledge of Git.
Advanced knowledge of operating JSON data would be an added advantage to understanding the service.
State Level Masters are maintained in a common folder.
ULB Level Masters are maintained in separate folders named after the ULB.
Module-specific state-level masters are maintained in a folder named after the specific module, placed outside the common folder.
For deploying the changes(adding new data, updating existing data or deletion) in MDMS, the MDMS service needs to be restarted.
The common master data across all ULBs and modules like department, designation, etc are placed under the common-masters folder which is under the tenant folder of the MDMS repository.
The common master data across all ULBs and are module-specific are placed in a folder named after each module. These folders are placed directly under the tenant folder.
Module data that are specific to each ULB like boundary data, interest, penalty, etc are configured at the ULB level. There will be a folder per ULB under the tenant folder and all the ULB’s module-specific data are placed under this folder.
The persister service provides a framework to persist data in a transactional fashion with low latency, based on a config file. It removes repetitive and time-consuming persistence code from other services.
Before you proceed with the documentation, make sure the following pre-requisites are met -
Prior Knowledge of Java/J2EE.
Prior Knowledge of SpringBoot.
Prior Knowledge of PostgreSQL.
Prior Knowledge of JSON querying in Postgres (similar to standard PostgreSQL, with a few aggregate functions).
Kafka server is up and running.
Persist data asynchronously using kafka providing very low latency
Data is persisted in batch
All operations are transactional
Values in prepared statement placeholder are fetched using JsonPath
Easy reference to parent object using ‘{x}’ in jsonPath which substitutes the value of the variable x in the JsonPath with value of x for the child object.(explained in detail below in doc)
Supported data types ARRAY("ARRAY"), STRING("STRING"), INT("INT"),DOUBLE("DOUBLE"), FLOAT("FLOAT"), DATE("DATE"), LONG("LONG"),BOOLEAN("BOOLEAN"),JSONB("JSONB")
Persister uses configuration file to persist data. The key variables are described below:
serviceName: Name of the service to which this configuration belongs.
description: Description of the service.
version: the version of the configuration.
fromTopic: The kafka topic from which data is fetched
queryMaps: Contains the list of queries to be executed for the given data.
query: The query to be executed, in the form of a prepared statement.
basePath: The base of the JSON object from which data is extracted.
jsonMaps: Contains the list of jsonPaths for the values in placeholders.
jsonPath: The jsonPath to fetch the variable value.
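A minimal sketch of such a configuration is shown below, assuming a hypothetical eg_person table and save-person-request topic; the exact wrapping keys (serviceMaps/mappings) may vary slightly across versions:

```yaml
serviceMaps:
  serviceName: person-service
  mappings:
    - version: 1.0
      description: Persists person records
      fromTopic: save-person-request
      isTransaction: true
      queryMaps:
        - query: INSERT INTO eg_person(id, name, tenantid, createdtime) VALUES ($1, $2, $3, $4);
          basePath: Persons.*
          jsonMaps:
            - jsonPath: $.Persons.*.id
            - jsonPath: $.Persons.*.name
            - jsonPath: $.Persons.*.tenantId
            - jsonPath: $.Persons.*.auditDetails.createdTime
```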
To persist a large quantity of data, the bulk setting in the persister can be used. It is mainly used when we migrate data from one system to another. The bulk persister has the following two settings:
Any kafka topic containing data which has to be bulk persisted should have '-batch' appended at the end of topic name example: save-pt-assessment-batch.
Every incoming request [via kafka] is expected to have a version attribute set, [jsonpath, $.RequestInfo.ver] if versioning is to be applied.
If the request version is absent or invalid [not semver] in the incoming request, then a default version, defined by the property default.version=1.0.0 in application.properties, is used.
The request version is then matched against the loaded persister configs and applied appropriately.
Write the configuration as per the requirement. Refer to the example given earlier.
In the environment file, mention the file path of the configuration under the variable egov.persist.yml.repo.path. While mentioning the file path, add file:///work-dir/ as the prefix, for example: egov.persist.yml.repo.path = file:///work-dir/configs/egov-persister/abc-persister.yml. If there are multiple files, separate them with a comma (,).
Deploy latest version of egov-persister service and push data on kafka topic specified in config to persist it in DB.
The persister configuration can be used by any module to store records in a particular table of the database.
Insert/update incoming Kafka messages into the database.
Add or modify the Kafka message before putting it into the database.
Persist data asynchronously.
Data is persisted in batch.
Write configuration as per your requirement. Structure of the config file is explained above in the same document.
Check-in the config file to a remote location preferably github.
Provide the absolute path of the checked-in file to DevOps, to add it to the file-read path of egov-persister. The file will be added to egov-persister's environment manifest file for it to be read at start-up of the application.
Run the egov-persister app and push data on kafka topic specified in config to persist it in DB
Learn how to configure Localization service.
Configure master data management service
The MDMS service aims to reduce the time spent by developers on writing codes to store and fetch master data (primary data needed for module functionality) which doesn’t have any business logic associated with them. Instead of writing APIs and creating tables in different services to store and retrieve data that is seldom changed, the MDMS service keeps them in a single location for all modules and provides data on demand with the help of no more than three lines of configuration.
Prior knowledge of Java/J2EE
Prior knowledge of Spring Boot
Prior knowledge of REST APIs and related concepts like path parameters, headers, JSON etc.
Prior knowledge of Git
Advanced knowledge of how to operate JSON data would be an added advantage to understanding the service
Adds master data for usage without the need to create master data APIs in every module.
Reads data from GIT directly with no dependency on any database services.
Deploy the latest version of the MDMS-service
Note: This video will give you an idea of how to deploy any DIGIT service. Further, you can find the latest builds for each service in our latest release document here.
Add conf path for the file location
Add master config JSON path
Note : See the Reference Docs for the values of conf path and master config.
The MDMS service provides ease of access to master data for any service.
No time spent writing repetitive codes with no business logic.
To integrate, the host of egov-mdms-service should be overwritten in the helm chart
egov-mdms-service/v1/_search should be added as the search endpoint for searching master data.
The MDMS client from eGov snapshots should be added as a Maven dependency in pom.xml for ease of access, since it provides the MDMS request POJOs.
Learn how to setup DIGIT master data.
Steps to migrate MDMS data to enable use of workbench UI v1.0
Follow the steps below to migrate the MDMS data to enable the use of Workbench UI v1.0.
Follow the steps below to generate the schema for Workbench UI v1.0:
Clone the migration utility: Start by cloning the migration utility repository.
Clone the MDMS Repository: Start by cloning the MDMS repository on your local machine.
Configure application.properties: Open the application.properties file in the workbench utility and configure it as follows:
Add the hostname of the environment.
Add the MDMS cloned folder path in egov.mdms.conf.path.
Add master-config.json in masters.config.url.
Specify the output folder path for the created schema in master.schema.files.dir.
Port-forward MDMSv2 Service: Port-forward the MDMSv2 service to port 8094.
Run the Curl Command:
This command generates the schema and saves it in the path specified by master.schema.files.dir.
After generating the schema, you may need to update it with additional attributes:
Add the x-unique attribute: this defines unique fields in the schema.
Add the x-ref-schema attribute: use this attribute if a field within MDMS data needs to refer to another schema.
Set a default value for a field: use the default keyword to set default values.
To migrate the schema, use the following curl command:
To migrate data for a specific master/module name, use the following curl command:
Here is an example of a schema:
NOTE: To migrate data for a specific master/module name, use the following code changes in migrateMasterData
For creating a new master in MDMS, create the JSON file with the master data and configure the newly created master in the master config file.
Before proceeding with the configuration, make sure the following pre-requisites are met -
User with permission to edit the git repository where MDMS data is configured.
After adding the new master, the MDMS service needs to be restarted to read the newly added data.
The new JSON file needs to contain 3 keys as shown in the below code snippet. The new master can be created either State-wise or ULB-wise. Tenant ID and config in the master config file determine this.
The master config file is structured as below. Each key in the master config is a module and each key in the module is a master.
Each master contains the following data and the keys are self-explanatory
Configure bulk generation of PDF documents
The objective of the PDF generation service is to bulk generate pdf as per requirement.
Before proceeding with the documentation, ensure the following prerequisites are met:
NPM
Ensure the Kafka server is operational
Confirm that the service is running and configured with a persister
Verify that the PostgreSQL (PSQL) server is running, and a database is created to store filestore IDs and job IDs of generated PDFs
Note: Refer to this to learn how to install PostgreSQL locally and then create a DB.
Provide a common framework to generate PDFs
Provide flexibility to customise the PDF as per the requirement
Provide functionality to add an image or QR code in a PDF
Provide functionality to generate PDFs in bulk
Provide functionality to specify the maximum number of records to be written in one PDF
Create data config and format config for a PDF according to product requirements.
Add data config and format config files in PDF configuration.
Add the file path of data and format config in the environment yaml file.
Deploy the latest version of pdf-service in a particular environment.
The PDF configuration can be used by any module which needs to show particular information in PDF format that can be printed/downloaded by the user.
Functionality to generate PDFs in bulk.
Avoid regeneration.
Support QR codes and Images.
Functionality to specify the maximum number of records to be written in one PDF.
Upload the generated PDF to the filestore and return the filestore ID for easy access.
To download and print the required PDF, the _create API has to be called with the required key (for integration with the UI, please refer to the links in Reference Docs).
Note: All the APIs are in the same Postman collection, therefore, the same link is added in each row.
eGov Payment Gateway acts as a liaison between eGov apps and external payment gateways facilitating payments, reconciliation of payments and lookup of transactions' status'.
Before you proceed with the documentation, make sure the following pre-requisites are met -
Java 17
Kafka server is up and running
service is running and has pg service added in it
PSQL server is running and the database is created to store transaction data.
Note : You can follow this to setup postgreSQL locally and create a DB in it.
Create or initiate a transaction, to make a payment against a bill.
Make payment for multiple bill details [multi module] for a single consumer code at once.
To initiate a transaction, a call is made to the transaction/_create API; various validations are carried out to ensure the sanctity of the request.
The response includes a generated transaction id and a redirect URL to the payment gateway itself.
Various validations are carried out to verify the authenticity of the request and the status is updated accordingly. If the transaction is successful, a receipt is generated for the same.
Reconciliation is carried out by two jobs scheduled via a Quartz clustered scheduler.
The early Reconciliation job is set to run every 15 minutes [configurable via app properties] and is aimed at reconciling transactions which were created 15 - 30 minutes ago and are in a PENDING state.
The daily reconciliation job is set to run once per day and is aimed at reconciling all transactions that are in a PENDING state, except for the ones created within the last 30 minutes.
Axis, Phonepe and Paytm payment gateways are implemented.
The following properties in the application.properties file in egov-pg-service have to be added and set to default values after integrating with the new payment gateway. The table below shows the properties for the AXIS bank payment gateway; the same relevant properties need to be added for other payment gateways.
Deploy the latest version of egov-pg-service.
Add pg service persister yaml path in persister configuration.
The egov-pg-service acts as communication/contact between eGov apps and external payment gateways.
Record every transaction against a bill.
Record of payment for multiple bill details for a single consumer code at once.
To integrate, the host of egov-pg-service should be overwritten in helm chart
/pg-service/transaction/v1/_create should be added in the module to initiate a new payment transaction, on successful validation
/pg-service/transaction/v1/_update should be added as the update endpoint to update an existing payment transaction. This endpoint is invoked only by payment gateways to update the status of payments. It verifies the authenticity of the request with the payment gateway and forwards all query params received from the payment gateway.
/pg-service/transaction/v1/_search should be added as the search endpoint for retrieving the current status of a payment in our system.
(Note: All the APIs are in the same postman collection therefore the same link is added in each row)
All content on this page by eGov Foundation is licensed under a Creative Commons Attribution 4.0 International License.
Now, properties can be added under the schema definition. In JSON Schema terms, properties is a validation keyword. When you define properties, you create an object where each property represents a key in the JSON data that's being validated. You can also specify which properties described in the object are required.
Reference: JSON Schema - Creating your first schema
Please check the link to create a new master
ex: /pb/common-masters/ - here “pb” is the tenant folder name.
ex: /pb/TradeLicense/ - here “pb” is the tenant folder name and “TradeLicense“ is the module name.
ex: /pb/amritsar/TradeLicense/ - here “amritsar“ is the ULB folder name and “TradeLicense“ is the module name. All the data specific to this module for the ULB is configured inside this folder.
Each persister config has a version attribute which signifies the service version; this version can contain a custom DSL.
PDFMake - used for generating PDFs.
Mustache.js - used as the templating engine to populate the format defined in the format config, from the request JSON, based on the mappings defined in the data config.
For configuration details, refer to the configuration documentation.
Additional gateways can be added by implementing the interface. No changes are required to the core packages.
| Attribute Name | Description |
|---|---|
| serviceName | Name of the module to which this configuration belongs. |
| summary | Summary of the module. |
| version | Version of the configuration. |
| mappings | List of definitions within the module. Every definition corresponds to one index requirement, which means every object received onto the kafka queue can be used to create multiple indexes. Each of these indexes needs a configuration, and all such configurations belonging to one topic form one entry in the mappings list. The keys listed below together form one definition, and multiple such definitions are part of this mappings key. |
| topic | The topic on which the data is to be received to activate this particular configuration. |
| configKey | Key to identify what type of job this config is for. Values: INDEX (LiveIndex), REINDEX (Reindex), LEGACYINDEX (LegacyIndex). |
| indexes | Key to configure multiple index configurations for the data received on a particular topic. Multiple indexes based on different requirements can be created using the same object. |
| name | Index name on the elastic search. (The index will be created if it doesn't exist with this name.) |
| type | Document type within that index to which the index json has to go. (Elasticsearch uses the structure of index/type/docId to locate any file within index/type with id = docId.) |
| id | Takes comma-separated JsonPaths. The JSONPath is applied on the record received on the queue, and the values so obtained are appended and used as the ID for the record. |
| isBulk | Boolean key to identify whether the JSON received on the queue is from a bulk API, in simple words, whether the JSON contains a list at the top level. |
| jsonPath | Key to be used in case of indexing a part of the input JSON, and in case of indexing a custom json where the values for the custom json are to be fetched from this part of the input. |
| timeStampField | JSONPath of the field in the input which can be used to obtain the timestamp of the input. |
| fieldsToBeMasked | A list of JSONPaths of the fields of the input to be masked in the index. |
| customJsonMapping | Key to be used while building an entirely different object using the input JSON on the queue. |
| indexMapping | A skeleton/mapping of the JSON that is to be indexed. Note that this JSON must always contain a key called "Data" at the top level and the custom mapping begins within this key. This is only a convention to smoothen dashboarding on Kibana when data from multiple indexes have to be fetched for a single dashboard. |
| fieldMapping | Contains a list of configurations. Each configuration contains keys to identify the field of the input JSON that has to be mapped to the fields of the index json which is mentioned in the key 'indexMapping' in the config. |
| inJsonPath | JSONPath of the field from the input. |
| outJsonPath | JSONPath of the field of the index json. |
| externalUriMapping | Contains a list of configurations. Each configuration contains keys to identify the field of the input JSON that is to be enriched using APIs from the external services. The configuration for those APIs is also a part of this. |
| path | URI of the API to be used. (It should be a POST/_search API.) |
| queryParam | Configuration of the query params to be used for the API call. It is a comma-separated key-value pair, where the key is the parameter name as per the API contract and the value is the JSONPath of the field to be equated against this parameter. |
| apiRequest | Request body of the API. (Since we only use _search APIs, it should be only RequestInfo.) |
| uriResponseMapping | Contains a list of configurations. Each configuration contains two keys: one is a JSONPath to identify the field from the response, and the second is a JSONPath to map the response field to a field of the index json mentioned in the key 'indexMapping'. |
| mdmsMapping | Contains a list of configurations. Each configuration contains keys to identify the field of the input JSON that is to be denormalised using APIs from the MDMS service. The configuration for those MDMS APIs is also a part of this. |
| path | URI of the API to be used. (It should be a POST/_search API.) |
| moduleName | Module name from MDMS. |
| masterName | Master name from MDMS. |
| tenantId | Tenant id to be used. |
| filter | Filter to be applied to the data to be fetched. |
| filterMapping | Maps the fields of the input json to variables in the filter. |
| variable | Variable in the filter. |
| valueJsonpath | JSONPath of the input to be mapped to the variable. |
Adding new language to the DIGIT System. Refer to the link provided to find out how languages are added in DIGIT.
| Variable Name | Description |
|---|---|
| egov.services.egov_mdms.hostname | Host name for the MDMS service. |
| egov.services.egov_mdms.searchpath | MDMS search URL. |
| egov.service.egov.mdms.moduleName | MDMS module which contains the boundary master. |
| egov.service.egov.mdms.masterName | MDMS master file which contains the boundary detail. |
| Attribute | Description |
|---|---|
| tenantId | The tenantId (ULB code) for which the boundary data configuration is defined. |
| moduleName | The name of the module where the TenantBoundary master is present. |
| TenantBoundary.hierarchyType.name | Unique name of the hierarchy type. |
| TenantBoundary.hierarchyType.code | Unique code of the hierarchy type. |
| TenantBoundary.boundary.id | Id of the boundary defined for a particular hierarchy. |
| boundaryNum | Sequence number of the boundary attribute defined for the particular hierarchy. |
| name | Name of the boundary, like Block 1, Zone 1 or the city name. |
| localname | Local name of the boundary. |
| longitude | Longitude of the boundary. |
| latitude | Latitude of the boundary. |
| label | Label of the boundary. |
| code | Code of the boundary. |
| children | Details of its sub-boundaries. |
| Environment Variables | Description |
|---|---|
| egov.mdms.conf.path | The default value of the folder where master data files are stored. |
| masters.config.url | The default value of the file URL which contains master-config values. |
egov-mdms sample data: https://github.com/egovernments/egov-mdms-data/tree/DEV/data (download this and refer to its path as the conf path value)
master-config.json: https://github.com/egovernments/egov-mdms-data/blob/DEV/master-config.json (refer to this path as the master config value)
| Key | Description |
|---|---|
| tenantId | Serves as a key. |
| moduleName | Name of the module to which the master data belongs. |
| MasterName | The Master Name will be substituted by the actual name of the master data. The array succeeding it will contain the actual data. |
MasterDetail

| Field | Description | Mandatory | Data Type |
|---|---|---|---|
| tenantId | Unique id for a tenant. | Yes | String |
| filePath | File path on git where master data is to be created or updated. | Yes | String |
| masterName | Master data name to be created or updated. | Yes | String |
| masterData | Content to be written on to the config file. | Yes | Object |

MdmsCriteria

| Field | Description | Mandatory | Data Type |
|---|---|---|---|
| tenantId | Unique id for a tenant. | Yes | String |
| moduleDetails | Module for which master data is required. | Yes | Array |

Mdms

| Field | Description | Mandatory | Data Type |
|---|---|---|---|
| mdms | Array of modules. | Yes | Array |

RequestInfo

| Field | Description | Mandatory | Data Type |
|---|---|---|---|
| apiId | Unique API ID. | Yes | String |
| ver | API version - for HTTP based requests this will be the same as used in the path. | Yes | String |
| ts | Time in epoch format: int64. | Yes | Long |
| action | API action to be performed like _create, _update, _search (denoting POST, PUT, GET) or _oauth etc. | Yes | String |
| DID | Device ID from which the API is called. | No | String |
| Key | API key (API key provided to the caller in case of server to server communication). | No | String |
| msgId | Unique request message id from the caller. | Yes | String |
| requestorId | UserId of the user calling. | No | String |
| authToken | Session/jwt/saml token/oauth token - the usual value that would go into the HTTP bearer token. | Yes | String |

ResponseInfo

| Field | Description | Mandatory | Data Type |
|---|---|---|---|
| apiId | Unique API ID. | Yes | String |
| ver | API version. | Yes | String |
| ts | Response time in epoch format: int64. | Yes | Long |
| resMsgId | Unique response message id (UUID) - will usually be the correlation id from the server. | No | String |
| msgId | Message id of the request. | No | String |
| status | Status of request processing - Enum: SUCCESSFUL (HTTP 201) or FAILED (HTTP 400). | Yes | String |

Error

| Field | Description | Mandatory | Data Type |
|---|---|---|---|
| code | Error Code will be a module-specific error label/code to identify the error. All modules should also publish the error codes with their specific localised values in the localisation service to ensure clients can print locale-specific error messages. An example of an error code would be UserNotFound to indicate User Not Found by the User/Authentication service. All services must declare their possible error codes with a brief description in the error response section of their API path. | Yes | String |
| message | English locale message of the error code. Clients should make a separate call to get the other locale description if configured with the service. Clients may choose to cache these locale-specific messages to enhance performance with a reasonable TTL (may be defined by the localisation service based on the tenant + module combination). | Yes | String |
| description | Optional long description of the error to help clients take remedial action. This will not be available as part of the localisation service. | No | String |
| params | Some error messages may carry replaceable fields (say $1, $2) to provide more context to the message. E.g. format-related errors may want to indicate the actual field for which the format is invalid. Clients should use the values in the params array to replace those fields. | No | Array |
| Environment Variables | Description |
|---|---|
| | Maximum number of records to be written in one PDF |
| | Date timezone which will be used to convert epoch timestamp into date ( |
| | Default value of localisation locale |
| | Default value of localisation tenant |
| | File path/URL's of the data config |
| | File path/URL's of the format config |
| Property | Remarks |
|---|---|
| | Boolean flag to set the payment gateway active/inactive |
| | Currency representation for the merchant, default (INR) |
| | Payment merchant Id |
| | Secret key for the payment merchant |
| | User name to access the payment merchant for transactions |
| | Password of the user to access the payment merchant |
| | Access code |
| | Pay command |
| | Command status |
| | URL for making the payment |
| | URL to get the status of the transaction |
| Variable Name | Default Value | Description |
|---|---|---|
| persister.bulk.enabled | false | Switch to turn on or off the bulk kafka consumer |
| persister.batch.size | 100 | The batch size for bulk update |
To use the generic GET/POST SMS gateway, first, configure the service application properties sms.provider.class=Generic
This will set the generic interface to be used. This is the default implementation, which can work with most SMS providers. The generic implementation supports below -
GET or POST-based API
Supports query params, form data, JSON body
To configure the URL of the SMS provider use sms.provider.url property.
To configure the HTTP method, set the sms.provider.requestType property to either GET or POST.
To configure form data or json api set sms.provider.contentType=application/x-www-form-urlencoded or sms.provider.contentType=application/json respectively.
To configure which data needs to be sent to the API set the below property:
sms.config.map={'uname':'$username', 'pwd': '$password', 'sid':'$senderid', 'mobileno':'$mobileno', 'content':'$message', 'smsservicetype':'unicodemsg', 'myParam': '$extraParam' , 'messageType': '$mtype'}
sms.category.map={'mtype': {'*': 'abc', 'OTP': 'def'}}
sms.extra.config.map={'extraParam': 'abc'}
sms.extra.config.map is not used currently and is only kept for custom implementation which requires data that does not need to be directly passed to the REST API call. sms.config.map is a map of parameters and their values.
Special variables that are mapped -
$username maps to sms.provider.username
$password maps to sms.provider.password
$senderid maps to sms.senderid
$mobileno maps to mobileNumber from kafka fetched message
$message maps to the message from the kafka fetched message
$<name>: any variable that is not in the above list is first checked in sms.category.map, then in application.properties, and then in the environment variables (with the name fully upper-cased and '_' replacing '-', space or '.').
So if you use sms.config.map={'u':'$username', 'p':'password'}, then the API call will be passed as <url>?u=<value of sms.provider.username>&p=password.
Message success delivery can be controlled using the below properties
sms.verify.response (default: false)
sms.print.response (default: false)
sms.verify.responseContains
sms.success.codes (default: 200,201,202)
sms.error.codes
If you want to verify some text in the API call response set sms.verify.response=true and sms.verify.responseContains to the text that should be contained in the response.
It is possible to whitelist or blacklist phone numbers to which the messages should be sent. This can be controlled using the below properties:
sms.blacklist.numbers
sms.whitelist.numbers
Both of them can be given a separate list of numbers or number patterns. To use patterns use X for any digit match and * for any number of digits match.
sms.blacklist.numbers=5*,9999999999,88888888XX will blacklist any phone number starting with 5, or the exact number 9999999999 and all numbers starting from 8888888800 to 8888888899
A few third-party providers require a prefix of 0, 91 or +91 with the mobile number. In such cases, use sms.mobile.prefix to automatically add the prefix to the mobile number coming from the message queue.
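Putting the above together, a hedged example of the relevant application.properties entries (all values are placeholders; only properties named in this section are used):

```
sms.provider.class=Generic
sms.provider.url=https://sms.example.com/send
sms.provider.requestType=POST
sms.provider.contentType=application/x-www-form-urlencoded
sms.provider.username=myuser
sms.provider.password=mypassword
sms.senderid=EGOVSM
sms.config.map={'uname':'$username', 'pwd':'$password', 'sid':'$senderid', 'mobileno':'$mobileno', 'content':'$message'}
sms.success.codes=200,201,202
sms.whitelist.numbers=99999XXXXX
sms.mobile.prefix=91
```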
Service request allows users to define a service and then create a service against service definitions. A service definition can be a survey or a checklist which the users might want to define and a service against the service definition can be a response against the survey or a filled-out checklist.
Prior knowledge of Java/J2EE
Prior knowledge of SpringBoot
Prior knowledge of PostgreSQL
Prior knowledge of REST APIs and related concepts like path parameters, headers, JSON etc.
Users can -
Create and search service definitions.
Create and search services.
/service-request/service/definition/v1/_create - Takes RequestInfo and ServiceDefinition in the request body. ServiceDefinition has all the parameters related to the service definition being created.
/service-request/service/definition/v1/_search - Allows searching of existing service definitions. Takes RequestInfo, ServiceDefinitionCriteria and Pagination objects in the request body.
/service-request/service/v1/_create - Takes RequestInfo and Service in the request body. Service has all the parameters related to the service being created against a particular ServiceDefinition.
/service-request/service/v1/_search - Allows searching of existing services created against service definitions. Takes RequestInfo, ServiceCriteria and Pagination objects in the request body.
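For illustration, a request body for /service-request/service/definition/v1/_create could look roughly like the following; the attribute fields shown are indicative, so refer to the collection and contract linked below for the authoritative structure:

```
{
  "RequestInfo": {},
  "ServiceDefinition": {
    "tenantId": "pb.amritsar",
    "code": "SANITATION_CHECKLIST",
    "isActive": true,
    "attributes": [
      { "code": "TOILET_CLEANED", "dataType": "Boolean", "required": true },
      { "code": "REMARKS", "dataType": "String", "required": false }
    ]
  }
}
```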
Detailed API payloads for interacting with Service Request for all four endpoints can be found in the following collection - Service Request Collection
The link to the swagger documentation can be found below - Service Request Contract
Configure escalation flows based on predefined criteria
The objective of this functionality is to provide a mechanism to trigger action on applications which satisfy certain predefined criteria.
Looking at the sample use cases provided by the product team, the majority can be summarised as: perform action ‘X’ on applications which are in state ‘Y’ and have exceeded the state SLA by ‘Z’ days. A single query builder takes the state ‘Y’ and the SLA-exceeded-by value ‘Z’ as search params, and action ‘X’ is then performed on the search response. This has been achieved by defining an MDMS config like below:
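The embedded configuration is not reproduced here; a hedged sketch, with field names inferred from the behaviour described below, would look something like this:

```
{
  "tenantId": "pb",
  "moduleName": "Workflow",
  "AutoEscalation": [
    {
      "businessService": "PGR",
      "state": "RESOLVED",
      "stateSLAExceededBy": 1.0,
      "action": "CLOSERESOLVEDCOMPLAIN",
      "active": true
    }
  ]
}
```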
In the above configuration, we define the condition for triggering the escalation of applications. The configuration triggers escalation for applications in the RESOLVED state which have exceeded the stateSLA by more than 1.0 day, and escalates them by performing the CLOSERESOLVEDCOMPLAIN action on the applications. Once the applications are escalated, the processInstances are pushed onto the pgr-auto-escalation topic. We have done a sample implementation for pgr-services, where we have updated the persister configuration to listen on this topic and update the complaint status accordingly.
The auto-escalation for the businessService PGR will be triggered when the following API is called:
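Following the pattern shown in the note below, a sketch of the call for PGR would be (the RequestInfo/userInfo shown is a placeholder for the AUTO_ESCALATE user described further down):

```
curl -X POST 'http://egov-workflow-v2.egov:8080/egov-workflow-v2/egov-wf/auto/PGR/_escalate' \
  -H 'Content-Type: application/json' \
  -d '{ "RequestInfo": { "userInfo": { "roles": [ { "code": "AUTO_ESCALATE", "tenantId": "pb" } ] } } }'
```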
Note: The businessService is a path param. (For example, if the escalation has to be done for the tl-services NewTL workflow, the URL will be 'http://egov-workflow-v2.egov:8080/egov-workflow-v2/egov-wf/auto/NewTL/_escalate'.)
These APIs have to be configured in the cron job config so that they can be triggered periodically as per requirements. Only a user with the AUTO_ESCALATE role permission can trigger auto escalations. Hence, create a user with the state-level AUTO_ESCALATE role permission and add that user in the userInfo of the requestInfo. This step is required because the cron job makes internal API calls and ZUUL will not enrich the userInfo.
For setting up an auto-escalation trigger, the workflow must be updated. For example, to add an auto-escalate trigger on the RESOLVED state with the action CLOSERESOLVEDCOMPLAIN in the PGR businessService, search the businessService, add the following action in the actions array of the RESOLVED state, and call the update API.
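A hedged illustration of such an action entry; the nextState and other values are indicative and must match your actual workflow configuration:

```
{
  "action": "CLOSERESOLVEDCOMPLAIN",
  "nextState": "CLOSEDAFTERRESOLUTION",
  "roles": ["AUTO_ESCALATE"],
  "active": true
}
```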
Suppose an application gets auto-escalated from state ‘X' to state 'Y’, employees can look at these escalated applications through the escalate search API. The following sample cURL can be used to search auto-escalated applications of the PGR module belonging to Amritsar tenant -
The inbox service is an event-based service that fetches pre-aggregated data of municipal services and workflow, performs complex search operations, and returns applications and workflow data in a paginated manner. The service also returns the total count matching the search criteria.
The first step is to capture pre-aggregated events for the module which needs to be enabled in event-based inbox. This index needs to hold all the fields against which a search needs to be performed and any other fields which need to be shown in the UI when applications are listed.
This service allows searching both the module objects and the processInstance (workflow record) based on the provided criteria for any of the municipal services. For this, it uses a module-specific configuration. A sample configuration is given below -
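The embedded sample is not reproduced here; an illustrative configuration object, built from the keys described next, could look like this (the index name, paths and operator values are placeholders):

```
{
  "module": "PGR",
  "index": "pgr-inbox-index",
  "sortBy": {
    "path": "Data.auditDetails.createdTime",
    "defaultOrder": "DESC"
  },
  "sourceFilterPathList": ["Data.history"],
  "allowedSearchCriteria": [
    { "name": "tenantId", "path": "Data.tenantId", "isMandatory": true, "operator": "EQUAL" },
    { "name": "status", "path": "Data.applicationStatus", "isMandatory": false, "operator": "EQUAL" },
    { "name": "fromDate", "path": "Data.auditDetails.createdTime", "isMandatory": false, "operator": "GTE" }
  ]
}
```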
Inside each query configuration object, we have the following keys -
module - Module code for which the inbox has been configured.
index - Index where the pre-aggregated events are stored.
allowedSearchCriteria - Lists the parameters on which searching is allowed for the given module.
sortBy - Specifies the field path inside the pre-aggregated record present in the index against which the result has to be sorted. The default order can be specified as ASC or DESC.
sourceFilterPathList - A list specifying which fields should and should not appear as part of the search result, in order to avoid clutter and to improve query performance.
Each object within allowedSearchCriteria has the following keys -
name - Name of the incoming search parameter in the inbox request body.
path - Path inside the pre-aggregated record present in the index against which the incoming search parameter needs to be matched.
isMandatory - Specifies whether a particular parameter is mandatory in the inbox search request.
operator - Specifies which operator clause needs to be applied while forming ES queries. Currently, only equality and range comparison operators are supported.
A new inbox service needs to be enabled via the configuration present in MDMS. The path to the MDMS configuration is - https://github.com/egovernments/egov-mdms-data/blob/DEV/data/pb/inbox-v2/InboxConfiguration.json.
Once the configuration is added to this MDMS file, the MDMS service for that particular environment has to be restarted.
If any existing module needs to be migrated onto a new inbox, data needs to be reindexed, and configuration like the above needs to be written to enable these modules on the new inbox.
For modules where search has to be given on PII data like mobile number, a migration API needs to be written which will fetch data from the database, decrypt it, hash it using the encryption client and store it in the index to enable search.
For modules where a search has to be given on non-PII data, the indexer service’s _legacyIndex API can be invoked to move data to the new index.
API Swagger Documentation
Configure user data management services
The user service is responsible for user data management and provides the functionality to log in and log out of the DIGIT system.
Before you proceed with the configuration, make sure the following pre-requisites are met
Java 17
Encryption and MDMS services are running
PostgreSQL server is running
Redis is running
Store, update and search user data
Provide Authentication
Provide login and logout functionality into the DIGIT platform
Store user data PIIs in encrypted form
Setup the latest version of egov-enc-service and egov-mdms- service
Deploy the latest version of egov-user service
Note: This video will give you an idea of how to deploy any DIGIT service. Further, you can find the latest builds for each service in our latest release document here.
Add role-action mapping for APIs
Note: This is a sample JSON file containing the role-action mapping. If you don't have any of the master data set up yet, you can use this to create it, then add all these files and start making changes in your repo.
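For reference, a single role-action mapping entry in this master data has the shape shown below (the same structure appears later in this document); the actionid and role code here are placeholders.

{
  "rolecode": "CITIZEN",
  "actionid": 1234,
  "actioncode": "",
  "tenantId": "pb"
}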
The following properties in the user service's application.properties file are configurable.
User data management and functionality to log in to and log out of the DIGIT system using OTP and password.
The service provides the following functionalities to citizen and employee-type users:
Employee:
User registration
Search user
Update user details
Forgot password
Change password
User role mapping (single ULB to multiple roles)
Enable employees to log in to the DIGIT system using a password.
Citizen:
Create user
Update user
Search user
User registration using OTP
OTP based login
To integrate, the host of egov-user should be overwritten in the helm chart.
Use /citizen/_create
endpoint for creating users into the system. This endpoint requires the user to validate his mobile number using OTP. First, the OTP is sent to the user's mobile number and then the OTP is sent as otpReference
in the request body.
Use /v1/_search
and /_search
endpoints to search users in the system depending on various search parameters.
Use /profile/_update
for updating the user profile. The user is validated (either through OTP-based validation or password validation) when this API is called.
/users/_createnovalidate
and /users/_updatenovalidate
are endpoints to create user data into the system without any validations (no OTP or password required). They should be strictly used only for creating/updating users internally and should not be exposed outside.
Forgot password: In case the user forgets the password it can be reset by first calling /user-otp/v1/_send
which generates and sends OTP to the employee’s mobile number. The password is then updated using this OTP by calling the API /password/nologin/_update
in which a new password along with the OTP is sent.
Use /password/_update
to update the existing password after logging in. Both the old and new passwords are sent in the request body. Details of the API can be found in the attached Swagger documentation.
Use /user/oauth/token
for generating tokens, /_logout
for logout and /_details
for getting user information from the token.
Multi-Tenant User: The multi-tenant user functionality allows users to perform actions across multiple ULBs. For example, employees belonging to Amritsar can perform the role of say Trade License Approver for Jalandhar by assigning them the tenant-level role of tenantId pb.jalandhar.
Following is an example of the user:
If an employee has a role with a state-level tenantId, they can perform actions corresponding to that role across all tenants.
Refresh Token: Whenever /user/oauth/token is called to generate the access_token, one more token called refresh_token is generated along with it. The refresh token is used to generate a new access_token whenever the existing one expires. As long as the refresh token is valid, users do not have to log in again even if their access_token expires, since a new one is generated using the refresh_token. The validity time of the refresh token is configurable using the property refresh.token.validity.in.minutes
Since the user service handles PII (Personally Identifiable Information), encrypting the data before saving it in the DB becomes crucial.
DIGIT manages these as a security policy in master data, which is then referred to by the encryption service to encrypt the data before persisting it to the DB.
There are two security policy models for user data - User and UserSelf.
User model: attributes contains a list of fields from the user object that need to be secured, and roleBasedDecryptionPolicy is an attribute-level role-based policy that defines visibility for each attribute. The User security model is used for the Search API response.
UserSelf: It has the same security policy structure, but UserSelf is used for the Create/Update API response.
The visibility of the PII data is based on the above MDMS configuration. There are three types of visibility mentioned in the config.
PLAIN - Show text in plain form.
MASKED - The returned text contains masked data. The masking pattern is applied as defined in the Masking Patterns master data.
NONE - The returned text does not contain any data. It contains strings like “Confidential Information”.
Any user can get plain access to the secured data (citizen’s PII) by requesting through the plainAccessRequest
parameter. It takes the following parameters:
recordId
- It is the unique identifier of the record that is requested for plain access.
fields
- It defines a list of attributes that are requested for plain access.
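A minimal sketch of how plainAccessRequest could be passed inside RequestInfo, assuming a placeholder record id; mobileNumber is used here only as an example of a secured attribute.

{
  "RequestInfo": {
    "plainAccessRequest": {
      "recordId": "b5c2a7e0-0000-0000-0000-000000000000",
      "fields": ["mobileNumber"]
    }
  }
}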
To know more about the encryption policy, refer to the document Encryption Service docs.
The persister service persists data in the database in a synchronous manner, providing very low latency. The queries used to insert/update data in the database are written in a YAML file. The values to be inserted are extracted from the JSON using jsonPaths defined in the same YAML configuration.
Below is a sample configuration which inserts data in a couple of tables.
The above configuration is used to insert data published on the kafka topic save-pgr-request
in the tables eg_pgr_service_v2
and eg_pgr_address_v2
. Similarly, the configuration can be written to update data. Following is a sample configuration:
The above configuration is used to update the data in tables. Similarly, an upsert operation can be done using the ON CONFLICT clause in PostgreSQL.
The table below describes each field variable in the configuration.
The objective of this service is to create a common point to manage all the SMS notifications being sent out of the platform. The notification SMS service consumes SMS from the Kafka notification topic and processes them to send them to a third-party service. Modules like PT, TL, PGR etc. make use of this service to send messages through the Kafka queue.
Before you proceed with the documentation, make sure the following pre-requisites are met -
Prior knowledge of Java/J2EE
Prior knowledge of SpringBoot
Prior knowledge of Third party API integration
Prior knowledge of REST APIs and related concepts like path parameters, headers, JSON etc
Prior knowledge of Kafka and related concepts like Producer, Consumer, Topic etc.
Provide a common platform to send an SMS notification to the user.
Support localised SMS.
Easily configurable with a different SMS service provider.
This service is a consumer, which means it reads from the Kafka queue and does not provide a facility to be accessed through API calls; there is no REST layer here. Producers willing to integrate with this consumer post a JSON onto the topic configured at ‘kafka.topics.notification.sms.name’. The notification-sms service reads from the queue and sends the SMS to the mentioned phone number using one of the configured SMS providers.
The implementation of the consumer is present in the directory
src/main/java/org/egov/web/notification/sms/service/impl
.
The currently available providers are:
Generic
Console
MSDG
The implementation to be used can be configured by setting sms.provider.class
.
The Console
implementation simply prints the mobile number and the message to the console.
This is the default implementation, which works with most SMS providers. The generic implementation supports the following:
GET or POST-based API
Supports query params, form data, JSON Body
To configure the URL of the SMS provider, use the sms.provider.url
property. To configure the HTTP method, set the sms.provider.requestType
property to either GET
or POST
.
To configure a form-data or JSON API, set sms.provider.contentType=application/x-www-form-urlencoded
or sms.provider.contentType=application/json
respectively.
To configure which data needs to be sent to the API, the below properties can be configured:
sms.config.map
={'uname':'$username', 'pwd': '$password', 'sid':'$senderid', 'mobileno':'$mobileno', 'content':'$message', 'smsservicetype':'unicodemsg', 'myParam': '$extraParam' , 'messageType': '$mtype'}
sms.category.map
={'mtype': {'*': 'abc', 'OTP': 'def'}}
sms.extra.config.map
={'extraParam': 'abc'}
sms.extra.config.map
is not used currently and is only kept for custom implementation which requires data that doesn't need to be directly passed to the REST API call
sms.config.map
is a map of parameters and their values
Special variables that are mapped
$username
maps to sms.provider.username
$password
maps to sms.provider.password
$senderid
maps to sms.senderid
$mobileno
maps to mobileNumber
from the message fetched from Kafka
$message
maps to the message
from the message fetched from Kafka
$<name>
any variable that is not from the above list is first checked in sms.category.map, then in application.properties, and then in the environment variables (with the name in full upper case and _ replacing -, space or .).
So if you use sms.config.map={'u':'$username', 'p':'password'}, then the API call will be passed <url>?u=<$username>&p=password
Message Success or Failure
Message success delivery can be controlled using the below properties
sms.verify.response
(default: false)
sms.print.response
(default: false)
sms.verify.responseContains
sms.success.codes
(default: 200,201,202)
sms.error.codes
If you want to verify some text in the API call response set sms.verify.response=true
and sms.verify.responseContains
to the text that should be contained in the response.
Blacklisting or Whitelisting numbers
It is possible to whitelist or blacklist phone numbers to which the messages should be sent. This can be controlled using below properties:
sms.blacklist.numbers
sms.whitelist.numbers
Both of them can be given a ,
separated list of numbers or number patterns. To use patterns use X
for any digit match and *
for any number of digits match.
sms.blacklist.numbers=5*,9999999999,88888888XX
will blacklist any phone number starting with 5
, or the exact number 9999999999
and all numbers starting from 8888888800
to 8888888899
Prefixing
A few third-party providers require a prefix of 0
or 91
or +91
with the mobile number. In such a case you can use sms.mobile.prefix
to automatically add the prefix to the mobile number coming into the message queue.
Error Handling
There are different topics to which the service will send messages. Below is a list of the same:
kafka.topics.backup.sms
kafka.topics.expiry.sms=egov.core.sms.expiry
kafka.topics.error.sms=egov.core.sms.error
In an event of a failure to send an SMS, if kafka.topics.backup.sms
is specified, then the message will be pushed onto that topic.
Any SMS which expires due to Kafka lag or some other internal issue will be passed to the topic configured in kafka.topics.expiry.sms
If a backup topic has not been configured, then in the event of an error the message will be delivered to the topic configured in kafka.topics.error.sms
The following properties in the notification SMS service's application.properties file are configurable.
Add the variables present in the above table in a particular environment file
Deploy the latest version of egov-notification-sms service.
Notification SMS service consumes SMS from the Kafka notification topic and processes them to send it to a third-party service. Modules like PT, TL, PGR etc make use of this service to send messages through the Kafka Queue.
Provide an interface to send notification SMS on user mobile number.
Support SMS in various languages.
To integrate, create the SMS request body given in the example below. Provide the correct mobile number and message in the request body and send it to the Kafka topic:- egov.core.notification.sms
The notification-sms service reads from the queue and sends the sms to the mentioned phone number using one of the SMS providers configured.
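As the original example is not reproduced here, the snippet below is only an indicative shape of the JSON pushed onto the topic; the text above mentions the mobile number and message, and any additional fields should be verified against the SMSRequest contract used by your modules.

{
  "mobileNumber": "9999999999",
  "message": "Dear citizen, your application has been received."
}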
Create and modify workflow configuration
Each service integrated with egov-workflow-v2 service needs to first define the workflow configuration which describes the workflow states, the action that can be taken on these states, the user roles that can perform those actions, SLAs etc. This configuration is created using APIs and is stored in the DB. The configuration can be created at either the state level or the tenant level based on the requirements.
Before you proceed with the configuration, make sure the following pre-requisites are met -
egov-workflow-v2 service is up and running
Role action mapping is added for the BusinessService APIs
Create and modify workflow configuration
Configure state-level as well as BusinessService-level SLA
Control access to workflow actions from the configuration
Validates if the flow defined in the configuration is complete during the creation
Deploy the latest version of egov-workflow-v2 service.
Add role action mapping for BusinessService APIs (preferably add _create and _update only for SUPERUSER; _search can be added for CITIZEN and required employee roles like TL_CEMP etc.).
Overwrite the egov.wf.statelevel flag (true for state level and false for tenant level).
Add businessService persister yaml path in persister configuration.
Create the businessService JSON based on product requirements. Following is a sample json of a simple 2-step workflow where an application can be applied by a citizen or counter employee and then can be either rejected or approved by the approver.
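The sample JSON itself is not reproduced here; the following abridged sketch illustrates a 2-step flow using the attributes described in this document. The workflow name, state names, action names, role codes and SLA value are placeholders, and the exact payload shape should be verified against the BusinessService contract.

{
  "tenantId": "pb",
  "businessService": "SampleService",
  "business": "sample-module",
  "businessServiceSla": 432000000,
  "states": [
    {
      "state": null,
      "applicationStatus": null,
      "isStartState": true,
      "isTerminateState": false,
      "isStateUpdatable": true,
      "docUploadRequired": false,
      "actions": [
        { "action": "APPLY", "nextState": "APPLIED", "roles": ["CITIZEN", "COUNTER_EMPLOYEE"] }
      ]
    },
    {
      "state": "APPLIED",
      "applicationStatus": "APPLIED",
      "isStartState": false,
      "isTerminateState": false,
      "isStateUpdatable": false,
      "docUploadRequired": false,
      "actions": [
        { "action": "APPROVE", "nextState": "APPROVED", "roles": ["APPROVER"] },
        { "action": "REJECT", "nextState": "REJECTED", "roles": ["APPROVER"] }
      ]
    },
    {
      "state": "APPROVED",
      "applicationStatus": "APPROVED",
      "isStartState": false,
      "isTerminateState": true,
      "isStateUpdatable": false,
      "docUploadRequired": false,
      "actions": []
    },
    {
      "state": "REJECTED",
      "applicationStatus": "REJECTED",
      "isStartState": false,
      "isTerminateState": true,
      "isStateUpdatable": false,
      "docUploadRequired": false,
      "actions": []
    }
  ]
}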
Once the businessService json is created add it in the request body of _create API of workflow and call the API to create the workflow.
To update the workflow first search the workflow object using _search API and then make changes in the businessService object and then call _update using the modified search result.
States cannot be removed using the _update API as it will leave applications in that state in an invalid state. In such cases, all the applications in that state should first be moved to a forward or backward state, and then the state should be disabled directly in the DB.
The workflow configuration can be used by any module which performs a sequence of operations on an application/entity. It can be used to simulate and track processes in organisations to make them more efficient and increase accountability.
Integrating with the workflow service provides a way to have a dynamic workflow configuration which can be easily modified according to changing requirements. The modules don't have to deal with any validations regarding workflow, such as authorisation of the user to take an action or whether documents are required to be uploaded at a certain stage, since these are automatically handled by the egov-workflow-v2 service based on the defined configuration. It also automatically keeps updating SLAs for all applications, which provides a way to track the time taken by an application to get processed.
To integrate, the host of egov-workflow-v2
should be overwritten in the helm chart
/egov-workflow-v2/egov-wf/businessservice/_search
should be added as the endpoint for searching workflow configuration. (Other endpoints are not required once workflow configuration is created)
The configuration can be fetched by calling
_search
API
Note: All the APIs are in the same Postman collection therefore the same link is added in each row.
User-OTP service handles the OTP for user registration, user log in and password reset for a particular user.
Before you proceed with the documentation, make sure the following pre-requisites are met -
Java 17
egov-user service is running
egov-localization service is running
egov-otp service is running
The User-OTP service sends the OTP to the user on login request, on password change request and during new user registration.
Deploy the latest version of user-otp.
Make sure egov-otp is running.
Add Role-Action mapping for APIs.
User-OTP service handles the OTP for user registration, user login and password reset for a particular user.
Can perform user registration, login, and password reset.
In the future, if we want to expose the application to citizens then it can be done easily.
To integrate, the host of user-otp module should be overwritten in the helm chart.
/user-otp/v1/_send
should be added as the endpoint for sending OTP to the user via sms or email
BasePath
/user-otp/v1/[API endpoint]
Method
a) POST /_send
This method sends the OTP to a user via sms or email based on the below parameters -
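An indicative request body for this endpoint, built from the parameters it accepts (tenantId, mobileNumber, type and userType); the wrapper key otp and the example values are assumptions that should be checked against the API contract.

{
  "RequestInfo": {},
  "otp": {
    "tenantId": "pb.amritsar",
    "mobileNumber": "9999999999",
    "type": "register",
    "userType": "citizen"
  }
}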
Following are the producer topics:
egov.core.notification.sms.otp
- This topic is used to send the OTP to the user's mobile number.
org.egov.core.notification.email
- This topic is used to send the OTP to the user's email ID.
Tracer is a library that intercepts API calls to DIGIT services which have imported tracer, and logs errors.
A new utility method has been added to tracer; modules can prepare error details and invoke this utility method to persist these error objects so that they can be retried later.
Here are the steps using which any module can utilize this functionality to store error objects -
1. The concerned module has to prepare the error details.
2. The concerned module can then make a call to the exceptionHandler
method, which takes a list of errorDetails
as its argument. This method does a couple of enrichments and then emits these errorDetails
to Kafka for the indexer service to pick up and persist.
3. Create an index with the name egov-tracer-error-details
using this command on Kibana -
PUT egov-tracer-error-details { }
4. Create mapping for this index -
5. Setup indexer with the following indexer configuration file -
6. Now, whenever the exceptionHandler
method is invoked, errorDetails
will be persisted to the index created in step 3.
The enc-client library is a supplementary Java library that supports encryption-related functionalities so that every service does not need to pre-process the request before calling the encryption service.
MDMS Service
Encryption Service
Kafka
Important Note: This library fetches the MDMS configurations explained below at boot time. So after you make changes in the MDMS repo and restart the MDMS service, you would also need to RESTART THE SERVICE which has imported the enc-client library. For example, the report service is using the enc-client library so after making configuration changes to the Security Policy about any report, you will have to restart the report service.
Encrypt a JSON Object - The encryptJson
function of the library takes any Java Object as input and returns an object which has encrypted values of the selected fields. The fields to be encrypted are selected based on an MDMS Configuration.
This function requires the following parameters:
Java/JSON object - The object whose fields will get encrypted.
Model - It is used to identify the MDMS configuration to be used to select fields of the provided object.
Tenant ID - The encryption key will be selected based on the passed tenantId.
Encrypt a Value - The encryptValue
function of the library can be used to encrypt single values. This method also requires a tenantId parameter.
Decrypt a JSON Object - The decryptJson
function of the library takes any Java Object as input and returns an object that has plain/masked or no values of the encrypted fields. The fields are identified based on the MDMS configuration. The returned value(plain/masked/null) of each of the attributes depends on the user’s role and if it is a PlainAccess
request or a normal request. These configurations are part of the MDMS.
This function requires the following parameters:
Java/JSON object - The object containing the encrypted values that are to be decrypted.
Model - It is used to select a configuration from the list of all available MDMS configurations.
Purpose - It is a string parameter that conveys the reason for the decrypt request. It is used for audit purposes.
RequestInfo - The requestInfo parameter serves multiple purposes:
User Role - A list of user roles is extracted from the requestInfo parameter.
PlainAccess Request - If the request is an explicit plain access request, it is to be passed as a part of the requestInfo. It will contain the fields that the user is requesting for decryption and the id of the record.
While decrypting Java objects, this method also audits the request.
All the configurations related to the enc-client library are stored in the MDMS. These master data are stored in DataSecurity
module. It has two types of configurations:
Masking Patterns
Security Policy
{ "patternId": "001", "pattern": ".(?=.{4})" }
The masking patterns for different types of attributes(mobile number, name, etc.) are configurable in MDMS. It contains the following attributes:
patternId
- It is the unique pattern identifier. This id is referred to in the SecurityPolicy MDMS.
pattern
- This defines the actual pattern according to which the value will be masked.
Here is a link to a sample Masking Patterns master data.
The Security Policy master data contains the policy used to encrypt and decrypt JSON objects. Each of the Security Policy contains the following details:
model
- This is the unique identifier of the policy.
uniqueIdentifier
- The field defined here should uniquely identify records passed to the decryptJson
function.
attributes
- This defines a list of fields from the JSON object that needs to be secured.
roleBasedDecryptionPolicy
- This defines attribute-level role-based policy. It will define visibility for each attribute.
The visibility is an enum with the following options:
PLAIN - Show text in plain form.
MASKED - The returned text will contain masked data. The masking pattern will be applied as defined in the Masking Patterns master data.
NONE - The returned text will not contain any data. It would contain a string like “Confidential Information”.
It defines what level of visibility the decryptJson
function should return for each attribute.
{ "name": "mobileNumber", "jsonPath": "mobileNumber", "patternId": "001", "defaultVisibility": "MASKED" }
The Attribute defines a list of attributes of the model that are to be secured. The attribute is defined by the following parameters:
name
- This uniquely identifies the attribute out of the list of attributes for a given model.
jsonPath
- It is the JSON path of the attribute from the root of the model. This jsonPath is NOT the same as the Jayway JsonPath library. This uses /
and *
to define the json paths.
patternId
- It refers to the pattern to be used for masking which is defined in the Masking Patterns master.
defaultVisibility
- It is an enum configuring the default level of visibility of that attribute. If the visibility is not defined for a given role, then this defaultVisibility will apply.
This parameter is used to define the unique identifier of that model. It is used to audit the access logs. (This attribute’s jsonPath should be at the root level of the model.)
{ "name": "uuid", "jsonPath": "uuid" }
It defines attribute-level access policies for a list of roles. It consists of the following parameters:
roles
- It defines a list of role codes for which the policy will be applied. Please make sure not to duplicate role codes anywhere in the other policy. Otherwise, any one of the policies will get chosen for that role code.
attributeAccessList
- It defines a list of attributes for which the visibility differs from the default for those roles.
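Putting the pieces together, a Security Policy entry could look like the sketch below; the model name and role code are placeholders, and the keys inside each attributeAccessList entry are assumptions inferred from the two visibility levels described next.

{
  "model": "User",
  "uniqueIdentifier": {
    "name": "uuid",
    "jsonPath": "uuid"
  },
  "attributes": [
    {
      "name": "mobileNumber",
      "jsonPath": "mobileNumber",
      "patternId": "001",
      "defaultVisibility": "MASKED"
    }
  ],
  "roleBasedDecryptionPolicy": [
    {
      "roles": ["EMPLOYEE"],
      "attributeAccessList": [
        {
          "attribute": "mobileNumber",
          "firstLevelVisibility": "MASKED",
          "secondLevelVisibility": "PLAIN"
        }
      ]
    }
  ]
}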
There are two levels of visibility:
First-level Visibility - This applies to normal search requests. The search response could have multiple records.
Second-level Visibility - It is applied only when a user explicitly requests plain access to a single record with a list of fields required in plain.
Second-level visibility can be requested by passing plainAccessRequest
in the RequestInfo
.
Any user can get plain access to the secured data(citizen’s PII) by requesting through the plainAccessRequest
parameter. It takes the following parameters:
recordId
- It is the unique identifier of the record requested for plain access.
fields
- It defines a list of attributes that are requested for plain access.
Every decrypt request is audited. Based on the uniqueIdentifier
defined as part of the Security Policy, it lists out the identifiers of the records that were decrypted as part of the request.
Each audit object contains the following attributes:
Whenever any user logs in, an authorization token and a refresh token are generated for the user. Using the auth token the client can make REST API calls to the server to fetch data. The auth token has an expiry period. Once the authorization token expires, it cannot be used to make API calls. The client has to generate a new authorization token. This is done by authenticating the refresh token with the server, which then generates and sends a new authorization token to the client. The refresh token avoids the need for the client to log in again whenever the authorization token expires.
Refresh token also has an expiry period and once it gets expired it cannot be used to generate new authorization tokens. The user has to log in again to get a new pair of authorization tokens and refresh tokens. Generally, the duration before the expiry of the refresh token is more as compared to that of authorization tokens. If the user logs out of the account both authorization tokens and refresh tokens become invalid.
Variables to configure expiry time:
Deploy workflow 2.0 in an environment where workflow is already running
In workflow 2.0 assignee is changed from an object to a list of objects.
To accommodate this change, a new table named 'eg_wf_assignee_v2' is added that maps processInstanceIds to assignee UUIDs. To deploy workflow 2.0 in an environment where workflow is already running, the assignee column needs to be migrated to the eg_wf_assignee_v2 table.
The following query does this migration:
Persister config for egov-workflow-v2 is updated. Insert query for the table eg_wf_assignee_v2 is added in egov-workflow-v2-persister.yml.
The latest updated config can be referred to from the below link:
The employee inbox has an added column to display the locality of the applications. This mapping of the application number to locality is fetched by calling the searcher API for the respective module. If a new module is integrated with workflow its searcher config should be added in the locality searcher yaml with module code as a name in the definition. All the search URLs and role action mapping details must be added to the MDMS.
The format of the URL is given below:
Sample request for TL:
The searcher yaml can be referred from the below link:
For sending the application back to citizens, the action with the key 'SENDBACKTOCITIZEN' must be added; the exact key should be used. The resultant state of the action should be a new state. If it points to an existing state, the actions in that state will be visible to the CITIZEN even when the application reaches that state without Send Back, as the workflow is role-based.
To update the businessService for the Send Back feature, add the following state and action in the search response at the required places and then call the businessService update API. This assigns the UUID to the new state and action and creates the required references. The Resubmit action is added as an optional action for counter employees to take action on behalf of the citizen.
State json:
Action json:
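Neither of the original snippets is reproduced here; for illustration only, the new citizen-actionable state and the send-back action could look like the following, with placeholder state names, status values and role codes.

{
  "state": "CITIZENACTIONREQUIRED",
  "applicationStatus": "CITIZENACTIONREQUIRED",
  "isStateUpdatable": true,
  "docUploadRequired": false,
  "isStartState": false,
  "isTerminateState": false,
  "actions": [
    { "action": "RESUBMIT", "nextState": "APPLIED", "roles": ["CITIZEN", "COUNTER_EMPLOYEE"] }
  ]
}

{
  "action": "SENDBACKTOCITIZEN",
  "nextState": "CITIZENACTIONREQUIRED",
  "roles": ["APPROVER"]
}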
Each item in the above dropdown is displayed by adding an object in the link below -
For example:
{
"id": 1928,
"name": "rainmaker-common-tradelicence",
"url": "quickAction",
"displayName": "Search TL",
"orderNumber": 2,
"parentModule": "",
"enabled": true,
"serviceCode": "",
"code": "",
"path": "",
"navigationURL": "tradelicence/search",
"leftIcon": "places:business-center",
"rightIcon": "",
"queryParams": ""
}
id, url, displayName and navigationURL are mandatory properties.
The value of the URL property should be “quickAction” as shown above.
Accordingly, add the role-actions:
{
"rolecode": "TL_CEMP",
"actionid": 1928,
"actioncode": "",
"tenantId": "pb"
}
SLA slots and the background colour of the SLA days remaining on the Inbox page are defined in the MDMS configuration as shown above.
For example, if the maximum SLA is 30, there are 3 slots:
30 - 30*(1/3) = 20, so 20 - 30: will have the green colour defined
0 - 20: will have the yellow colour defined
< 0: will have the red colour defined
The colours are also configured in the MDMS.
For API /egov-workflow-v2/egov-wf/process/_transition: The field assignee of type User in ProcessInstance object is changed to a list of 'User' called assignees. User assignee --> List<User> assignees
For Citizen Sendback: When the action SENDBACKTOCITIZEN is called on the entity the module has to enrich the assignees with the UUIDs of the owners and creator of the entity.
Configure workflows for a new product
Workflow is defined as a sequence of tasks that have to be performed on an application/entity to process it. The egov-workflow-v2
is a workflow engine which helps in performing these operations seamlessly using a predefined configuration. We will discuss how to create this configuration for a new product in this document.
Before you proceed with the configuration, make sure the following pre-requisites are met -
egov-workflow-v2
service is up and running
Role action mapping is added for business service APIs
Create and modify workflow configuration according to the product requirements
Configure state-level as well as BusinessService-level SLA to efficiently track the progress of the application
Control access to perform actions through configuration
Deploy the latest version of the egov-workflow-v2 service.
Add businessService persister yaml path in persister configuration.
Add role action mapping for BusinessService APIs.
Overwrite the egov.wf.statelevel flag (true for state level and false for tenant level).
The Workflow configuration has 3 levels of hierarchy:
BusinessService
State
Action
The top-level object is BusinessService, which contains fields describing the workflow and a list of states that are part of the workflow. The businessService can be defined at the tenant level like pb.amritsar or at the state level like pb. All objects maintain an audit sub-object which keeps track of who created and last modified the record and when.
Each state object is a valid status for the application. The State object contains information about the state and what actions can be performed on it.
The action object is the last object in the hierarchy, it defines the name of the action and the roles that can perform the action.
The workflow should always start from the null state, as the service treats new applications as having null as the initial state. For example:
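The following is only an illustrative sketch of such a start state; the action name, next state and role code are placeholders.

{
  "state": null,
  "applicationStatus": null,
  "isStartState": true,
  "isTerminateState": false,
  "isStateUpdatable": true,
  "docUploadRequired": false,
  "actions": [
    { "action": "APPLY", "nextState": "APPLIED", "roles": ["CITIZEN"] }
  ]
}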
In the action object whatever nextState is defined, the application will be sent to that state. It can be to another forward state or even some backward state from where the application has already passed (generally, such actions are named SENDBACK)
SENDBACKTOCITIZEN is a special keyword for an action name. This action sends back the application to the citizen’s inbox for him to take action. A new State should be created on which Citizen can take action and should be the nextState of this action. While calling this action from the module assignees should be enriched by the module with the UUIDs of the owners of the application
For integration-related steps, refer to the document Setting Up Workflows.
Note: All the APIs are in the same Postman collection therefore the same link is added in each row.
The URL shortening service is used to shorten long URLs. There may be a requirement when we want to avoid sending very long URLs to the user via SMS, WhatsApp etc. This service compresses the URL.
Before you proceed with the configuration, make sure the following pre-requisites are met -
Prior knowledge of Java/J2EE
Prior knowledge of SpringBoot
Prior knowledge of REST APIs and related concepts like path parameters, headers, JSON etc.
Compress long URLs
Converted short URLs contain an id, which is used by this service to identify and retrieve the original long URLs.
Deploy the latest version of the URL Shortening service
POST /egov-url-shortening/shortener
Receives long URLs and converts them to shorter URLs. Shortened URLs contain the URL of the endpoint mentioned next. When a user clicks on a shortened URL, the user is redirected to the long URL.
GET /{id}
The shortened URL contains the path to this endpoint. The service uses the id embedded in the short URL to look up the long URL. In response, the user is redirected to the long URL.
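An indicative request body for POST /egov-url-shortening/shortener; the key name url is an assumption and should be verified against the service contract.

{
  "url": "https://example.com/citizen/application/ABC-2024-000123/track"
}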
To know more about regular expressions, refer to the articles https://towardsdatascience.com/regular-expressions-clearly-explained-with-examples-822d76b037b4 and Java Regular Expressions for Masks. To test regular expressions, refer to regex101: build, test, and debug regex.
Variable Name | Description |
---|---|
access.token.validity.in.minutes | Duration in minutes for which the authorization token is valid |
refresh.token.validity.in.minutes | Duration in minutes for which the refresh token is valid |
API | Description |
---|---|
/user/oauth/token | Used to start the session by generating an auth token and a refresh token from the username and password using grant_type as password. The same API can be used to generate a new auth token from a refresh token by using grant_type as refresh_token and sending the refresh token with the key refresh_token |
/user/_logout | This API is used to end the session. The access token and refresh token become invalid once this API is called. The auth token is sent as a param in the API call |
Property | Value | Remarks |
---|---|---|
egov.user.search.default.size | 10 | Default search record number limit |
citizen.login.password.otp.enabled | true | Whether citizen login is OTP based |
employee.login.password.otp.enabled | false | Whether employee login is OTP based |
citizen.login.password.otp.fixed.value | 123456 | Fixed OTP for citizen |
citizen.login.password.otp.fixed.enabled | false | Allow fixed OTP for citizen |
otp.validation.register.mandatory | true | Whether OTP is compulsory for registration |
access.token.validity.in.minutes | 10080 | Validity time of the access token |
refresh.token.validity.in.minutes | 20160 | Validity time of the refresh token |
default.password.expiry.in.days | 90 | Expiry period of a password |
account.unlock.cool.down.period.minutes | 60 | Account unlock time |
max.invalid.login.attempts.period.minutes | 30 | Window size for counting attempts for lock |
max.invalid.login.attempts | 5 | Max failed login attempts before the account is locked |
egov.state.level.tenant.id | pb | |
Variable Name | Description |
---|---|
serviceName | The module name to which the configuration belongs |
version | Version of the config |
description | Detailed description of the operations performed by the config |
fromTopic | Kafka topic from which data has to be persisted in the DB |
isTransaction | Flag to enable/disable performing operations in a transactional fashion |
query | Prepared statements to insert/update data in the DB |
basePath | JsonPath of the object that has to be inserted/updated |
jsonPath | JsonPath of the fields that have to be inserted in table columns |
type | Type of field |
dbType | DB type of the column in which the field is to be inserted |
Property | Value | Remarks |
---|---|---|
egov.core.notification.sms | | The topic name to which the notification SMS consumer subscribes. Any module wanting to integrate with this consumer should post data on this topic only. |
sms.provider.class | Generic | Decides which SMS provider is used by the service to send messages. Currently, Console, MSDG and Generic have been implemented. |
sms.provider.contentType | application/x-www-form-urlencoded | To configure a form-data or JSON API, set sms.provider.contentType=application/x-www-form-urlencoded or sms.provider.contentType=application/json respectively |
sms.provider.requestType | POST | Property to configure the HTTP method used to call the provider |
sms.provider.url | | URL of the provider. This will be given by the SMS provider only. |
sms.provider.username | egovsms | Username as provided by the provider, which is passed during the API call to the provider |
sms.provider.password | abc123 | Password as provided by the provider, which is passed during the API call to the provider. This has to be encrypted and stored |
sms.senderid | EGOV | SMS sender id provided by the provider; this shows up as the sender on the receiver's phone |
sms.config.map | {'uname':'$username', 'pwd': '$password', 'sid':'$senderid', 'mobileno':'$mobileno', 'content':'$message', 'smsservicetype':'unicodemsg', 'myParam': '$extraParam' , 'messageType': '$mtype'} | Map of parameters to be passed to the API provider. This is provider-specific. $username maps to sms.provider.username, $password maps to sms.provider.password, $senderid maps to sms.senderid, $mobileno maps to the mobileNumber from the message fetched from Kafka, $message maps to the message from the message fetched from Kafka, and $<name> (any variable not in the above list) is first checked in sms.category.map, then in application.properties, and then in the environment variables with the name in full upper case and _ replacing -, space or . |
sms.category.map | {'mtype': {'*': 'abc', 'OTP': 'def'}} | Replaces any value in sms.config.map |
sms.blacklist.numbers | 5*,9999999999,88888888XX | For blacklisting, a "," separated list of numbers or number patterns. To use patterns, use X for any digit match and * for any number of digits match |
sms.whitelist.numbers | 5*,9999999999,88888888XX | For whitelisting, a "," separated list of numbers or number patterns. To use patterns, use X for any digit match and * for any number of digits match |
sms.mobile.prefix | 91 | Prefix to add to the mobile number coming in the message queue |
API Postman Collection
Input Field | Description | Mandatory | Data Type |
---|---|---|---|
tenantId | Unique id for a tenant | Yes | String |
mobileNumber | Mobile number of the user | Yes | String |
type | OTP type ex: login/register/password reset | Yes | String |
userType | Type of user ex: Citizen/Employee | No | String |
Attributes | Description |
---|---|
tenantId | The tenantId (ULB code) for which the workflow configuration is defined |
businessService | The name of the workflow |
business | The name of the module which uses this workflow configuration |
businessServiceSla | The overall SLA to process the application (in milliseconds) |
state | Name of the state |
applicationStatus | Status of the application when in the given state |
docUploadRequired | Boolean flag representing if documents are required to enter the state |
isStartState | Boolean flag representing if the state can be used as the starting state in the workflow |
isTerminateState | Boolean flag representing if the state is the leaf node or end state in the workflow configuration (no actions can be taken on states with this flag set to true) |
isStateUpdatable | Boolean flag representing whether data can be updated in the application when taking action on the state |
currentState | The current state on which the action can be performed |
nextState | The resultant state after the action is performed |
roles | A list containing the roles which can perform the actions |
auditDetails | Contains fields to audit edits on the data (createdTime, createdBy, lastModifiedTime, lastModifiedBy) |
Environment Variable | Description |
---|---|
host.name | Host name to append in the short URL |
db.persistance.enabled | When this boolean flag is set to TRUE, the short URL is stored in the database |
Configure workflows as per requirements
Workflows are a series of steps that move a process from one state to another through actions performed by different kinds of actors - humans, machines, time-based events etc. - to achieve a goal such as onboarding an employee, approving an application or granting a resource. The egov-workflow-v2 is a workflow engine which helps in performing these operations seamlessly using a predefined configuration.
Before you proceed with the documentation, make sure the following pre-requisites are met -
Java 17
Kafka server is up and running
egov-persister service is running and has the workflow YAML added to the persister config path
PostgreSQL server is running and a database is created to store workflow configuration and data
Always allow anyone with a role in the workflow state machine to view the workflow instances and comment on them.
On the creation of a workflow instance, it appears in the inbox of all employees that have roles which can perform any state-transitioning action in that state.
Once an instance is marked to an individual employee, it appears only in that employee's inbox, although point 1 still holds true and all others participating in the workflow can still search it and act if they have the necessary action available to them.
If the instance is marked to a person who cannot perform any state-transitioning action, they can still comment/upload and mark it to anyone else.
Overall SLA: SLA for the complete processing of the application/Entity
State-level SLA: SLA for a particular state in the workflow
Deploy the latest version of eGov-workflow-V2 service
Note: This video will give you an idea of how to deploy any DIGIT service. Further, you can find the latest builds for each service in our latest release document here.
Add BusinessService Persister YAML path in persister configuration
Add Role-Action mapping for BusinessService APIs
Overwrite the egov.wf.statelevel flag ( true for state level and false for tenant level)
Create businessService (workflow configuration) according to product requirements
Add Role-Action mapping for /processInstance/_search API
Add workflow persister yaml path in persister configuration
For configuration details, refer to the links in Reference Docs.
The workflow configuration can be used by any module which performs a sequence of operations on an application/entity. It can be used to simulate and track processes in organisations to make them more efficient and increase accountability.
Role-based workflow
An easy way of writing rules
File movement within workflow roles
To integrate, the host of eGov-workflow-v2 should be overwritten in the helm chart.
/process/_search should be added as the search endpoint for searching workflow process Instance objects.
/process/_transition should be added to perform an action on an application. (It’s for internal use in modules and should not be added in Role-Action mapping).
The workflow configuration can be fetched by calling _search API to check if data can be updated or not in the current state.
Note: All the APIs are in the same Postman collection therefore the same link is added in each row.
Environment Variables | Description |
---|---|
egov.wf.default.offset | The default value of offset in search |
egov.wf.default.limit | The default value of limit in search |
egov.wf.max.limit | The maximum number of records that are returned in the search response |
egov.wf.inbox.assignedonly | Boolean flag; if set to true, the default search returns only records assigned to the user, and if false it returns all records based on the user's role. (The default search is the search call when no query params are sent; records are returned based on the RequestInfo of the call, and it is used to show applications in the employee inbox.) |
egov.wf.statelevel | Boolean flag set to true if a state-level workflow is required |