A few core service guides
To develop a new service, one needs to create a microservice and make it available through the API gateway. The API gateway calls the for authentication and the for authorisation. The service developer can configure the roles and map the roles and actions using the .
The service user interface can be developed as part of the Citizen Dashboard or can be an independent solution. The citizen can log in using a mobile number and OTP. They can apply for a new service using the service UI. The allows users to upload relevant documentation.
The stores the submitted application asynchronously into the registry. The PII data is encrypted using the before storing. All changes are digitally signed and logged using the (ongoing). The transforms and enriches the application data. The also strips the PII data and sends it to the .
The executes configured queries on the stripped data and makes the aggregated data available to the for the administrator or employee view. The views are in accordance with the user role access which is also configurable.
The generates a demand based on the calculation logic for the given service delivery. Based on this demand, a bill is generated against which the payment has to be made. The citizen can either make an online payment or can pay at the counter (offline payment). The is called for online payments and this is integrated with third-party service providers. The service routes the citizen to the service provider's website and then back to the citizen's UI once the payment is successful.
The is the payment registry and records all the successful payments. For offline payments, a record is made in the collection service after the collection of the Cash/Cheque/DD/RTGS. The allowed payment modes are configurable. The PDF service is used to generate receipts based on a configurable template.
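The demand-bill-payment flow described above can be sketched as follows. This is an illustrative model only: the function names, payment modes and amounts are hypothetical and do not reflect the actual DIGIT billing or collection service APIs.

```python
# Hypothetical sketch of the demand -> bill -> payment flow; names are
# illustrative, not the actual DIGIT billing/collection API.

def generate_demand(base_charge: float, rebates: float, penalties: float) -> float:
    """Compute the amount due for a service period from configurable calculation logic."""
    return base_charge - rebates + penalties

def generate_bill(demand_amount: float) -> dict:
    """Raise a bill against the demand; payments are recorded against this bill."""
    return {"amountPayable": demand_amount, "status": "ACTIVE"}

def record_payment(bill: dict, amount: float, mode: str = "ONLINE") -> dict:
    """Mark the bill paid once the full amount is received (online or at the counter)."""
    if mode not in {"ONLINE", "CASH", "CHEQUE", "DD", "RTGS"}:
        raise ValueError("payment mode not in configured modes")
    if amount >= bill["amountPayable"]:
        bill["status"] = "PAID"
    return bill

bill = generate_bill(generate_demand(500.0, rebates=50.0, penalties=10.0))
paid = record_payment(bill, 460.0, mode="CASH")
```

Online and offline payments differ only in the mode recorded; both close the bill once the full amount is collected.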
The service then triggers the to assign tasks for verification and approval. Workflows can be configured. The Employee Service allows employee registrations and enables them to log in using the Employee Dashboard. The dashboard displays the list of pending applications as per the employee's role. The employee can perform actions on these applications using the employee UI for the service.
As the status changes or actions are taken on the applications the relevant sends the updates to the applicant. Once the application is approved, the applicant can download the final certificate which is generated using .
DIGIT has multi-tenant configuration capabilities. The allows the configuration of the tenant hierarchy and maps multiple tenants like Country, State, District or State, Department, and Sub Department. Each tenant can have their own configurations for service, roles, workflows etc. This allows for variations across different agencies in tune with the local context.
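The tenant hierarchy described above can be modelled as a simple parent-pointer structure. The tenant codes and dictionary shape below are hypothetical, not DIGIT's actual tenant master schema.

```python
# Illustrative multi-tenant hierarchy: Country -> State -> District.
# Tenant codes and the dict shape are assumptions for this sketch only.
TENANTS = {
    "in": {"parent": None, "level": "Country"},
    "in.punjab": {"parent": "in", "level": "State"},
    "in.punjab.amritsar": {"parent": "in.punjab", "level": "District"},
}

def ancestors(code: str) -> list:
    """Walk up the tenant hierarchy from a tenant to the root tenant."""
    chain = []
    while code is not None:
        chain.append(code)
        code = TENANTS[code]["parent"]
    return chain
```

Per-tenant configuration (services, roles, workflows) can then be resolved by walking this chain, letting a child tenant override its parent's defaults.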
The facilitates the support of multiple languages in DIGIT. This service stores the translations in multiple languages.
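Conceptually, the localisation service behaves like a lookup keyed by locale and message code, with a fallback to a default locale. The message store below is a hypothetical sketch, not the actual service schema.

```python
# Hypothetical message store keyed by (locale, code); not the actual
# localisation service data model.
MESSAGES = {
    ("en_IN", "APP.SUBMITTED"): "Application submitted",
    ("hi_IN", "APP.SUBMITTED"): "आवेदन जमा किया गया",
}

def localize(code: str, locale: str, default_locale: str = "en_IN") -> str:
    """Return the translation for the requested locale, falling back to the default,
    and finally to the raw code if no translation exists."""
    return MESSAGES.get((locale, code), MESSAGES.get((default_locale, code), code))
```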
The foundational approach and design principles of the DIGIT platform
The DIGIT platform design is based on the following key principles:
Single Source of Truth - Shared data registries ensure data is updated and reused without duplicating transactional data. DIGIT registries offer standard API access that multiple service providers can leverage.
Security and Privacy - DIGIT is designed to be secure by default featuring configurable role-based access. Personally Identifiable Information (PII) is encrypted and inaccessible by analytical systems.
Scalability - DIGIT is built for scale. Its event-driven asynchronous microservices architecture enables DIGIT to handle millions of service requests with ease.
Flexible - DIGIT adopts a multi-user approach. Users (agencies or service providers) have the provision to configure their master data and workflows. They also have the flexibility to develop and implement custom service versions.
Modular - DIGIT components are designed as modular microservices with interoperable APIs. Each service can be scaled, updated and deployed autonomously independent of others.
Interoperable - API first design and use of Open Standards wherever possible ensures DIGIT is interoperable with other systems. Data can be exchanged seamlessly as and when required.
Ease of Access - DIGIT enables easy access to all stakeholders including citizens, employees, administrators, developers or system administrators. Data available through API can be routed to specific applications on the web, mobile, or chat.
Open - DIGIT is open source and uses open source technologies and standards to ensure that data and core services are never vendor locked-in. MIT License gives rights to any agency or its partners to copy the source code and deploy it for their needs.
List and details of technologies used to develop and deploy DIGIT
DIGIT core services are developed using the following technologies -
Click the links below to learn more about the open-source tools and API gateways used in DIGIT.
Open Source Tools
Complete list of tool stack used in DIGIT with licence details
API Gateway
Details of the API gateway used in DIGIT
Continuous Integration/Continuous Deployment (CI/CD) software is the backbone of the modern DevOps environment. bridges the gap between development and operations teams by automating the build, test, and deployment of applications.
Authorisation Management
The value proposition for DIGIT stakeholders
Governments have used DIGIT to transform the way they deliver public services.
By establishing a digital public infrastructure (DPI), multiple states have built capacity, removed administrative burdens and improved governance for faster, better, and cheaper delivery of public services.
Governments face multiple challenges in transformation initiatives: capacity issues, emergent needs and behaviours, and variance in context and requirements at different scales.
DIGIT was designed ground-up to address such challenges and has evolved over multiple years of implementations to solve for accelerated reforms.
Governments need timely and trustworthy data for effective decision-making at different administrative levels and to fix accountability. DIGIT comes ready with data dashboards to enable decision-making at all levels of government, right from administrators to frontline workers.
DIGIT makes possible:
Interoperability
Easy sharing of data
Single source of truth
This ensures that products built on DIGIT enhance visibility, increase trust and reduce the cost of coordination across multiple departments and programs.
DIGIT reduces overheads for technology teams and the learning curve for end users of the final product.
Single login for multiple applications and a standardised user interface make adoption quicker and easier. Role-based access takes away the overheads of multiple environments for different systems and multiple user accounts for each system.
For employees, it removes tedious, repetitive tasks by automating processes, improving productivity and allowing them to focus on more meaningful aspects of their jobs.
Scalability is at the core of the DIGIT platform. It has been deployed as a digital public infrastructure serving large-scale sub-national and national populations.
What makes it possible? A technologically robust platform with open, freely available specifications and standardised APIs that have proven their performance in massive deployments at population scale.
DIGIT's source code is open source, data is stored in shared registries, and data stores are owned by the government. This mitigates the risks associated with vendor lock-in and allows digital transformation initiatives to move ahead with speed at scale.
Products built on DIGIT can be accessed through multiple channels, so services can be delivered to citizens via mobile, web, WhatsApp, chatbots, through intermediaries, or over the counter in physical offices. All channels are seamlessly connected.
It comes ready with a well-defined design system with a standardised user interface that has been created considering the realities of population-scale digital products. The user interface is straightforward, minimal and follows a guided approach for citizens using these digital products.
All products on DIGIT are designed to be configured, customised, and extended to suit the needs of the local context. These changes can be made at different levels of government.
A federated architecture supports the different realities of each unit of governance.
A set of reusable building blocks can be leveraged by market players to build products and services rapidly. Well-defined specifications, documentation, guides, and training materials make deployment quick and easy for teams. This makes it possible to build products for different programs, services and urgent needs, and take them to the ground as fast as possible.
The DIGIT team works actively with partners and has created a robust ecosystem of application developers and system integrators that are trained and certified in DIGIT.
Open training sessions on design, development, deployment and implementation are held regularly. In addition to that, DIGIT provides certifications to trained personnel.
For governments using DIGIT as their technology platform, the DIGIT team offers:
Advisory support in vendor selection
Product design review, architecture design review
Enablement of system integrators selected by the government
Open training sessions | Certifications | Implementation guidance | Platform performance review
DIGIT promises long-term support of the platform for all its core components and also underwrites the performance of the core platform.
DIGIT has the capabilities needed to rapidly develop digital applications to address global development challenges. From its origin in solving for urban governance, the platform has rapidly evolved to address challenges in areas of sanitation, public health welfare, public finance, rural governance and legal case management among others.
DIGIT is certified as a Digital Public Good (DPG) by the Digital Public Goods Alliance (DPGA). The DIGIT team sits on the technical advisory committee of GovStack, which works to identify and support the advancement of digital public goods relevant to a whole-of-government transformation approach.
Platform architecture details
Digital Infrastructure for Governance and Inclusive Transformation
Digital Infrastructure for Governance and Inclusive Transformation (DIGIT) is an open-source, scalable, interoperable platform for responsive public service delivery and good governance. It enables government agencies to digitize public service delivery - providing unified interfaces for citizens, front-line employees and administrators to exchange information with each other in a seamless and trusted manner. DIGIT is an open-source platform built for scale. DIGIT is multi-tenant and can enable the digital transformation of multiple government agencies at speed and scale using a common shared infrastructure.
Government agencies deliver and manage a host of public services, and each is increasingly leveraging information technology to automate and manage the delivery of these services. Inadvertently, they end up building systems that share common functionality and duplicate similar data sets, which then fall out of sync. This leads to several challenges in the long term.
Lack of Single Source of Truth - Multiple systems with similar datasets leads to multiple sources of truth.
Siloed - These systems often don't talk to each other, so citizens have to run from pillar to post to get the same data updated in multiple systems in order to receive services and benefits. Administrators struggle to get access to data to identify areas of improvement, and employees have to learn to work with multiple systems to deliver services.
Scalability - Many of these systems are built for internal users and cannot be opened up for direct access to citizens. This makes it difficult for citizens to access their own data.
Vendor Lock-In - Often these systems have been developed by external vendors, and government agencies do not have the capacity to take them over, leading to vendor lock-in.
Multiple other challenges like security concerns and limited technical capabilities hinder the government's ability to effectively utilise technology for large-scale public service delivery.
The DIGIT platform is developed using sound architectural principles to enable multiple government agencies to deliver public services that are inclusive and accessible to all citizens and businesses, at scale and speed. DIGIT provides a shared infrastructure consisting of shared data registries and common services like authentication, authorisation, workflow, notification, and payments, which enable agencies to develop and deliver services in a secure, scalable and cost-effective manner.
A typical public service delivery starts with a citizen applying for a service or a service connection. The diagram below abstracts the typical information exchange flow required to enable the delivery of a service.
A digital system offers the following capabilities to improve citizen experience:
Register/Login - Citizens can register, sign in, or authenticate themselves. Single sign-on capability streamlines access to digital services.
Apply/Upload - Citizens can request services by completing an application form and uploading necessary documents. Services should be accessible through multiple channels and in various languages.
Pay/Bill - Citizens can make payments for services and download bills. Multiple payment options, including digital and cash, should be available.
Inform/Track - Citizens can track service progress or receive updates via SMS, email, or app notifications.
Support/Feedback - Citizens can raise support requests or provide feedback on service closure. The system should allow configuring surveys and feedback forms, with the ability to analyse responses for corrective action.
The following capabilities support the employee and management experience:
Assign - Employees or vendors can assign service applications or trigger workflows tailored to different user/agency local requirements.
Search/View - Employees or vendors can search for and view applications based on multiple attributes.
Update/Comment - Employees can update and comment on service applications, facilitating structured and unstructured data exchange between parties.
Deliver/Verify - Employees or vendors can deliver services, such as setting up a new connection, or verify and certify information provided by citizens. Offline-capable apps are crucial in low mobile coverage areas, with data synchronization capabilities to handle load spikes.
Monitor/Improve - Administrators can monitor service performance and identify areas for improvement. All transactions should generate data events in an analytical database for comprehensive performance monitoring.
Plan/Budget - Administrators and policymakers can plan and budget for service improvement, linking expenditures to outcomes.
All services built on DIGIT emit micro-level transactional data into an analytical database. This is key to real-time monitoring and auditing of operations, and it facilitates planning, budgeting and policy decision-making.
DIGIT is proven and well-supported serving over 1000 cities. Ecosystem partners including System Integrators are using DIGIT to build and deliver services. DIGIT Core LTS 2.9 version comes with long-term support. This implies that any technical issues like security or performance reported on any of these services in the DIGIT Core 2.9 LTS version will be fixed by the DIGIT team.
Infrastructure resources required to deploy DIGIT
Refer to the page.
DIGIT offers a set of (i.e. services) that can be leveraged to develop any while adhering to the . Additionally, DIGIT provides optional accelerators like DIGIT UI and Dashboard Framework that can be extended to create citizen, employee and administrator dashboards.
The diagram below illustrates how reusable building blocks interact to realise a typical public service delivery. Leveraging reusable building blocks not only speeds up delivery but also ensures well-structured digital services. The interactions between these services are detailed in the .
DIGIT is easy to install, configure and use. To install DIGIT on your servers use the . To design and develop a new service on the DIGIT Platform go through the and . You can also take up the DIGIT Certification through the .
If you want to experience DIGIT or develop on DIGIT, you can also access the DIGIT demo using .
DIGIT comes with several accelerators. These include and . These are optional. You may choose to develop or reuse your UI framework and Integration Adapters.
We organize several events for architects and developers on DIGIT. Keep a watch on the Page for upcoming events.
Several volunteers are contributing to DIGIT in many ways. If you want to volunteer, check out our page.
Various infrastructure resources required for DIGIT deployment can be provisioned manually from the AWS cloud. However, best practice is to maintain consistency, automation, auditability, reusability, and cost forecasting. The use of infrastructure-as-code tools and templates works well to this end.
DIGIT being an open-source platform, all the tools and tech stack used to build, deploy and operate DIGIT are also open-source and community editions. The various tools are listed below with the specific versions used and a short description of each.
| Tool | Version(s) | Description |
| --- | --- | --- |
| Kafka | 3.6.1 | Apache Kafka is an open-source distributed event streaming platform capable of handling trillions of events a day. |
| Elasticsearch | 8.11.3 | Elasticsearch is a distributed, free and open search and analytics engine for all types of data, including textual, numerical, geospatial, structured and unstructured. |
| Kibana | 8.11.3 | Kibana is a free and open frontend application that sits on top of the Elastic Stack, providing search and data visualization capabilities for data indexed in Elasticsearch. |
| PostgreSQL | 14.0 or later | PostgreSQL is a powerful, open-source object-relational database system with over 30 years of active development that has earned it a strong reputation for reliability, feature robustness, and performance. |
| Redis | 7.2 | Redis is an open-source (BSD-licensed), in-memory data structure store used as a database, cache, and message broker. It provides data structures such as strings, hashes, lists, sets, sorted sets with range queries, bitmaps, hyperloglogs, geospatial indexes, and streams. |
| Jaeger | 1.52 | Jaeger is open-source software for tracing transactions between distributed services, used for monitoring and troubleshooting complex microservices environments. |
| OpenJDK | 17 | OpenJDK is completely open source and can be used freely. |
| Spring Boot | 3.2.2 | Spring Boot is an open-source micro framework maintained by Pivotal. It gives Java developers a platform for auto-configurable, production-grade Spring applications. |
| React | 16.7.0 | React is one of Facebook's first open-source projects; it is under very active development and is used to ship code to everybody on facebook.com. |
| Material-UI | 16.8.0 | Material-UI CE (Community Edition) has been 100% open source (MIT) since the very beginning, and always will be. Developers can ensure Material-UI is the right choice for their React applications through Material-UI's community maintenance strategy. |
| Node.js | 14.0 / 8.4.0 | Node.js is an open-source, cross-platform JavaScript runtime environment. It executes JavaScript code outside of a browser. |
| Kubernetes | 1.30 / 1.27.x | Kubernetes, also known as K8s, is an open-source system for automating deployment, scaling, and management of containerized applications. |
| Docker | 24.0.6 / 19.x | Docker, a subset of the Moby project, is a software framework for building, running, and managing containers on servers and the cloud. |
| Helm | 3.6.3 / 3.x.x | Helm helps you manage Kubernetes applications; Helm Charts help you define, install, and upgrade even the most complex Kubernetes application. |
| Terraform | 1.8.2 / v1.5.7 | Terraform allows infrastructure to be expressed as code in a simple, human-readable language called HCL (HashiCorp Configuration Language). |
| Jenkins | 2.306 / 2.289 | Jenkins is an open-source automation server which enables developers around the world to reliably build, test, and deploy their software. |
| Go | 1.21.2 / 1.13.3 | Go is an open-source programming language that makes it easy to build simple, reliable, and efficient software. |
| Groovy | 3.0 | Apache Groovy is a powerful, optionally typed and dynamic language, with static-typing and static compilation capabilities, for the Java platform, aimed at improving developer productivity thanks to a concise, familiar and easy-to-learn syntax. |
| Python | 3.8.6 (PSF) | Python software and documentation are licensed under the PSF License Agreement. |
| sops | | sops is an editor of encrypted files that supports YAML, JSON, ENV, INI and BINARY formats and encrypts with AWS KMS, GCP KMS, Azure Key Vault, age, and PGP. |
| YAML | 1.2 | YAML is a human-readable data serialization standard that can be used in conjunction with all programming languages and is often used to write configuration files. |
DIGIT offers several API collections that help integrate services on the platform. The table below lists the DIGIT Core Service APIs and links to the related service configuration docs.
Create, Update, Search, Password Update, Reset Password
Create new roles, Update existing roles, Search for a list of roles based on role codes, Create new actions, Update existing actions, Map roles and features, Map roles and actions, Validate tenant actions & roles
Add employee, Update employee data, Generate count for list of active & inactive employees, Search for employees
Search list of boundaries, search geographical boundaries, Search for tenant id based on latitude and longitude
Localise messages, Update localised messages, Delete localised messages, Get messages by locale and tenant ID
Encrypt given values, Decrypt given values, Provide signature for given values, Verify signature, Deactivate keys for given tenant and generate new key
Index records, Reindex records, Start legacy index job to reindex records
Upload files to servers, Search file url based on tenantid and filestoreid, Search file url based on tenantid and tag name, Get metadata of file based on tenantID and filestoreID
Record payments in the system, Perform workflow actions on payment, Validate payment request, Search payment based on given search criteria
Create/Update master data, Get master data for listed tenantID or module
Generate new ID
Shorten URL, Redirects user to the original URL
Create/Update/Search new business service, Create new workflow entry, Get the list of workflows, Get count of applications
Create/Validate/Search OTP configuration entry
Create/Search pdf
Create new payment instruction / Update existing payment /Retrieve current status of payment
Decision support systems - manage and facilitate data ingestion
DIGIT building blocks (as in LEGO pieces)
MDMS-client
Services-common
Configure role based user access and map actions to roles
DIGIT is an API-based platform where each API denotes a DIGIT resource. The primary job of Access Control Service (ACS) is to authorise end-users based on their roles and provide access to the DIGIT platform resources. Access control functionality is essentially based on the following points:
Actions: Actions are events performed by a user. An action can be an API endpoint or a front-end event. Actions are defined in an MDMS master.
Roles: Roles are assigned to users. A single user can hold multiple roles. Roles are defined in MDMS masters.
Role-Action: A mapping between roles and actions. Based on this mapping, the Access Control Service identifies the actions applicable for a user's roles.
Before you proceed with the configuration, make sure the following pre-requisites are met -
Java 17
Serve the applicable actions for a user based on user roles.
On each action performed by a user, access control looks at the user's roles and validates actions that map with the role.
Support tenant-level role action - For instance, an employee from Amritsar can have the role of APPROVER for other ULBs like Jalandhar and hence will be authorised to act as APPROVER in Jalandhar.
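A minimal sketch of the tenant-scoped role-action check described above. The mapping shape, role names and action URIs are illustrative, not the actual MDMS master format.

```python
# Illustrative role-action mapping; the actual master data lives in MDMS.
ROLE_ACTIONS = {
    # role -> set of permitted action URIs
    "CITIZEN": {"/pgr/_create", "/pgr/_search"},
    "APPROVER": {"/pgr/_search", "/pgr/_approve"},
}

def is_authorized(user_roles, action_uri: str, tenant: str) -> bool:
    """Allow the request if any of the user's roles *for that tenant*
    maps to the requested action (tenant-level role action)."""
    # user_roles: list of (tenant, role) pairs, e.g. [("pb.jalandhar", "APPROVER")]
    return any(
        action_uri in ROLE_ACTIONS.get(role, set())
        for t, role in user_roles
        if t == tenant
    )

# An employee who is a CITIZEN in Amritsar but an APPROVER in Jalandhar:
roles = [("pb.amritsar", "CITIZEN"), ("pb.jalandhar", "APPROVER")]
```

This mirrors the Amritsar/Jalandhar example: the same user is authorised to approve only in the tenant where the APPROVER role is held.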
Define the roles:
Add the actions (URL)
Add the role action mapping
Any service that requires authorisation can leverage the functionalities provided by the access control service.
To add a new service to the platform, simply update its role action mapping in the master data. The Access Control Service will handle authorisation each time the microservice API is called.
The service needs to call the /actions/_authorize API of the Access Control Service to check authorisation for any request.
service is up and running
the latest version of the Access Control Service
Note: This video will give you an idea of how to deploy any Digit-service. Further, you can find the latest builds for each service in our latest here.
service to fetch the Role Action Mappings
The details about the fields in the configuration can be found in the
To integrate with the Access Control Service the has to be configured (added) in the MDMS service.
/actions/_authorize
The objective of the audit service is listed below -
To provide a one-stop framework for signing data i.e. creating an immutable data entry to track activities of an entity. Whenever an entity is created/updated/deleted the operation is captured in the data logs and is digitally signed to protect it from tampering.
Prior knowledge of Java/J2EE
Prior knowledge of SpringBoot
Prior knowledge of PostgreSQL
Prior knowledge of REST APIs and related concepts like path parameters, headers, JSON etc.
The audit service parses all the persister configs so that it can process data received by the persister and create audit logs from it.
Step 1: Add the following metrics to the existing persister configs -
Step 2: If a custom implementation of the ConfigurableSignAndVerify interface is present, provide the signing algorithm implementation name as part of the audit.log.signing.algorithm property. For example, if the signing algorithm is HMAC, the property will be set as follows -
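Assuming standard Spring-style property syntax, the HMAC example described above would look something like this (the key comes from the text; the exact value format is an assumption):

```properties
audit.log.signing.algorithm=HMAC
```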
Step 3: Set the egov.persist.yml.repo.path property to the location of the persister configs.
Step 4: Run the audit-service application along with the persister service.
Definitions
Config file - A YAML (xyz.yml) file which contains persister configuration for running audit service.
API - A REST endpoint to post audit logs data.
When the audit-service create API is hit, it validates the request size, keyValueMap and operationType.
Upon successful validation, it chooses the configured signer and signs the entity data.
Once the audit logs are signed and ready, they are sent to the audit-create topic.
Persister will listen on this topic and persist the audit logs.
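The sign-then-verify step above can be sketched with HMAC, one of the algorithms named in this section. The payload shape and key handling are illustrative only; a real deployment would manage signing keys securely rather than hard-coding them.

```python
import hashlib
import hmac
import json

SECRET = b"audit-signing-key"  # illustrative; real deployments manage keys securely

def sign_audit_log(entry: dict) -> dict:
    """Attach an HMAC-SHA256 signature over the canonical JSON form of the entry."""
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["signature"] = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return entry

def verify_audit_log(entry: dict) -> bool:
    """Recompute the HMAC over everything except the signature and compare;
    any tampering with the logged fields changes the digest."""
    body = {k: v for k, v in entry.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, entry["signature"])
```

Because the signature covers the canonical serialisation of the whole entry, a verifier can detect any post-hoc edit to the audit record.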
Add the required keys for enabling audit service in persister configs.
Deploy the latest version of the Audit service and Persister service.
Add Role-Action mapping for APIs.
The audit service is used to push signed data for tracking each and every create/modify/delete operation done on database entities.
Can be used to have tamper-proof audit logs for all database transactions.
Replaying events in chronological order will lead to the current state of the entity in the database.
To integrate, the host of the audit-service module should be overwritten in the helm chart.
audit-service/log/v1/_create should be added as the create endpoint for the config added.
audit-service/log/v1/_search should be added as the search endpoint for the config added.
1. URI: The format of the API to be used to create audit logs using the audit service is as follows: audit-service/log/v1/_create
Body: The body consists of 2 parts: RequestInfo and AuditLogs.
Sample Request Body -
2. URI: The format of the API to be used to search audit logs using audit-service is as follows: audit-service/log/v1/_search
Body: The body consists of RequestInfo; the search criteria are passed as query params.
Sample curl for search -
Postman Collection -
Play around with the APIs:
The signed audit service provides a one-stop framework for signing data i.e. creating an immutable data entry to track activities of an entity. Whenever an entity is created/updated/deleted the operation is captured in the data logs and is digitally signed to protect it from tampering.
Infra configuration -
Number of concurrent users - 15000
Duration of bombarding service with requests ~ 17 minutes
Number of signed audit pod(s) - 1
Boundary service provides APIs to create Boundary entities, define their hierarchies, and establish relationships within those hierarchies. You can search for boundary entities, hierarchy definitions, and boundary relationships. However, you can only update boundary entities and relationships.
Prior knowledge of Java/J2EE.
Prior knowledge of Spring Boot.
Prior knowledge of REST APIs and related concepts like path parameters, headers, JSON etc.
Prior knowledge of Git.
Advanced knowledge of working with JSON data would be an added advantage in understanding the service.
Create Boundary entity: It introduces functionality to define your boundary entity with all validations and properties supported by GeoJSON. Currently, only the Polygon and Point geometry types are supported.
Search Boundary entity: It has APIs to search boundaries based on the tenantid & codes, both being mandatory.
Update Boundary entity: This allows updating the geometry details of a boundary entity.
Create Boundary Hierarchy-definition: It allows defining boundary hierarchy definitions against a particular tenantId and hierarchyType which then can be referenced while creating boundary relationships.
Search Boundary Hierarchy-definition: boundary-service supports searching for hierarchy definitions based on tenantId and HierarchyType where tenantId is mandatory. In case the hierarchyType is not provided, it returns all hierarchy definitions for the given tenantId.
Create Boundary Relationship: It supports defining relationships between existing boundary entities according to the established hierarchy. It requires the tenantId, code, hierarchyType, boundaryType, and parent fields. Here, tenantId and code combine to uniquely determine a boundary entity, while tenantId and hierarchyType combine to define the hierarchy used in establishing the relationship between the boundary entity and its parent. The service verifies whether the parent relationship is already established before creating a new one, and it checks that the specified boundaryType is a direct descendant of the parent boundaryType according to the hierarchy definition.
Search Boundary Relationship: This functionality supports searching the boundary relationships based on the given params -
tenantId
hierarchyType
boundaryType
codes
includeChildren
includeParents
where tenantId and hierarchyType are mandatory and the rest are optional.
Update Boundary Relationship: This allows updating the parent boundary relationship within the same level as per the hierarchy.
/boundary/_create - Takes RequestInfo and Boundary in the request body; Boundary has all the attributes that define the boundary.
/boundary/_search - Takes RequestInfo in the request body and the search criteria fields (refer to the functionality above for the exact fields) as params, and returns the boundaries matching the provided search criteria.
/boundary/_update - Takes RequestInfo and Boundary in the request body, where Boundary has all the information that needs to be updated, and returns the updated boundary.
/boundary-hierarchy-definition/_create - Takes RequestInfo and the boundary hierarchy definition in the request body, where the BoundaryHierarchy object has all the information for the hierarchy definition being created.
/boundary-hierarchy-definition/_search - Takes RequestInfo and BoundaryTypeHierarchySearchCriteria in the request body and returns boundary hierarchy definitions matching the provided search criteria.
/boundary-relationships/_create - Takes RequestInfo and BoundaryRelationship in the request body, where BoundaryRelationship has all the information required to define a relationship between two boundaries.
/boundary-relationships/_search - Takes RequestInfo in the request body and the search criteria fields (refer to the functionality above for the exact fields) as params, and returns the boundary relationships matching the provided search criteria.
/boundary-relationships/_update - Takes RequestInfo and BoundaryRelationship in the request body, updates the fields given in BoundaryRelationship, and returns the updated data.
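As an illustration of the request shape, a /boundary/_create call might carry a body like the following. This is a sketch: beyond RequestInfo, Boundary, tenantId, code, and the GeoJSON geometry described above, the field names and coordinate values are illustrative placeholders, not the authoritative contract.

```json
{
  "RequestInfo": {
    "apiId": "Rainmaker",
    "authToken": "replace-with-auth-token"
  },
  "Boundary": [
    {
      "tenantId": "pb",
      "code": "CITY_A",
      "geometry": {
        "type": "Polygon",
        "coordinates": [
          [[75.8, 30.9], [75.9, 30.9], [75.9, 31.0], [75.8, 30.9]]
        ]
      }
    }
  ]
}
```

Note that, per the GeoJSON convention, a Polygon ring must be closed, i.e. its first and last coordinates are identical.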
Rate limiting in gateways is a crucial configuration to manage traffic and ensure service availability. By implementing a rate limiter, we can control the number of requests a client can make to the server within a specified time frame. This protects the underlying services from being overwhelmed by excessive traffic, whether malicious or accidental.
The configuration typically involves -
Replenish Rate: The rate at which tokens are added to the bucket. For example, if the replenish rate is 2 tokens per second, two tokens are added to the bucket every second.
Burst Capacity: The maximum number of tokens that the bucket can hold. This allows for short bursts of traffic.
KeyResolver: A KeyResolver is an interface used to determine a key for rate-limiting purposes.
Let's say we have a rate limiter configured with:
replenishRate: 2 tokens per second
burstCapacity: 5 tokens
This means:
2 tokens are added to the bucket every second.
The bucket can hold a maximum of 5 tokens.
Scenario: A user makes requests at different intervals.
Initial State: The bucket has 5 tokens (full capacity).
First Request: The user makes a request and consumes 1 token. 4 tokens remain.
Second Request: The user makes another request and consumes 1 more token. 3 tokens remain.
Third Request: The user waits 1 second (2 tokens added) and then makes a request. The bucket has 4 tokens (3 remaining + 2 added - 1 consumed).
Let's consider a scenario where a user makes multiple requests in quick succession.
Configuration:
replenishRate: 1 token per second
burstCapacity: 3 tokens
Scenario: A user makes 4 requests in rapid succession.
Initial State: The bucket has 3 tokens (full capacity).
First Request: Consumes 1 token. 2 tokens remain.
Second Request: Consumes 1 token. 1 token remains.
Third Request: Consumes 1 token. 0 tokens remain.
Fourth Request: There are no tokens left, so the request is denied. The user must wait for more tokens to be added.
After 1 second, 1 token is added to the bucket. The user can make another request.
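The two scenarios above can be sketched as a small simulation. This is a simplified model of the token bucket, not the gateway's actual Redis-backed implementation; the clock is passed in explicitly so the behaviour is deterministic.

```java
// Minimal token-bucket sketch illustrating replenishRate and burstCapacity.
// Time is supplied by the caller (in milliseconds) to keep the example deterministic.
public class TokenBucket {
    private final long replenishRatePerSecond;
    private final long burstCapacity;
    private double tokens;
    private long lastRefillMillis;

    public TokenBucket(long replenishRatePerSecond, long burstCapacity, long nowMillis) {
        this.replenishRatePerSecond = replenishRatePerSecond;
        this.burstCapacity = burstCapacity;
        this.tokens = burstCapacity;          // the bucket starts full
        this.lastRefillMillis = nowMillis;
    }

    // Attempts to consume one token at the given time; returns true if the
    // request is allowed, false if it should be rate-limited.
    public synchronized boolean tryConsume(long nowMillis) {
        // Refill tokens proportionally to elapsed time, capped at burstCapacity.
        double added = (nowMillis - lastRefillMillis) / 1000.0 * replenishRatePerSecond;
        tokens = Math.min(burstCapacity, tokens + added);
        lastRefillMillis = nowMillis;
        if (tokens >= 1.0) {
            tokens -= 1.0;
            return true;
        }
        return false;
    }
}
```

With replenishRate 1 and burstCapacity 3, the first three immediate requests succeed, the fourth is denied, and one second later a request succeeds again, exactly as in the second scenario above.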
Here’s a practical example using Spring Cloud Gateway with Redis Rate Limiting.
Configuration: In Routes.properties you can set rate limiting as
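For illustration, a route entry with the Redis rate limiter might look like the following. The property names follow standard Spring Cloud Gateway conventions; the route id, URI, and path are placeholders, and the generated Routes.properties in a deployment may differ.

```properties
spring.cloud.gateway.routes[0].id=example-service
spring.cloud.gateway.routes[0].uri=http://example-service:8080
spring.cloud.gateway.routes[0].predicates[0]=Path=/example-service/**
spring.cloud.gateway.routes[0].filters[0].name=RequestRateLimiter
spring.cloud.gateway.routes[0].filters[0].args.redis-rate-limiter.replenishRate=5
spring.cloud.gateway.routes[0].filters[0].args.redis-rate-limiter.burstCapacity=10
```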
Explanation:
Replenish Rate: 5 tokens per second.
Burst Capacity: 10 tokens.
Behavior:
A user can make up to 10 requests instantly (burst capacity).
After consuming the burst capacity, the user can make 5 requests per second (replenish rate).
API Service Rate Limiting
An API service wants to ensure clients do not overwhelm the server with too many requests. They set up rate limits as follows:
Replenish Rate: 100 tokens per minute.
Burst Capacity: 200 tokens.
Scenario:
A client can make 200 requests instantly.
After the burst capacity is exhausted, the client can make 100 requests per minute.
If any client tries to make more requests than allowed, they receive a response indicating they are being rate-limited.
Prevents Abuse: Limits the number of requests to prevent abuse or malicious attacks (e.g., DDoS attacks).
Fair Usage: Ensures fair usage among all users by preventing a single user from consuming all the resources.
Load Management: Helps manage server load and maintain performance by controlling the rate of incoming requests.
Improved User Experience: Prevents server overload, ensuring a smoother and more reliable experience for all users.
Rate limiting is crucial for traffic management, fair usage, and server resource protection. By setting parameters like replenishRate and burstCapacity, you can regulate request flow and manage traffic spikes efficiently. In Spring Cloud Gateway, the Redis Rate Limiter filter offers a robust solution for implementing rate limiting on your routes.
Deployment With Spring Cloud
We are updating our core services to remove outdated dependencies and ensure long-term support for the DIGIT platform. The current API gateway uses Netflix Zuul, which has dependencies that will soon be obsolete. To address this, we are building a new gateway using Spring Cloud Gateway.
What is Spring Cloud Gateway and how is it different from Zuul?
Spring Cloud Gateway and Zuul both function as API gateways but differ in architecture and design. Spring Cloud Gateway is ideal for modern, reactive applications, while Zuul is better suited for traditional, blocking I/O environments. The choice between them depends on your specific needs.
Navigating the new Gateway codebase
The new Gateway codebase is well-organized, with each module containing similar files and names. This makes it easy to understand the tasks each part performs. Below is a snapshot of the current directory structure.
Config: Contains configuration-related files, for example application properties.
Constants: It contains the constants referenced frequently within the codebase. Add any constant string literal here and then access it via this file.
Filters: This folder is the heart of the API gateway since it contains all the PRE, POST, and ERROR filters. For those new to filters: Filters intercept incoming and outgoing requests, allowing developers to apply various functionalities such as authentication, authorisation, rate limiting, logging, and transformation to these requests and responses.
Model: Contains the POJOs required in the gateway.
Producer: Contains code related to pushing data onto Kafka.
Ratelimiters: Contains files for initialising the relevant beans for custom rate limiting.
Utils: Contains helper functions that can be reused across the project.
The above paragraphs provide a basic overview of the gateway's functionality and project structure.
When a request is received, the gateway checks if it matches any predefined routes. If a match is found, the request goes through a series of filters, each performing specific validation or enrichment tasks. The exact order of these filters is discussed later.
The gateway also ensures that restricted routes have proper authentication and authorization. Some APIs can be whitelisted as open or mixed-mode endpoints, allowing them to bypass authentication or authorization.
Upon receiving a request, the gateway first looks for a matching route definition. If a match is found, it starts executing the pre-filters in the specified order.
Pre-Filter
RequestStartTimerFilter: Sets the request start time.
CorrelationIdFilter: Generates and sets a correlationId in each request to help track it in downstream services.
AuthPreCheckFilter: Checks whether authentication has to be performed.
PreHookFilter: Sends a pre-hook request.
RbacPreFilter: Checks whether authorisation has to be performed.
AuthFilter: Authenticates the request.
RbacFilter: Authorises the request.
RequestEnrichmentFilter: Enriches the request with userInfo and correlationId.
Error-Filter
This filter handles all the errors raised either during request processing or by the downstream service.
There are two ways to configure Rate Limits in Gateway
Default Rate Limiting
Service Level Rate Limiting
Default rate limiting sets a standard limit on the number of requests that can be made to the gateway within a specified time frame. This limit applies to all services unless specific rate limits are configured at the service level.
Service level rate limiting allows you to set specific rate limits for individual services. This means each service can have its request limits, tailored to its unique needs and usage patterns, providing more granular control over traffic management.
Note: We currently provide two options for keyResolver. If none is specified, Spring Cloud defaults to PrincipalNameKeyResolver, which retrieves the Principal from the ServerWebExchange and calls Principal.getName().
ipKeyResolver: Resolves the key based on the IP address of the request.
userKeyResolver: Resolves the key based on the user UUID of the request.
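Conceptually, a key resolver maps an incoming request to the bucket it should consume from. The sketch below models that idea in plain Java; the real gateway interface is Spring Cloud Gateway's KeyResolver operating on a ServerWebExchange, so the Map-based request here is only a stand-in.

```java
import java.util.Map;

// Simplified model of key resolution for rate limiting: each resolver
// extracts a bucket key from the request. A plain Map stands in for the
// gateway's ServerWebExchange in this sketch.
public class KeyResolvers {

    @FunctionalInterface
    public interface KeyResolver {
        String resolve(Map<String, String> request);
    }

    // Buckets requests by client IP address, like ipKeyResolver.
    public static final KeyResolver IP_KEY_RESOLVER =
            request -> request.get("remoteAddress");

    // Buckets requests by the authenticated user's UUID, like userKeyResolver.
    public static final KeyResolver USER_KEY_RESOLVER =
            request -> request.get("userUuid");
}
```

The choice of resolver decides the fairness granularity: an IP-based key throttles everyone behind a shared address together, while a user-UUID key gives each authenticated user an independent budget.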
To enable gateway routes, a service must activate the gateway flag in the Helm chart. Based on this flag, a Go script runs in the Init container, automatically generating the necessary properties for all services using the gateway.
NOTE: Restart the Gateway after making changes in service Values.YAML so that it can pick up the changes.
The objective of this service is to create a common point to manage all the email notifications being sent out of the platform. The notification email service consumes email requests from the Kafka notification topic and processes them to send them to a third-party service. Modules like PT, TL, PGR etc make use of this service to send messages through the Kafka Queue.
Before you proceed with the documentation, make sure the following pre-requisites are met -
Prior knowledge of Java/J2EE
Prior knowledge of SpringBoot
Prior knowledge of Third party API integration
Prior knowledge of REST APIs and related concepts like path parameters, headers, JSON etc
Prior knowledge of Kafka and related concepts like Producer, Consumer, Topic etc.
Provide a common platform to send email notifications to the user
Support localised email.
The egov-notification-mail service is a consumer that listens to the egov.core.notification.email topic, reads the message, and generates an email using the SMTP protocol. The service needs the sender's email configured; if it is not configured, the service fetches the email id by internally calling the egov-user service. Once the email is generated, the content is localised by the egov-localization service, after which the notification is sent to the email id.
Deploy the latest version of the notification email service.
Make sure the consumer topic name for email service is added in deployment configs
The email notification service is used to send out email notifications for all miscellaneous/ad-hoc services which citizens avail from ULBs.
Can perform service-specific business logic without impacting the other module.
In the future, if we want to expose the application to citizens then it can be done easily.
To integrate, the client service should send email requests to email notification consumer topics.
Add these properties in Values.YAML of Gateway helm file and then configure the values as per the use case. Read for more information about these properties.
If you want to define rate limiting for each service differently you can do so by defining these properties in the Values.YAML of the respective service. Read for more information about these properties.
Play around with the APIs:
This document highlights the changes needed in a module to support the user privacy feature.
Prior Knowledge of Java/J2EE.
Prior Knowledge of Spring Boot.
MDMS Service
Encryption Service
Prior Knowledge of React Js / Javascript
Upgrade services-common library version.
Upgrade tracer library version.
At the service level, or at the calculator service level where the demand generation process takes place, the payer details must be passed in plain form. These user details must be fetched through the SYSTEM user.
Create a system user in the particular environment and make sure that the system user has the role INTERNAL_MICROSERVICE_ROLE. Use the curl mentioned below:
Mention the uuid of the system user in the environment file and in the application.properties file of the service. Example:
Environment file:
egov-internal-microservice-user-uuid: b5b2ac70-d347-4339-98f0-5349ce25f99f
Application properties file:
egov.internal.microservice.user.uuid=b5b2ac70-d347-4339-98f0-5349ce25f99f
Create a method/function that calls the user search API to get the user details in plain form. In that call, pass the userInfo of the user with the INTERNAL_MICROSERVICE_ROLE role.
For reference, follow the below code snippet:
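The original snippet is not reproduced here; the following is a minimal sketch of how the request payload could be assembled. The field names (RequestInfo, userInfo, roles, uuid) follow common DIGIT conventions but are assumptions, and the actual call would POST this payload to the user service's search endpoint.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch: builds a user-search request enriched with the system user's
// userInfo carrying INTERNAL_MICROSERVICE_ROLE, so the user service
// returns the payer details in plain (decrypted) form.
public class PlainUserSearch {

    public static Map<String, Object> buildSearchRequest(String systemUserUuid,
                                                         String targetUserUuid,
                                                         String tenantId) {
        Map<String, Object> role = new HashMap<>();
        role.put("code", "INTERNAL_MICROSERVICE_ROLE");
        role.put("tenantId", tenantId);

        Map<String, Object> userInfo = new HashMap<>();
        userInfo.put("uuid", systemUserUuid);       // the system user's uuid
        userInfo.put("roles", List.of(role));

        Map<String, Object> requestInfo = new HashMap<>();
        requestInfo.put("userInfo", userInfo);

        Map<String, Object> request = new HashMap<>();
        request.put("RequestInfo", requestInfo);
        request.put("tenantId", tenantId);
        request.put("uuid", List.of(targetUserUuid)); // the user whose details are needed
        return request;
    }
}
```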
For a report where PII data is present and decryption is required, the report definition must have UUID values from the user table in the query result. The Source column section of the report definition must mention the UUID column, and for that entry the showColumn flag should be set to false. The base SQL query in the report config should be written in such a way that the query returns the user uuid.
Example:
In a report config where PII data is present and decryption is required, add the new field decryptionPathId. The Security Policy MDMS file must have the model configuration for that particular report, and the value of the key model in the Security Policy MDMS file and the decryptionPathId in the report must be the same.
Example:
For a searcher result where PII data is present and decryption is required, the searcher definition must have UUID values from the user table in the query result, and the base SQL query in the searcher config should be written in such a way that the query returns the user uuid. To get user-table values in decrypted form, the field decryptionPathId must be present in the searcher config. The Security Policy MDMS file must have the model configuration for that particular searcher config, and the value of the key model in the Security Policy MDMS file and the decryptionPathId in the searcher config must be the same.
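For instance, assuming a report named SampleReport (all values here are hypothetical), the report config would carry a decryptionPathId that matches the model key of the corresponding Security Policy entry:

```json
{
  "reportName": "SampleReport",
  "decryptionPathId": "SampleReport"
}
```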
To Do
For detailed information on the Security Policy MDMS file, refer to this .
The enc-client library is a supplementary Java library that provides encryption-related functionality so that every service does not need to pre-process the request before calling the encryption service.
MDMS Service
Encryption Service
Kafka
The MDMS configurations explained below are fetched by this library at boot time. So after you make changes in the MDMS repo and restart the MDMS service, you would also need to RESTART THE SERVICE which has imported the enc-client library. For example, the report service is using the enc-client library so after making configuration changes to Security Policy pertaining to any report, you will have to restart the report service.
Encrypt a JSON Object - The encryptJson function of the library takes any Java object as input and returns an object that has encrypted values for the selected fields. The fields to be encrypted are selected based on an MDMS configuration.
This function requires the following parameters:
Java/JSON object - The object whose fields will get encrypted.
Model - It is used to identify the MDMS configuration to be used to select fields of the provided object.
Tenant Id - The encryption key is selected based on the passed tenantId.
Encrypt a Value - The encryptValue function of the library can be used to encrypt single values. This method also requires a tenantId parameter.
Decrypt a JSON Object - The decryptJson function of the library takes any Java object as input and returns an object with plain, masked, or no values for the encrypted fields. The fields are identified based on the MDMS configuration. The returned value (plain/masked/null) of each attribute depends on the user’s role and on whether it is a PlainAccess request or a normal request. These configurations are part of the MDMS.
This function requires the following parameters:
Java/JSON object - The object containing the encrypted values that are to be decrypted.
Model - It is used to select a configuration from the list of all available MDMS configurations.
Purpose - A string parameter stating the reason for the decrypt request; it is used for audit purposes.
RequestInfo - The requestInfo parameter serves multiple purposes:
User Role - A list of user roles is extracted from the requestInfo parameter.
PlainAccess Request - If the request is an explicit plain-access request, it is passed as part of the requestInfo. It contains the fields that the user is requesting for decryption and the id of the record.
While decrypting a Java object, this method also audits the request.
All the configurations related to the enc-client library are stored in MDMS. These master data are stored in the DataSecurity module. It has two types of configurations:
Masking Patterns
Security Policy
The masking patterns for different types of attributes(mobile number, name, etc.) are configurable in MDMS. It contains the following attributes:
patternId - The unique pattern identifier. This id is referred to in the SecurityPolicy MDMS.
pattern - Defines the actual pattern according to which the value will be masked.
The Security Policy master data contains the policy used to encrypt and decrypt JSON objects. Each of the Security Policy contains the following details:
model - The unique identifier of the policy.
uniqueIdentifier - The field defined here should uniquely identify records passed to the decryptJson function.
attributes - Defines a list of fields from the JSON object that need to be secured.
roleBasedDecryptionPolicy - Defines the attribute-level role-based policy, i.e. the visibility for each attribute.
The visibility is an enum with the following options:
PLAIN - Show text in plain form.
MASKED - The returned text will contain masked data. The masking pattern will be applied as defined in the Masking Patterns master data.
NONE - The returned text will not contain any data; it would contain a string like “Confidential Information”.
The policy defines what level of visibility the decryptJson function should return for each attribute.
The Attribute defines a list of attributes of the model that are to be secured. The attribute is defined by the following parameters:
name - Uniquely identifies the attribute out of the list of attributes for a given model.
jsonPath - The json path of the attribute from the root of the model. Note that this jsonPath is NOT the same as the Jayway JsonPath library; it uses / and * to define json paths.
patternId - Refers to the pattern to be used for masking, as defined in the Masking Patterns master.
defaultVisibility - An enum configuring the default level of visibility of that attribute. If the visibility is not defined for a given role, this defaultVisibility applies.
This parameter is used to define the unique identifier of that model. It is used for the purpose of auditing the access logs. (This attribute’s jsonPath should be at the root level of the model.)
It defines attribute-level access policies for a list of roles. It consists of the following parameters:
roles - Defines a list of role codes to which the policy applies. Make sure not to duplicate a role code across policies; otherwise, any one of the policies will get chosen for that role code.
attributeAccessList - Defines a list of attributes for which the visibility differs from the default for those roles.
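Putting these pieces together, a Security Policy entry might look like the sketch below. The attribute names and the exact keys inside attributeAccessList are assumptions for illustration; refer to the DataSecurity master in MDMS for the authoritative schema.

```json
{
  "model": "User",
  "uniqueIdentifier": { "name": "uuid", "jsonPath": "/uuid" },
  "attributes": [
    {
      "name": "mobileNumber",
      "jsonPath": "/mobileNumber",
      "patternId": "001",
      "defaultVisibility": "MASKED"
    }
  ],
  "roleBasedDecryptionPolicy": [
    {
      "roles": ["EMPLOYEE"],
      "attributeAccessList": [
        { "attribute": "mobileNumber", "firstLevelVisibility": "PLAIN" }
      ]
    }
  ]
}
```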
There are two levels of visibility:
First level Visibility - It applies to normal search requests. The search response could have multiple records.
Second level Visibility - It is applied only when a user explicitly requests for plain access of a single record with a list of fields required in plain.
Second level visibility can be requested by passing plainAccessRequest in the RequestInfo.
Any user will be able to get plain access to the secured data (citizen’s PII) by requesting it through the plainAccessRequest parameter. It takes the following parameters:
recordId - The unique identifier of the record that is requested for plain access.
fields - A list of attributes that are requested for plain access.
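As an illustration, a plainAccessRequest inside the RequestInfo might look like this (the recordId and field names are placeholders):

```json
{
  "RequestInfo": {
    "plainAccessRequest": {
      "recordId": "b5b2ac70-d347-4339-98f0-5349ce25f99f",
      "fields": ["mobileNumber", "fatherName"]
    }
  }
}
```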
Every decrypt request is audited. Based on the uniqueIdentifier defined as part of the Security Policy, the audit lists the identifiers of the records that were decrypted as part of the request.
Each Audit Object contains the following attributes:
Encryption Service is used to secure sensitive data that is being stored in the database. The encryption service uses envelope encryption.
Before you proceed with the documentation, make sure the following pre-requisites are met -
Java 17.
Kafka server is up and running.
Encryption Service offers following features :
Encrypt - The service encrypts the data based on the given input parameters and the data to be encrypted. The encrypted data will mandatorily be of type string.
Decrypt - The decryption happens solely based on the input data (no extra parameters are required). The encrypted data carries the identity of the key used at the time of encryption; the same key is used for decryption.
Sign - The Encryption Service can hash and sign the data, which can be used as a unique identifier of the data. This can also be used for searching a given value in a datastore.
Verify - Based on the input signature and the claim, it verifies whether the given signature is correct for the provided claim.
Rotate Key - The Encryption Service supports changing the key used for encryption. The old key remains with the service and is used to decrypt old data; all new data is encrypted with the new key.
Following are the properties in application.properties file in egov-enc-service which are configurable.
The Encryption service is used to encrypt sensitive data that needs to be stored in the database. For each tenant, a different data encryption key (DEK) is used. The DEK is encrypted using a key encryption key (KEK). Currently, two implementations of encrypting the data encryption keys are available: the first uses the AWS KMS service and the second uses a master password. For any custom implementation, the MasterKeyProvider interface in the service should be extended. Based on the master.password.provider flag in application.properties, you can choose which implementation of the MasterKeyProvider interface to use.
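The envelope scheme above can be sketched in plain Java: the data is encrypted with a DEK, and the DEK itself is wrapped with a KEK. This is a deliberately simplified model (ECB mode, no IV, no key store) and not the egov-enc-service implementation; production code should use an authenticated mode such as AES/GCM.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;

// Envelope-encryption sketch: wrap the DEK with the KEK, store
// (encryptedDek, cipherData); to read, unwrap the DEK, then decrypt the data.
public class EnvelopeDemo {

    static SecretKey newAesKey() throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        return kg.generateKey();
    }

    static byte[] run(int mode, SecretKey key, byte[] input) throws Exception {
        // Demo only: real implementations should use AES/GCM with a random IV.
        Cipher c = Cipher.getInstance("AES/ECB/PKCS5Padding");
        c.init(mode, key);
        return c.doFinal(input);
    }

    public static String roundTrip(String data) {
        try {
            SecretKey kek = newAesKey();   // key-encryption key (e.g. from KMS or master password)
            SecretKey dek = newAesKey();   // per-tenant data-encryption key

            byte[] encryptedDek = run(Cipher.ENCRYPT_MODE, kek, dek.getEncoded());
            byte[] cipherData = run(Cipher.ENCRYPT_MODE, dek,
                    data.getBytes(StandardCharsets.UTF_8));

            // Decryption path: unwrap the DEK with the KEK, then decrypt the data.
            byte[] dekBytes = run(Cipher.DECRYPT_MODE, kek, encryptedDek);
            SecretKey restoredDek = new SecretKeySpec(dekBytes, "AES");
            return new String(run(Cipher.DECRYPT_MODE, restoredDek, cipherData),
                    StandardCharsets.UTF_8);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

The point of the envelope is that rotating or protecting the KEK (in KMS or derived from a master password) does not require re-encrypting all stored data, only re-wrapping the DEKs.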
Can perform encryption without having to re-write encryption logic every time in every service.
To integrate, the host of encryption-services module should be overwritten in the helm chart.
/crypto/v1/_encrypt should be added as an endpoint for encrypting input data in the system.
/crypto/v1/_decrypt should be added as the decryption endpoint.
/crypto/v1/_sign should be added as the endpoint for providing a signature for a given value.
/crypto/v1/_verify should be added as the endpoint for verifying whether the signature for the provided value is correct.
/crypto/v1/_rotatekey should be added as an endpoint to deactivate the keys and generate new keys for a given tenant.
a) POST /crypto/v1/_encrypt
Encrypts the given input value/s OR values of the object.
b) POST /crypto/v1/_decrypt
Decrypts the given input value/s OR values of the object.
c) POST /crypto/v1/_sign
Provides a signature for a given value.
d) POST /crypto/v1/_verify
Check if the signature is correct for the provided value.
e) POST /crypto/v1/_rotatekey
Deactivate the keys for the given tenant and generate new keys. It will deactivate both symmetric and asymmetric keys for the provided tenant.
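For example, an _encrypt request wrapping a tenant-scoped value might look like the sketch below. The exact schema should be confirmed against the service's API contract; the tenantId, type, and value payload here are placeholders.

```json
{
  "encryptionRequests": [
    {
      "tenantId": "pb",
      "type": "Normal",
      "value": { "mobileNumber": "9999999999" }
    }
  ]
}
```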
An eGov core application that handles uploading different kinds of files to the server, including images and various document types.
Prior Knowledge of Java/J2EE.
Prior Knowledge of Spring Boot.
Prior Knowledge of REST APIs and related concepts like path parameters, headers, JSON etc.
Prior knowledge of AWS and Azure.
The filestore application takes in a request object containing an image, document, or any other kind of file and stores it on disk/AWS/Azure depending on the configuration; additional implementations can be written for the app interface to interact with any other remote storage.
The file to be uploaded is taken in as a multipart file part and saved to the storage, and a UUID is returned as a unique identifier for that resource, which can be used to fetch the document later.
In the case of images, the application creates three additional copies of the file (large, medium, and small) for use as thumbnails or low-quality images in mobile applications.
The search API takes the UUID and tenantId as mandatory URL params along with a few optional parameters and returns presigned URLs of the files from the server. In the case of images, a single string containing multiple comma-separated URLs is returned, representing the different stored sizes of the image.
The application is present in the core group of applications in the eGov-services Git repository. It is a Spring Boot application but needs the Lombok extension added to your IDE to load it. Once the application is up and running, API requests can be posted to the URL and ids can be generated.
NOTE: In the case of IntelliJ, the plugin can be installed directly; for Eclipse, the Lombok jar location has to be added in the eclipse.ini file in this format: -javaagent:lombok.jar.
For the API information, please refer to the Swagger YAML.
NOTE : The application needs at least one type of storage available for it to store the files either file-storage, AWS S3 or azure. More storage types can be added by extending the application interface also.
IMPORTANT: For any of the file storage options to work, there are some application properties that need to be configured.
DiskStorage:
The mount path of the disk should be provided in the following variable to save files to the disk: file.storage.mount.path=path.
Following are the variables that need to be populated based on the AWS/Azure account you are integrating with.
How to enable Minio SDC:
isS3Enabled = true(Should be true)
aws.secretkey = {minio_secretkey}
aws.key = {minio_accesskey}
fixed.bucketname = egov-rainmaker(Minio bucket name)
minio.source = minio
How to enable AWS S3:
isS3Enabled = true(Should be true)
aws.secretkey = {s3_secretkey}
aws.key = {s3_accesskey}
fixed.bucketname = egov-rainmaker(S3 bucket name)
minio.source = minio
AZURE:
isAzureStorageEnabled - informing the application whether Azure is available or not
azure.defaultEndpointsProtocol - type of protocol https
azure.accountName - name of the user account
azure.accountKey - secret key of the user account
NFS :
isnfsstorageenabled-informing the application whether NFS is available or not <True/False>
file.storage.mount.path - <NFS location, example /filestore>
source.disk - diskStorage - name of storage
disk.storage.host.url=<Main Domain URL>
Allowed formats to be uploaded: the default format is a set of strings in braces, e.g. {"jpg", "png"}. Make sure to follow the same.
The key in the map is the visible extension of the file type; the values on the right in curly braces are the respective Tika types of the file. These values can be found on the Tika website or by passing the file through Tika functions.
Upload POST API to save the files in the server
Search Files GET API to retrieve files based only on id and tenantid
Search URLs GET API to retrieve pre-signed URLs for a given array of ids
Deploy the latest version of Filestore Service.
Add role-action mapping for APIs.
The filestore service is used to upload and store documents which citizens add while availing of services from ULBs.
Can perform file upload independently without having to add fileupload specific logic in each module.
To integrate, the host of the filestore module should be overwritten in the helm chart.
/filestore/v1/files
should be added as the endpoint for uploading files in the system
/filestore/v1/files/url
should be added as the search endpoint. This method handles all requests to search existing files depending on different search criteria
Here is a link to a sample master data.
To know more about regular expressions, refer to the below articles. To test regular expressions, refer to the below link.
Deploy the latest version of the Encryption Service.
Note: This video will give you an idea of how to deploy any DIGIT service. Further, you can find the latest builds for each service here.
Add role-action mapping for the APIs.
Go to : and click on file -> import url.
Then add the raw URL of the API doc in the pop-up.
In case the URL is unavailable, please go to the of the egov-services Git repo and find the YAML for egov-filestore.
minio.url = .backbone:9000(Minio server end point)
minio.url =
| Property | Value | Description |
| --- | --- | --- |
| master-password | asd@#$@$!132123 | Master password for encryption/decryption. It can be any string. |
| master.salt | qweasdzx | A salt is random data used as an additional input to a one-way function that hashes data, a password, or a passphrase. It needs to be an alphanumeric string of length 8. |
| master.initialvector | qweasdzxqwea | An initialization vector is a fixed-size input to a cryptographic primitive. It needs to be an alphanumeric string of length 12. |
| size.key.symmetric | 256 | Default size of the symmetric key. |
| size.key.asymmetric | 1024 | Default size of the asymmetric key. |
| size.initialvector | 12 | Default size of the initial vector. |
| master.password.provider | software | Name of the implementation to be used for encrypting DEKs. |
|  | NA | AWS access key to access the KMS service. (Note: this field is required only if master.password.provider is set to kms.) |
|  | NA | AWS secret to access the KMS service. (Note: this field is required only if master.password.provider is set to kms.) |
|  | NA | AWS region to access the KMS service. (Note: this field is required only if master.password.provider is set to kms.) |
|  | NA | Id of the KMS key to be used for encrypting the DEK. (Note: this field is required only if master.password.provider is set to kms.) |
Indexer uses a config file per module to store all the configurations pertaining to that module. The Indexer reads multiple such files at start-up to support indexing for all the configured modules. In the config, we define the source and destination elastic search index names, custom mappings for data transformation, and mappings for data enrichment.
Below is the sample configuration for indexing TL application creation data into elastic search.
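The full sample configuration is not reproduced here; the sketch below shows its overall shape using the keys described in the table that follows. The topic, index names, and JSONPaths are illustrative placeholders, not the actual TL configuration.

```json
{
  "ServiceMaps": {
    "serviceName": "Trade License",
    "summary": "Trade License indexing config",
    "version": "1.0.0",
    "mappings": [
      {
        "topic": "save-tl-tradelicense",
        "configKey": "INDEX",
        "indexes": [
          {
            "name": "tlindex",
            "type": "licenses",
            "id": "$.Licenses.*.id",
            "isBulk": true,
            "jsonPath": "$.Licenses",
            "timeStampField": "$.Licenses.*.auditDetails.createdTime"
          }
        ]
      }
    ]
  }
}
```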
The table below lists the key configuration variables.
serviceName
Name of the module to which this configuration belongs.
summary
Summary of the module.
version
Version of the configuration.
mappings
List of definitions within the module. Every definition corresponds to one index requirement. Which means, every object received onto the kafka queue can be used to create multiple indexes, each of these indexes will need configuration, all such configurations belonging to one topic forms one entry in the mappings list. The keys listed henceforth together form one definition and multiple such definitions are part of this mappings key.
topic
The topic on which the data is to be received to activate this particular configuration.
configKey
Key to identify to what type of job is this config for. values: INDEX, REINDEX, LEGACYINDEX. INDEX: LiveIndex, REINDEX: Reindex, LEGACYINDEX: LegacyIndex.
indexes
Key to configure multiple index configurations for the data received on a particular topic. Multiple indexes based on different requirements can be created from the same object.
name
Index name on Elasticsearch. (The index is created with this name if it does not already exist.)
type
Document type within that index to which the index json has to go. (Elasticsearch uses the structure of index/type/docId to locate any file within index/type with id = docId)
id
Takes comma-separated JsonPaths. The JSONPath is applied on the record received on the queue, the values hence obtained are appended and used as ID for the record.
isBulk
Boolean key to identify whether the JSON received on the Queue is from a Bulk API. In simple words, whether the JSON contains a list at the top level.
jsonPath
JSONPath to be used when indexing only a part of the input JSON, or when indexing a custom JSON whose values are to be fetched from this part of the input.
timeStampField
JSONPath of the field in the input which can be used to obtain the timestamp of the input.
fieldsToBeMasked
A list of JSONPaths of the fields of the input to be masked in the index.
customJsonMapping
Key to be used while building an entirely different object using the input JSON received on the queue.
indexMapping
A skeleton/mapping of the JSON that is to be indexed. Note that this JSON must always contain a key called "Data" at the top level, and the custom mapping begins within this key. This is only a convention to simplify dashboarding on Kibana when data from multiple indexes has to be fetched for a single dashboard.
fieldMapping
Contains a list of configurations. Each configuration contains keys to identify the field of the input JSON that has to be mapped to the fields of the index json which is mentioned in the key 'indexMapping' in the config.
inJsonPath
JSONPath of the field from the input.
outJsonPath
JSONPath of the field of the index json.
externalUriMapping
Contains a list of configurations. Each configuration contains keys to identify the field of the input JSON that is to be enriched using APIs from the external services. The configuration for those APIs also is a part of this.
path
URI of the API to be used. (It should be a POST /_search API.)
queryParam
Configuration of the query params to be used for the API call. It is a comma-separated key-value pair, where the key is the parameter name as per the API contract and value is the JSONPath of the field to be equated against this parameter.
apiRequest
Request Body of the API. (Since we only use _search APIs, it should be only RequestInfo.)
uriResponseMapping
Contains a list of configurations. Each configuration contains two keys: the first is a JSONPath to identify the field from the response; the second is a JSONPath that maps the response field to a field of the index JSON mentioned in the key 'indexMapping'.
mdmsMapping
Contains a list of configurations. Each configuration contains keys to identify the field of the input JSON that is to be denormalized using APIs from the MDMS service. The configuration for those MDMS APIs also is a part of this.
path
URI of the API to be used. (It should be a POST /_search API.)
moduleName
Module Name from MDMS.
masterName
Master Name from MDMS.
tenantId
Tenant id to be used.
filter
Filter to be applied to the data to be fetched.
filterMapping
Maps the fields of the input JSON to variables in the filter.
variable
Variable in the filter
valueJsonpath
JSONPath of the input to be mapped to the variable.
The Internal Gateway is a simplified Zuul service that provides easy integration of services running in different namespaces of a multi-state instance. Clients need not know the details of the microservices and their namespaces in the K8s setup.
Before you proceed with the documentation, make sure the following pre-requisites are met -
Java 17
API Gateway
Provides an easier API interface between services running in different tenants (namespaces), where direct access between microservices is blocked by default.
Allows refactoring microservices independently without forcing the clients to refactor integrating logic with other tenants.
Route filter - a single route filter enables routing based on the tenantId from the HTTP header of incoming requests.
For each service, the below-mentioned property has to be added in internal-gateway.json
OpenAPI Swagger Documentation
The User Service stores PII data in the database in encrypted form, so any service reading that data directly from the database has to decrypt it before responding to the user. As of now, to the best of our knowledge, the following services read user data from the database:
User Service
Report Service
Searcher Service
Enc-Client Library is a supplementary library with features like masking, auditing, etc. The above services call the functions from the library to decrypt the data read from the database.
The ID generation service avoids unnecessary repetition of ID-generating code across modules and gives central control over the logic, reducing the maintenance burden on developers. It is a config-based application that can be used without writing a single line of code.
Prior knowledge of Java/J2EE.
Prior knowledge of Spring Boot and Flyway.
Prior knowledge of REST APIs and related concepts like path parameters, headers, JSON etc.
The application exposes a REST API that takes in requests and returns IDs in the requested format. The requested format can combine current date information, a random number, and a sequence-generated number. An ID can be generated by providing a request with any of the above information.
The ID Amritsar-PT-2019/09/12-000454-99 contains:
Amritsar - which is the name of the city
PT - a fixed string representing the module code(PROPERTY TAX)
2019/09/12 - date
000454 - sequence generated number
99 - random number
The ID generated in the example above needs the following format:
[city]-PT-[cy:yyyy/mm/dd]-[SEQ_SEQUENCE_NAME]-[d{4}]
Everything in the square brackets will be replaced with the appropriate values by the app.
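To make the substitution concrete, here is a minimal sketch in Python (not the service's actual Java implementation) of how such a format string could be expanded. The city name and sequence value are assumed inputs; the real service resolves [city] from location data and [SEQ_*] from a DB sequence:

```python
import random
import re
from datetime import date

def expand_id_format(fmt, city, next_seq):
    """Illustrative expansion of an IDGen-style format string."""
    out = fmt.replace("[city]", city)
    # [cy:yyyy/mm/dd] -> current date in the given format
    out = re.sub(r"\[cy:yyyy/mm/dd\]", date.today().strftime("%Y/%m/%d"), out)
    # [SEQ_NAME] -> zero-padded next value of the named sequence
    out = re.sub(r"\[SEQ_[A-Z_]+\]", f"{next_seq:06d}", out)
    # [d{n}] -> random number of n digits (length 2 if unspecified)
    def rand(m):
        n = int(m.group(1)) if m.group(1) else 2
        return str(random.randint(10 ** (n - 1), 10 ** n - 1))
    out = re.sub(r"\[d(?:\{(\d+)\})?\]", rand, out)
    return out

print(expand_id_format("[city]-PT-[cy:yyyy/mm/dd]-[SEQ_SEQUENCE_NAME]-[d{4}]",
                       "Amritsar", 454))
```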
By default, the IDGen service now reads its configuration from MDMS. DB configuration requires access to the DB, so MDMS is the preferred method for configuration. The configuration needs to be stored in common-masters/IdFormat.json in MDMS.
It is recommended to keep IdFormat as a state-level master. To read the configuration from the DB instead, set the environment variable IDFORMAT_FROM_MDMS to false.
ID-FORMAT-REPLACEABLES:
[FY:] - represents the financial year, the string will be replaced by the value of starting year and the last two numbers of the ending year separated by a hyphen. For instance: 2018-19 in case of the financial year 2018 to 2019.
[cy:] - any string that starts with cy will be considered as the date format. The values after the cy: is the format using which output will be generated.
[d{5}] - d represents the random number generator; the length of the random number can be specified in curly brackets next to d. If the value is not provided, a random number of length 2 is generated.
[city] - The string city will be replaced by the city code provided by the respective ULB in location services.
[SEQ_*] - Strings starting with SEQ are considered sequence names, which are queried to get the next sequence number. If the sequence name does not start with the "SEQ" prefix, the application does not treat it as a sequence. If the sequence is absent from the DB, an error is thrown.
[tenantid] - replaces the placeholder with the tenantid passed in the request object.
[tenant_id] - replaces the placeholder with the tenantid passed in the request object. Replaces all `.` with `_`
[TENANT_ID] - replaces the placeholder with the tenantid passed in the request object. Replaces all `.` with `_`, and changes the case to upper case.
When both idName and format are used in a request, IDGen first checks whether a format for the given idName exists; if it does not, the format from the request is used.
If you want a state-level sequence, use a fixed sequence name.
But if you want a ULB-level sequence, the sequence name should be dynamic, based on the tenantId, as given in the example below.
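As an illustration (the sequence names here are assumptions, not shipped configuration), a state-level format can use a fixed sequence name, while a ULB-level format embeds the tenant in the name so that each ULB gets its own sequence:

```
State-level : [city]-TL-[cy:yyyy/mm/dd]-[SEQ_EG_TL_APL]
ULB-level   : [city]-TL-[cy:yyyy/mm/dd]-[SEQ_EG_TL_APL_[TENANT_ID]]
```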
The SEQ_* replaceables used in ID generation expect, by default, that the sequence already exists in the DB. This behaviour can be changed and controlled using two environment variables while deploying the service:
AUTOCREATE_NEW_SEQ: Defaults to false. When set to true, sequences are auto-created when the format has been derived using the provided idName. Since the idName format comes from the DB or MDMS, it is a trusted value, and this variable should be set to true. This ensures that no DB configuration is needed as long as MDMS has been configured. It is recommended that each service using IDGen generate IDs using idName instead of passing the format directly, so that no DB configuration is needed for creating sequences.
AUTOCREATE_REQUEST_SEQ: Defaults to false. When set to true, sequences are auto-created when the format has been derived from the format parameter in the request. It is recommended to keep this false, as anyone with access to IDGen could otherwise create any number of sequences and overload the DB. During the initial setup of an environment, however, this variable can be kept true so that all the sequences are created when the initial flows are run from the UI; afterwards, the flag should be disabled.
Add MDMS configs required for ID Gen service and restart the MDMS service.
Deploy the latest version of the ID generation service.
Add role-action mapping for APIs.
The ID Gen service is used to generate unique ID numbers for all miscellaneous/ad-hoc services which citizens avail from ULBs.
Can perform service-specific business logic without impacting the other module.
Provides the capability of generating the unique identifier of the entity calling ID Gen service.
To integrate, the host of idgen-services module should be overwritten in the helm chart.
/egov-idgen/id/_generate
should be added as the endpoint for generating ID numbers in the system
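As an illustrative sketch of the request body (the idName, tenantId and format values are placeholders, not shipped configuration), a generate call carries a list of idRequests:

```json
{
  "RequestInfo": {},
  "idRequests": [
    {
      "idName": "pt.assessmentnumber",
      "tenantId": "pb.amritsar",
      "format": "[city]-PT-[cy:yyyy/mm/dd]-[SEQ_SEQUENCE_NAME]-[d{4}]"
    }
  ]
}
```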
The Indexer Service operates independently and is responsible for all indexing tasks on the DIGIT platform. It processes records from specific Kafka topics and utilizes the corresponding index configuration defined in YAML files by each module.
Objectives:
Efficiently read and process records from Kafka topics.
Retrieve and apply appropriate index configurations from YAML files.
To provide a one-stop framework for indexing the data to Elasticsearch.
To create provisions for indexing live data, reindexing from one index to the other and indexing legacy data from the data store.
Before you proceed with the configuration, make sure the following pre-requisites are met -
Prior knowledge of Java/J2EE
Prior knowledge of SpringBoot
Prior knowledge of Elasticsearch
Prior knowledge of REST APIs and related concepts like path parameters, headers, JSON etc.
Prior knowledge of Kafka and related concepts like Producer, Consumer, Topic etc.
Performs three major tasks namely: LiveIndex, Reindex and LegacyIndex.
LiveIndex: Task of indexing the live transaction data on the platform. This keeps the ES data in sync with the DB.
Reindex: Task of indexing data from one index to another. ES already provides this feature; the indexer does the same but with data transformation.
LegacyIndex: Task of indexing legacy data from the tables to ES.
Provides flexibility to index the entire object, a part of the object or an entirely different custom object all using one input JSON from modules.
Provides features for customizing index JSON by field mapping, field masking, data enrichment through external APIs and data denormalization using MDMS.
One-stop shop for all the es index requirements with easy-to-write and easy-to-maintain configuration files.
Designed as a consumer to save API overhead. The consumer configs are written from scratch for complete control over consumer behaviour.
Step 1: Write the configuration as per your requirement. The structure of the config file is explained later in the same doc.
Step 3: Provide the absolute path of the checked-in file to the DevOps team. They will add it to the file-read path of egov-indexer by updating the environment manifest file, ensuring it is read at the time of the application's startup.
Step 4: Run the egov-indexer app. Since it is a consumer, it starts listening to the configured topics and indexes the data.
a) POST /{key}/_index
Receives data and indexes it. There should be a mapping with the topic as {key} in the index config files.
b) POST /_reindex
This is used to migrate data from one index to another.
c) POST /_legacyindex
This runs the LegacyIndex job to index data from the DB. The request body must mention the URL of the service that the indexer calls to pick up the data.
DIGIT supports multiple languages. To enable this feature, begin by setting up the base product localisation. Multilingual UI support makes it easier for users to understand DIGIT operations.
Before you proceed with the configuration, make sure the following pre-requisites are met -
Knowledge of React and the eGov framework is required before starting the localisation setup.
Before setting up localisation, make sure that all the keys are pushed to the Create API, and prepare the values that need to be added for each localisation key in the particular languages being added to the product.
Make sure you know where to add the localisation in the code.
After localisation, users can view DIGIT screens in their preferred language. Completing the application is simple as the DIGIT UI allows easy language selection.
Once the key is added to the code as per requirement, the deployment can be done in the same way as the code is deployed.
Select a label that needs to be localised from the Product code. Here is the example code for a header before setting up Localisation.
The code above supports only the English language. To set up localisation for that header, we need to modify the code in the following manner.
When comparing the code before and after the Localisation setup, we can see that the following code has been added.
{
labelName: "Trade Unit ",
labelKey: "TL_NEW_TRADE_DETAILS_TRADE_UNIT_HEADER"
},
The values can be added to the key using two methods: either via the newly developed localisation screen, or by updating the key values through the Postman application via the create API.
This document provides information on migrating location data from an older JSON-based configuration to a newly developed boundary service. A boundary-migration service has been created to facilitate this process. Users can migrate the data to the new boundary service by calling a single API.
Below is the step-by-step procedure:
Open Postman and create a new HTTP endpoint with a path as
and the above as the endpoint request body.
PS: The boundary-migration service is assumed to run on port 8081. If it is running on a different port, check the active port and update the path accordingly.
Send the request; if the endpoint path and request body are correct, the response will be 200 OK.
Below is a reference CURL:
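The reference cURL is not reproduced on this page. The sketch below shows the general shape, assuming the service runs locally on port 8081; the endpoint path here is a placeholder, so substitute the actual migration endpoint from the service's API contract:

```
# Sketch only: <migration-endpoint-path> and the payload file are placeholders
curl -X POST "http://localhost:8081/<migration-endpoint-path>" \
  -H "Content-Type: application/json" \
  -d @boundary-data-with-request-info.json
```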
An eGov core application which provides locale-specific components and translates text for the eGov group of applications.
Prior knowledge of Java/J2EE.
Prior knowledge of Spring Boot.
Prior knowledge of REST APIs and related concepts like path parameters, headers, JSON etc.
Prior knowledge of Redis and Postgres.
The localization application stores locale data as key-value pairs along with the module, tenantId and locale. The module defines which eGov application owns the locale data, the tenantId does the same for the tenant, and the locale is the specific locale for which data is being added.
The request can be posted through the post API with the above-mentioned variables in the request body.
Once posted the same data can be searched based on the module, locale and tenantId as keys.
Data posted to the localization service is permanently stored in the database and loaded into the Redis cache for quick access. Every time new data is added, the Redis cache is refreshed.
Deploy the latest version of the Localization Service.
Add role-action mapping for APIs.
The localization service is used to store key-value pairs of metadata in different languages for all miscellaneous/ad-hoc services which citizens avail from ULBs.
Can perform service-specific business logic without impacting the other module.
Provides the capability of having multiple languages in the module.
To integrate, the host of the localization-services module should be overwritten in the helm chart.
/localization/messages/v1/_upsert
should be added as the create endpoint for creating localization key-value pairs in the system
/localization/messages/v1/_search
should be added as the search endpoint. This method handles all requests to search existing records depending on different search criteria
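As an illustration of the _upsert payload (the field values here are examples, not shipped data), the request body generally looks like:

```json
{
  "RequestInfo": {},
  "tenantId": "pb",
  "messages": [
    {
      "code": "TL_NEW_TRADE_DETAILS_TRADE_UNIT_HEADER",
      "message": "Trade Unit",
      "module": "rainmaker-tl",
      "locale": "en_IN"
    }
  ]
}
```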
MDMS v2 provides APIs for defining schemas, searching schemas, and adding master data against these defined schemas. All data is now stored in PostgreSQL tables instead of GitHub. MDMS v2 currently also includes v1 search API for fetching data from the database in the same format as MDMS v1 search API to ensure backward compatibility.
Prior knowledge of Java/J2EE.
Prior knowledge of Spring Boot.
Prior knowledge of REST APIs and related concepts like path parameters, headers, JSON etc.
Prior knowledge of Git.
Advanced knowledge of operating JSON data would be an added advantage to understanding the service.
Create schema: MDMS v2 introduces functionality to define your schema with all validations and properties supported by JSON schema draft 07. Below is a sample schema definition for your reference -
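The original sample is not reproduced on this page. The sketch below shows what a minimal draft-07 schema with the MDMS-specific x-unique key could look like; the master name, fields and the exact shape of x-unique are assumptions for illustration:

```json
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "title": "Department",
  "description": "Master to capture department details",
  "type": "object",
  "properties": {
    "code": { "type": "string" },
    "name": { "type": "string" },
    "active": { "type": "boolean" }
  },
  "required": ["code", "name"],
  "x-unique": ["code"]
}
```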
To create a basic schema definition, use the following keywords:
$schema: specifies which draft of the JSON Schema standard the schema adheres to.
title and description: state the intent of the schema. These keywords don’t add any constraints to the data being validated.
type: defines the first constraint on the JSON data.
Additionally, we have two keys which are not part of standard JSON schema attributes -
x-unique: specifies the fields in the schema from which a unique identifier for each master data entry is created.
x-ref-schema: specifies referenced data. This is useful in scenarios where the parent-child relationship needs to be established in master data. For example, Trade Type can be a parent master data to Trade Sub Type. In the example above, the field path represents the JsonPath of the attribute in the master data which contains the unique identifier of the parent which is being referenced. Schema code represents the schema under which the referenced master data needs to be validated for existence.
Search schema: MDMS v2 has an API to search schemas based on tenantId, schema code, and unique identifier.
Create data: MDMS v2 enables data creation according to the defined schema. Below is an example of data based on the mentioned schema:
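The example itself is not reproduced on this page. As a sketch (assuming a hypothetical Department schema with code/name/active fields; the wrapper field names are illustrative), a create payload could look like:

```json
{
  "RequestInfo": {},
  "Mdms": {
    "tenantId": "pb",
    "schemaCode": "common-masters.Department",
    "data": {
      "code": "ENG",
      "name": "Engineering Department",
      "active": true
    },
    "isActive": true
  }
}
```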
Search data: MDMS v2 exposes two search APIs - v1 and v2 search where v1 search API is completely backward compatible.
Update data: MDMS v2 allows updating master data fields.
Fallback functionality: Both the search APIs have fallback implemented where if data is not found for a particular tenant, the services fall back to the parent tenant(s) and return the response. If data is not found even for the parent tenant, an empty response is sent to the user.
/mdms-v2/schema/v1/_create - Takes RequestInfo and SchemaDefinition in the request body. SchemaDefinition has all attributes which define the schema.
/mdms-v2/schema/v1/_search - Takes RequestInfo and SchemaDefSearchCriteria in the request body and returns schemas based on the provided search criteria.
/mdms-v2/v2/_create/{schemaCode} - Takes RequestInfo and Mdms in the request body where the MDMS object has all the information for the master being created and it takes schemaCode as path param to identify the schema against which data is being created.
/mdms-v2/v2/_search - Takes RequestInfo and MdmsCriteria in the request body to return master data based on the provided search criteria. It also has a fallback functionality where if data is not found for the tenant which is sent, the services fall back to the parent tenant(s) to look for the data and return it.
/mdms-v2/v2/_update/{schemaCode} - Takes RequestInfo and Mdms in the request body where the MDMS object has all the information for the master being updated and it takes schemaCode as path param to identify the schema against which data is being updated.
/mdms-v2/v1/_search - This is a backwards-compatible API which takes RequestInfo and MdmsCriteria in the request body to return master data based on the provided search criteria and returns the response in MDMS v1 format. It also has fallback functionality where if data is not found for the tenant which is sent, the services fall back to the parent tenant(s) to look for the data and return it.
Learn how to configure Localization service.
Configure service to enable fetch and share of location details
A core application that provides location details of the tenant for which the services are being provided.
Before you proceed with the documentation, make sure the following pre-requisites are met -
Java 17
PostgreSQL server is running and the DB is created
The location information is also known as the boundary data of the ULB.
Boundary data can be of different hierarchies - ADMIN or ELECTION hierarchy defined by the Administrators and Revenue hierarchy defined by the Revenue department.
The election hierarchy has the locations divided into several types like zone, election wards, blocks, streets and localities. The Revenue hierarchy has the locations divided into a zone, ward, block and locality.
The model which defines localities such as zone and ward is a boundary object that contains information like name, lat, long, and parent or child boundaries, if any. Boundaries are nested in a hierarchy: for instance, a zone contains wards, a ward contains blocks, and a block contains localities. The order in which the boundaries are nested differs across tenants.
Add/Update the MDMS master file which contains boundary data of ULBs.
Add Role-Action mapping for the egov-location APIs.
Deploy/Redeploy the latest version of the egov-mdms service.
Fill the above environment variables in the egov-location with proper values.
Deploy the latest version of the egov-location service.
The boundary data has been moved from the master tables in the DB to MDMS. The location service fetches the JSON from MDMS and parses it into the boundary object structure mentioned above. A sample master would look like the one below.
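A minimal illustrative sketch following the TenantBoundary fields described in this document (the codes, names and coordinates are placeholders, not real master data):

```json
{
  "tenantId": "pb.amritsar",
  "moduleName": "egov-location",
  "TenantBoundary": [
    {
      "hierarchyType": { "name": "ADMIN", "code": "ADMIN" },
      "boundary": {
        "id": "1",
        "boundaryNum": 1,
        "name": "Amritsar",
        "localname": "Amritsar",
        "longitude": "74.8723",
        "latitude": "31.6340",
        "label": "City",
        "code": "PB.AMRITSAR",
        "children": []
      }
    }
  ]
}
```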
The egov-location APIs can be used by any module which needs to store the location details of the tenant.
Get the boundary details based on boundary type and hierarchy type within the tenant boundary structure.
Get the geographical boundaries by providing appropriate GeoJson.
Get the tenant list in the given latitude and longitude.
To integrate, the host of egov-location should be overwritten in the helm chart.
/boundarys/_search should be added as the search endpoint for searching boundary details based on tenant Id, Boundary Type, Hierarchy Type etc.
/geography/_search should be added as the search endpoint. This method handles all requests related to geographical boundaries by providing appropriate GeoJson and other associated data based on tenantId or lat/long etc.
/tenant/_search should be added as the search endpoint. This method tries to resolve a given lat, long to a corresponding tenant, provided there exists a mapping between the reverse geocoded city to the tenant.
The MDMS tenant boundary master file should be loaded in the MDMS service.
When any of these services reads the data from the database, it will be in encrypted form. Before responding to a request, they call the enc-client library to convert the data to plain/masked form. The data returned as part of the response should only be in plain/masked form. It should not contain any encrypted values. Detailed guidelines on how to write the necessary configurations are provided in document.
Step 2: Check in the config file to a remote location, preferably GitHub. Currently, for dev, we check the files into this folder -
Import the boundary migration service from in an IDE (preferably IntelliJ) and start the service after building it.
Copy the boundary-data JSON that needs to be migrated as an example we are taking . and adding RequestInfo inside this as shown below.
Reference - Now, properties can be added under the schema definition. In JSON Schema terms, properties is a validation keyword. When you define properties, you create an object where each property represents a key in the JSON data that’s being validated. You can also specify which properties described in the object are required.
Reference -
Follow this guide to see how to in postgreSQL.
Working Knowledge of service to add location data in master data.
egov-mdms service is running and all the required are loaded in it
Please refer to the for the location service to understand the structure of APIs and to have a visualisation of all internal APIs.
egov.services.egov_mdms.hostname
Host name for MDMS service.
egov.services.egov_mdms.searchpath
MDMS Search URL.
egov.service.egov.mdms.moduleName
MDMS module which contains the boundary master.
egov.service.egov.mdms.masterName
MDMS master file which contains the boundary details.
tenantId
The tenantId (ULB code) for which the boundary data configuration is defined.
moduleName
The name of the module where TenantBoundary master is present.
TenantBoundary.hierarchyType.name
Unique name of the hierarchy type.
TenantBoundary.hierarchyType.code
Unique code of the hierarchy type.
TenantBoundary.boundary.id
Id of boundary defined for particular hierarchy.
boundaryNum
Sequence number of boundary attribute defined for the particular hierarchy.
name
Name of the boundary like Block 1 or Zone 1 or City name.
localname
Local name of the boundary.
longitude
Longitude of the boundary.
latitude
Latitude of the boundary.
label
Label of the boundary.
code
Code of the boundary.
children
Details of its sub-boundaries.
Adding a new language to the DIGIT system. Refer to the link provided to find out how languages are added in DIGIT.
Follow the steps below to adopt the new MDMS -
Define the schema for the master that you want to promote to MDMS v2.
Ensure that the schema has a unique field (a unique field can also be composite) to enforce data integrity.
Use the following API endpoint to create a schema - /mdms-v2/schema/v1/_create
Search and verify the created schema using the following API endpoint - /mdms-v2/schema/v1/_search
Once the schema is in place, add the data using the following API endpoint - /mdms-v2/v2/_create/{schemaCode}
Verify the data by using the following API endpoint - /mdms-v2/v2/_search
Configure master data management service
The MDMS service aims to reduce the time spent by developers on writing codes to store and fetch master data (primary data needed for module functionality) which doesn’t have any business logic associated with them. Instead of writing APIs and creating tables in different services to store and retrieve data that is seldom changed, the MDMS service keeps them in a single location for all modules and provides data on demand with the help of no more than three lines of configuration.
Prior knowledge of Java/J2EE
Prior knowledge of Spring Boot
Prior knowledge of REST APIs and related concepts like path parameters, headers, JSON etc.
Prior knowledge of Git
Advanced knowledge of how to operate JSON data would be an added advantage to understanding the service
Adds master data for usage without the need to create master data APIs in every module.
Reads data directly from Git with no dependency on any database service.
egov.mdms.conf.path
The default value of folder where master data files are stored
masters.config.url
The default value of the file URL which contains master-config values
The MDMS service provides ease of access to master data for any service.
No time spent writing repetitive codes with no business logic.
To integrate, the host of egov-mdms-service should be overwritten in the helm chart
egov-mdms-service/v1/_search
should be added as the search endpoint for searching master data.
MDMS client from eGov snapshots should be added as mvn entity in pom.xml for ease of access since it provides MDMS request pojos.
Learn how to setup DIGIT master data.
The notification service can notify users through SMS and email for their actions on DIGIT, as an acknowledgement that the action has been completed successfully.
For example: actions like property creation, TL creation, etc.
Sending an SMS involves two services: the service on which the user is taking the action, and the SMS service.
Sending an email likewise involves two services: the service on which the user is taking the action, and the email service.
Before you proceed with the configuration, make sure the following pre-requisites are met -
Prior Knowledge of Spring boot.
Prior Knowledge of Kafka.
Prior Knowledge of localization service.
For a specific action, the user gets an SMS and email as an acknowledgement.
Users receive the SMS and email in the configured localisation language.
To trigger a notification for a specific action, the service has to listen to the corresponding topic. Each time a record arrives on that topic, the consumer knows the action has been taken and can trigger a notification for it.
For example, to trigger a notification for property creation, the property service's NotificationConsumer class should listen to the topic egov.pt.assessment.create.topic. Each time a record arrives on that topic, the NotificationConsumer knows a property-creation action has been taken and can trigger a notification.
When a record arrives on the topic, the service first fetches all the required data, such as user name, property ID, mobile number and tenant ID, from that record.
It then fetches the message content from the localization service and replaces the placeholders with the actual data.
It then puts the record on the SMS topic to which the SMS service is listening.
The email service also listens to the same topic as the SMS service.
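The placeholder-substitution step above can be sketched as follows. This is a simplified Python illustration only; the actual services are Java Kafka consumers, and the template text and placeholder names here are assumptions:

```python
# Simplified sketch of the notification flow's template step:
# take a localized message template and fill in data from the record.
def build_sms(template: str, record: dict) -> str:
    """Replace {PLACEHOLDER} tokens in a localized message template."""
    msg = template
    for key, value in record.items():
        msg = msg.replace("{" + key + "}", str(value))
    return msg

# Hypothetical localized template and record data fetched from the topic
localized_template = "Dear {USER}, your property {PROPERTYID} has been created."
record = {"USER": "Asha", "PROPERTYID": "PT-107-001", "MOBILE": "9999999999"}

print(build_sms(localized_template, record))
# The resulting message would then be pushed onto the SMS topic,
# which both the SMS and email services consume.
```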
NotificationConsumer
Master Data Management Service is a core service made available on the DIGIT platform. It encapsulates the functionality surrounding Master Data Management. The service creates, updates and fetches Master Data pertaining to different modules. This eliminates the overhead of embedding the storage and retrieval logic of Master Data into other modules. The functionality is exposed via REST API.
Prior Knowledge of Java/J2EE, Spring Boot, and advanced knowledge of operating JSON data would be an added advantage to understanding the service.
The MDMS service reads data from a set of JSON files at a pre-specified location. This can either be an online location (readable JSON files hosted online) or offline (JSON files stored in local memory). The JSON files should conform to a prescribed format. The data is stored in a map, and the tenantId of the file serves as the key.
Once the data is stored in the map the same can be retrieved by making an API request to the MDM service. Filters can be applied in the request to retrieve data based on the existing fields of JSON.
The Spring Boot application needs the Lombok extension added to your IDE in order to build. Once the application is up and running, API requests can be posted to the URL.
The config JSON files to be written should follow the rules listed below:
The config files should have the .json extension.
The file should mention the tenantId, module name, and master name before defining the data.
The Master Name in the above sample will be substituted by the actual name of the master data. The array succeeding it will contain the actual data.
Example config JSON for “Billing Service”
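The example block itself is not reproduced on this page; an illustrative config following the rules above (the master name and entries are placeholders, not the actual Billing Service data) could be:

```json
{
  "tenantId": "pb",
  "moduleName": "BillingService",
  "BusinessService": [
    { "code": "PT", "businessService": "Property Tax" }
  ]
}
```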
APIs are available to create, update and fetch master data pertaining to different modules. Refer to the segment below for quick details.
BasePath: /mdms/v1/[API endpoint]

POST /_create - Creates or updates master data on GitHub as JSON files.
Request: MDMSCreateRequest - RequestInfo + MasterDetail (details of the master data to be created or updated on GitHub).
Response: MdmsCreateResponse - ResponseInfo.

POST /_search - Fetches a list of masters for a specified module and tenantId.
Request: MdmsCriteriaReq - RequestInfo + MdmsCriteria (details of the module and master which need to be searched using MDMS).
Response: MdmsResponse - ResponseInfo + Mdms.
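As an illustration, a _search request body could look like the following (tenant, module, and master names are placeholders; the field structures are described below):

```json
{
  "RequestInfo": {
    "apiId": "mdms-client",
    "msgId": "search billing masters",
    "authToken": "<auth-token>"
  },
  "MdmsCriteria": {
    "tenantId": "pb",
    "moduleDetails": [
      {
        "moduleName": "BillingService",
        "masterDetails": [ { "name": "BusinessService" } ]
      }
    ]
  }
}
```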
Common Request/Response/Error Structures:
RequestInfo should be used to carry meta information about the requests to the server as described in the fields below. All DIGIT APIs use RequestInfo as part of the request body to carry this meta information. Some of this information is returned from the server as part of the ResponseInfo in the response body to ensure correlation.
ResponseInfo should be used to carry metadata information about the response from the server. apiId, ver, and msgId in ResponseInfo should always correspond to the same values in the respective request's RequestInfo.
ErrorRes
All DIGIT APIs will return ErrorRes in case of failure which will carry ResponseInfo as metadata and Error object as an actual representation of the error. When the request processing status in the ResponseInfo is ‘FAILED’ the HTTP status code 400 is returned.
Configuring master data for a new module requires creating a new module in the master config file and adding master data. For better organisation, create all the master data files belonging to the module in the same folder. Keeping them in the same folder is not mandatory; the grouping is based on the moduleName in the master data file.
Before you proceed with the configuration, make sure the following pre-requisites are met -
User with permission to edit the git repository where MDMS data is configured.
This data can be used to validate incoming data.
After adding the new module data, the MDMS service needs to be restarted to read the newly added data.
The master config file is structured as below. Each key in the master config is a module and each key in the module is a master.
The new module can be added below the existing modules in the master config file.
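The structure block itself is not reproduced here; a hypothetical sketch of the master config layout (the module, master, and inner key names are illustrative assumptions) could be:

```json
{
  "BillingService": {
    "BusinessService": {
      "isStateLevel": true,
      "uniqueKeys": ["$.code"]
    }
  },
  "NewModule": {
    "NewMaster": {
      "isStateLevel": false,
      "uniqueKeys": ["$.code"]
    }
  }
}
```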
For creating a new master in MDMS, create the JSON file with the master data and configure the newly created master in the master config file.
Before proceeding with the configuration, make sure the following pre-requisites are met -
User with permission to edit the git repository where MDMS data is configured.
After adding the new master, the MDMS service needs to be restarted to read the newly added data.
The new JSON file needs to contain 3 keys as shown in the below code snippet. The new master can be created either State-wise or ULB-wise. Tenant ID and config in the master config file determine this.
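The snippet referred to above is not included on this page; a minimal illustrative master data file with the three keys (tenant, module, and master names are placeholders) could be:

```json
{
  "tenantId": "pb",
  "moduleName": "NewModule",
  "NewMaster": [
    { "code": "NM1", "name": "First entry", "active": true }
  ]
}
```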
The master config file is structured as below. Each key in the master config is a module and each key in the module is a master.
Each master contains the following data and the keys are self-explanatory
MDMS stands for Master Data Management Service. MDMS is one of the applications in the eGov DIGIT core group of services. This service aims to reduce the time spent by developers on writing code to store and fetch master data (primary data needed for module functionality) which doesn't have any business logic associated with it.
Before you proceed with the configuration, make sure the following pre-requisites are met -
Prior knowledge of Java/J2EE.
Prior knowledge of Spring Boot.
Prior knowledge of REST APIs and related concepts like path parameters, headers, JSON, etc.
Prior knowledge of Git.
Advanced knowledge of how to operate JSON data would be an added advantage to understanding the service.
The MDMS service reads the data from a set of JSON files from a pre-specified location.
It can either be an online location (readable JSON files from online) or offline (JSON files stored in local memory).
The JSON files are in a prescribed format, and the data is stored in a map: the tenantID of the file serves as the key and a map of master data details serves as the value.
Once the data is stored in the map the same can be retrieved by making an API request to the MDMS service. Filters can be applied in the request to retrieve data based on the existing fields of JSON.
For deploying the changes in MDMS data, the service needs to be restarted.
The changes in MDMS data could be adding new data, updating existing data, or deleting it.
The config JSON files to be written should follow the rules listed below:
The config files should have the .json extension.
The file should mention the tenantId, module name, and master name before defining the data.
Example config JSON for “Billing Service”
MDMS supports the configuration of data at different levels. When a state is enabled, there can be data that is common to all the ULBs of the state as well as data specific to each ULB. The data can further be configured at each module level as state-specific or ULB-specific.
Before you proceed with the configuration, make sure the following pre-requisites are met -
Prior Knowledge of Java/J2EE.
Prior Knowledge of Spring Boot.
Prior Knowledge of REST APIs and related concepts like path parameters, headers, JSON, etc.
Prior knowledge of Git.
Advanced knowledge of operating JSON data would be an added advantage to understanding the service.
State Level Masters are maintained in a common folder.
ULB Level Masters are maintained in separate folders named after the ULB.
Module-specific State Level Masters are maintained in a folder named after the specific module, placed outside the common folder.
For deploying the changes (adding new data, updating existing data, or deleting it) in MDMS, the MDMS service needs to be restarted.
The common master data across all ULBs and modules like department, designation, etc are placed under the common-masters folder which is under the tenant folder of the MDMS repository.
The master data that is common across all ULBs but module-specific is placed in a folder named after each module. These folders are placed directly under the tenant folder.
Module data that are specific to each ULB like boundary data, interest, penalty, etc are configured at the ULB level. There will be a folder per ULB under the tenant folder and all the ULB’s module-specific data are placed under this folder.
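Putting the three levels together, the repository layout could look like the following sketch (the tenant, module, and ULB folder names are taken from the examples on this page; the repository root is a placeholder):

```
<mdms-repo>/
└── pb/                      # state tenant folder
    ├── common-masters/      # data common to all ULBs and modules
    ├── TradeLicense/        # state-level, module-specific data
    └── amritsar/            # ULB folder
        └── TradeLicense/    # ULB-specific module data
```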
Tenant represents a body in a system. In the municipal system, a state and its ULBs (Urban local bodies) are tenants. ULB represents a city or a town in a state. Tenant configuration is done in MDMS.
Before proceeding with the configuration, make sure the following pre-requisites are met -
Knowledge of JSON and how to write JSON files is required.
Knowledge of MDMS is required.
User with permission to edit the git repository where MDMS data is configured.
On the login page, city name selection is required. Tenants added in MDMS show up in the city drop-down of the login page.
In reports or on the employee inbox page, the ULB-related details are displayed from the ULB data fetched from MDMS.
Modules, i.e. TL, PT, MCS, can be enabled based on the requirements of the tenant.
After adding the new tenant, the MDMS service needs to be restarted to read the newly added data.
A tenant is added in tenant.json. In MDMS, the file tenant.json under the tenant folder holds the details of the state and the ULBs to be added in that state.
Localization should be pushed for ULB grade and ULB name. The format is given below.
Localization for ULB Grade
Localization for ULB Name
Format of the localization code for the tenant name: <MDMS_State_Tenant_Folder_Name>_<Tenants_File_Name>_<Tenant_Code> (replace dots with underscores).
Boundary data should be added for the new tenant.
In case the data does not have unique identifiers (e.g. complex masters), consider adding a redundant field which can serve as the unique identifier.
Deploy the latest version of the MDMS service.
Note: The deployment video gives an idea of how to deploy any DIGIT service. The latest builds for each service can be found in our latest release.
Add the conf path for the file location.
Add the JSON path of the master config.
Note: See the environment configuration for the values of the conf path and master config.
egov-mdms sample data - [Download this and refer to its path as the conf path value]
master-config.json - [Refer to this path as the master config value]
Refer to the section above on creating a new master.
ex: <mdms-repo>/pb/common-masters/ - here "pb" is the tenant folder name.
ex: <mdms-repo>/pb/TradeLicense/ - here "pb" is the tenant folder name and "TradeLicense" is the module name.
ex: <mdms-repo>/pb/amritsar/TradeLicense/ - here "amritsar" is the ULB folder name and "TradeLicense" is the module name. All the data specific to this module for the ULB is configured inside this folder.
To enable tenants, the above data should be pushed to the tenant.json file. Here "ULB Grade" and City "Code" are important fields. ULB Grade can have a set of allowed values that determine the ULB type (e.g. Municipality (municipal council, municipal board, municipal committee), Nagar Parishad, etc). City "Code" has to be unique to each tenant. This city-specific code is used in all transactions. It is not permissible to change the code; if changed, the data of previous transactions is lost.
"logoId": "", Here the last section of the path should be "/<tenantId>/logo.png". If we use anything else, the logo will not be displayed on the UI. <tenantId> is the tenant code ie “uk.citya”.
tenantId: Unique id for a tenant (Mandatory, String)
filePath: File path on git where master data is to be created or updated (Mandatory, String)
masterName: Master data name to be created or updated (Mandatory, String)
masterData: Content to be written on to the config file (Mandatory, Object)
tenantId: Unique id for a tenant (Mandatory, String)
moduleDetails: Module for which master data is required (Mandatory, Array)
mdms: Array of modules (Mandatory, Array)
apiId: Unique API ID (Mandatory, String)
ver: API version; for HTTP-based requests this will be the same as used in the path (Mandatory, String)
ts: Time in epoch format: int64 (Mandatory, Long)
action: API action to be performed, like _create, _update, _search (denoting POST, PUT, GET) or _oauth etc. (Mandatory, String)
DID: Device ID from which the API is called (Optional, String)
Key: API key (provided to the caller in case of server-to-server communication) (Optional, String)
msgId: Unique request message id from the caller (Mandatory, String)
requestorId: UserId of the calling user (Optional, String)
authToken: Session/JWT/SAML/OAuth token; the usual value that would go into an HTTP bearer token (Mandatory, String)
apiId: Unique API ID (Mandatory, String)
ver: API version (Mandatory, String)
ts: Response time in epoch format: int64 (Mandatory, Long)
resMsgId: Unique response message id (UUID); will usually be the correlation id from the server (Optional, String)
msgId: Message id of the request (Optional, String)
status: Status of request processing; Enum: SUCCESSFUL (HTTP 201) or FAILED (HTTP 400) (Mandatory, String)
code: The error code will be a module-specific error label/code to identify the error. All modules should also publish the error codes with their specific localised values in the localisation service to ensure clients can print locale-specific error messages. An example of an error code would be UserNotFound to indicate User Not Found by the User/Authentication service. All services must declare their possible error codes with a brief description in the error response section of their API path. (Mandatory, String)
message: English locale message of the error code. Clients should make a separate call to get the other locale descriptions if configured with the service. Clients may choose to cache these locale-specific messages to enhance performance with a reasonable TTL (may be defined by the localisation service based on the tenant + module combination). (Mandatory, String)
description: Optional long description of the error to help clients take remedial action. This will not be available as part of the localisation service. (Optional, String)
params: Some error messages may carry replaceable fields (say $1, $2) to provide more context to the message. E.g. format-related errors may want to indicate the actual field for which the format is invalid. Clients should use the values in the params array to replace those fields. (Optional, Array)
tenantId: Serves as the key.
moduleName: Name of the module to which the master data belongs.
MasterName: Will be substituted by the actual name of the master data; the array succeeding it contains the actual data.
The Persister Service persists data in the database in an async manner, providing very low latency. The queries used to insert/update data in the database are written in a yaml file. The values to be inserted are extracted from the json using jsonPaths defined in the same yaml configuration.
Below is a sample configuration which inserts data in a couple of tables.
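The sample block itself is not reproduced on this page. A hypothetical sketch consistent with the topic and table names mentioned in the surrounding text could look like the following (the column names and JSON paths are assumptions for illustration, not the actual schema):

```yaml
serviceMaps:
  serviceName: PGR
  mappings:
    - version: 1.0
      description: Persists the complaint and its address
      fromTopic: save-pgr-request
      isTransaction: true
      queryMaps:
        - query: INSERT INTO eg_pgr_service_v2(id, tenantid, servicecode, createdtime) VALUES (?, ?, ?, ?);
          basePath: $.service
          jsonMaps:
            - jsonPath: $.service.id
            - jsonPath: $.service.tenantId
            - jsonPath: $.service.serviceCode
            - jsonPath: $.service.auditDetails.createdTime
        - query: INSERT INTO eg_pgr_address_v2(id, tenantid, parentid) VALUES (?, ?, ?);
          basePath: $.service.address
          jsonMaps:
            - jsonPath: $.service.address.id
            - jsonPath: $.service.address.tenantId
            - jsonPath: $.service.id
```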
The above configuration is used to insert data published on the kafka topic save-pgr-request into the tables eg_pgr_service_v2 and eg_pgr_address_v2. Similarly, a configuration can be written to update data. Following is a sample configuration:
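A hypothetical sketch of such an update mapping, as an entry under the same mappings list (column names and JSON paths are again illustrative assumptions):

```yaml
- version: 1.0
  description: Updates the complaint status
  fromTopic: update-pgr-request
  isTransaction: true
  queryMaps:
    - query: UPDATE eg_pgr_service_v2 SET applicationstatus = ?, lastmodifiedtime = ? WHERE id = ?;
      basePath: $.service
      jsonMaps:
        - jsonPath: $.service.applicationStatus
        - jsonPath: $.service.auditDetails.lastModifiedTime
        - jsonPath: $.service.id
```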
The above configuration is used to update the data in the tables. Similarly, an upsert operation can be done using the ON CONFLICT clause in psql.
The table below describes each field variable in the configuration.
serviceName: The module name to which the configuration belongs
version: Version of the config
description: Detailed description of the operations performed by the config
fromTopic: Kafka topic from which data has to be persisted in the DB
isTransaction: Flag to enable/disable performing operations in a transactional fashion
query: Prepared statement to insert/update data in the DB
basePath: JsonPath of the object that has to be inserted/updated
jsonPath: JsonPath of the fields that have to be inserted in the table columns
type: Type of the field
dbType: DB type of the column in which the field is to be inserted
To use the generic GET/POST SMS gateway, first configure the service application property sms.provider.class=Generic.
This sets the generic interface to be used. This is the default implementation, which can work with most SMS providers. The generic implementation supports the following:
GET or POST-based API
Supports query params, form data, JSON body
To configure the URL of the SMS provider use sms.provider.url property.
To configure the HTTP method, set the sms.provider.requestType property to either GET or POST.
To configure form data or json api set sms.provider.contentType=application/x-www-form-urlencoded or sms.provider.contentType=application/json respectively.
To configure which data needs to be sent to the API set the below property:
sms.config.map={'uname':'$username', 'pwd': '$password', 'sid':'$senderid', 'mobileno':'$mobileno', 'content':'$message', 'smsservicetype':'unicodemsg', 'myParam': '$extraParam' , 'messageType': '$mtype'}
sms.category.map={'mtype': {'*': 'abc', 'OTP': 'def'}}
sms.extra.config.map={'extraParam': 'abc'}
sms.extra.config.map is not used currently and is only kept for custom implementations which require data that does not need to be directly passed to the REST API call. sms.config.map is a map of parameters and their values.
Special variables that are mapped -
$username maps to sms.provider.username
$password maps to sms.provider.password
$senderid maps to sms.senderid
$mobileno maps to mobileNumber from kafka fetched message
$message maps to the message from the kafka fetched message
$<name>: any variable not in the above list is first looked up in sms.category.map, then in application.properties, and then in the environment variables (fully upper-cased, with -, space, or . replaced by _).
So if you use sms.config.map={'u':'$username', 'p':'password'}, the API call will be made as <url>?u=<username value>&p=password.
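This substitution behaviour can be sketched as follows (an assumed illustration in Python, not the service's actual Java code; the variable values are placeholders):

```python
# '$'-prefixed values in sms.config.map resolve to known variables;
# everything else is passed through to the API call literally.
known = {
    "$username": "svc-user",      # sms.provider.username
    "$password": "secret",        # sms.provider.password
    "$senderid": "EGOVS",         # sms.senderid
    "$mobileno": "9999999999",    # from the kafka message
    "$message": "Hello",          # from the kafka message
}

def resolve(config_map: dict) -> dict:
    """Map each configured parameter to its resolved value."""
    return {k: (known.get(v, v) if v.startswith("$") else v)
            for k, v in config_map.items()}

params = resolve({"u": "$username", "p": "password"})
# 'u' resolves to the username variable; 'p' stays the literal "password"
```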
Message success delivery can be controlled using the below properties
sms.verify.response (default: false)
sms.print.response (default: false)
sms.verify.responseContains
sms.success.codes (default: 200,201,202)
sms.error.codes
If you want to verify some text in the API call response set sms.verify.response=true and sms.verify.responseContains to the text that should be contained in the response.
It is possible to whitelist or blacklist phone numbers to which the messages should be sent. This can be controlled using the below properties:
sms.blacklist.numbers
sms.whitelist.numbers
Both of them can be given a separate list of numbers or number patterns. To use patterns use X for any digit match and * for any number of digits match.
sms.blacklist.numbers=5*,9999999999,88888888XX will blacklist any phone number starting with 5, the exact number 9999999999, and all numbers from 8888888800 to 8888888899.
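The pattern semantics described above can be sketched like this (an assumed re-implementation for illustration, not the service's actual code):

```python
import re

def matches(pattern: str, number: str) -> bool:
    """'X' matches any single digit; '*' matches any number of digits."""
    regex = re.escape(pattern).replace("X", r"\d").replace(r"\*", r"\d*")
    return re.fullmatch(regex, number) is not None

# e.g. matches("5*", "512345") and matches("88888888XX", "8888888807")
# are True, while matches("88888888XX", "8888888007") is False.
```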
A few third parties require a prefix of 0, 91, or +91 with the mobile number. In such cases, use sms.mobile.prefix to automatically add the prefix to the mobile number coming in from the message queue.
The eGov Payment Gateway acts as a liaison between eGov apps and external payment gateways, facilitating payments, reconciliation of payments, and lookup of transaction status.
Before you proceed with the documentation, make sure the following pre-requisites are met -
Java 17
Kafka server is up and running
PSQL server is running and the database is created to store transaction data.
Create or initiate a transaction, to make a payment against a bill.
Make payment for multiple bill details [multi module] for a single consumer code at once.
When a transaction is initiated via a call to the transaction/_create API, various validations are carried out to ensure the sanctity of the request.
The response includes a generated transaction id and a redirect URL to the payment gateway itself.
Various validations are carried out to verify the authenticity of the request and the status is updated accordingly. If the transaction is successful, a receipt is generated for the same.
Reconciliation is carried out by two jobs scheduled via a Quartz clustered scheduler.
The early Reconciliation job is set to run every 15 minutes [configurable via app properties] and is aimed at reconciling transactions which were created 15 - 30 minutes ago and are in a PENDING state.
The daily reconciliation job is set to run once per day and is aimed at reconciling all transactions that are in a PENDING state, except for those created within the last 30 minutes.
Axis, Phonepe and Paytm payment gateways are implemented.
The following properties in the application.properties file in egov-pg-service have to be added and set to the default values after integrating with the new payment gateway. The table below shows the properties for the AXIS bank payment gateway; the same relevant properties need to be added for other payment gateways.
axis.active: Boolean flag to set the payment gateway active/inactive
axis.currency: Currency representation for the merchant, default (INR)
axis.merchant.id: Payment merchant id
axis.merchant.secret.key: Secret key for the payment merchant
axis.merchant.user: User name to access the payment merchant for transactions
axis.merchant.pwd: Password of the user to access the payment merchant
axis.merchant.access.code: Access code
axis.merchant.vpc.command.pay: Pay command
axis.merchant.vpc.command.status: Command status
axis.url.debit: URL for making the payment
axis.url.status: URL to get the status of the transaction
Deploy the latest version of egov-pg-service.
Add pg service persister yaml path in persister configuration.
The egov-pg-service acts as the communication/contact point between eGov apps and external payment gateways.
Record every transaction against a bill.
Record of payment for multiple bill details for a single consumer code at once.
To integrate, the host of egov-pg-service should be overwritten in helm chart
/pg-service/transaction/v1/_create should be added in the module to initiate a new payment transaction, on successful validation
/pg-service/transaction/v1/_update should be added as the update endpoint to update an existing payment transaction. This endpoint is invoked only by payment gateways to update the status of payments. It verifies the authenticity of the request with the payment gateway and forwards all query params received from the payment gateway.
/pg-service/transaction/v1/_search should be added as the search endpoint for retrieving the current status of a payment in our system.
(Note: All the APIs are in the same postman collection therefore the same link is added in each row)
OTP Service is a core service that is available on the DIGIT platform. The service is used to authenticate the user in the platform. The functionality is exposed via REST API.
Before you proceed with the documentation, make sure the following pre-requisites are met -
Java 17
egov-otp is called internally by the user-otp service, which fetches the mobile number and feeds it to egov-otp to generate the n-digit OTP.
The below properties define the OTP configurations -
a) egov.otp.length: Number of digits in the OTP.
b) egov.otp.ttl: Controls the validity time frame of the OTP. The default value is 900 seconds. Another OTP generated within this time frame is also allowed.
c) egov.otp.encrypt: Controls if the OTP is encrypted and stored in the table.
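Putting the three properties together, an application.properties fragment could look like this (apart from the 900-second TTL default mentioned above, the values shown are illustrative assumptions):

```properties
egov.otp.length=6
egov.otp.ttl=900
egov.otp.encrypt=true
```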
Deploy the latest version of egov-otp service.
Add role-action mapping for APIs.
The egov-otp service is used to authenticate the user in the platform.
Can perform user authentication without impacting the other module.
In the future, this application can be used in a standalone manner in any other platforms that require a user authentication system.
To integrate, the host of egov-otp module should be overwritten in the helm chart.
/otp/v1/_create should be added as the create endpoint. The create OTP configuration API is an internal call from the v1/_send endpoint. This endpoint is present in the user-otp service and removes the need for explicit calls.
/otp/v1/_validate should be added as the validate endpoint. This OTP configuration endpoint validates the OTP with respect to the mobile number.
/otp/v1/_search should be added as the search endpoint. This API searches the mobile number and OTP using the uuid, mapping the uuid to the OTP reference number.
API Swagger Documentation
BasePath: /egov-otp/v1
Egov-otp service APIs - contains create, validate and search endpoints
a) POST /otp/v1/_create - creates the OTP configuration. This API is an internal call from the v1/_send endpoint; the endpoint present in the user-otp service removes the need for explicit calls.
b) POST /otp/v1/_validate - validates the OTP with respect to the mobile number.
c) POST /otp/v1/_search - searches the mobile number and OTP using the uuid; the uuid maps to the OTP reference number.
Configure bulk generation of PDF documents
The objective of the PDF generation service is to bulk-generate PDFs as per requirement.
Before proceeding with the documentation, ensure the following prerequisites are met:
Ensure the Kafka server is operational
Verify that the PostgreSQL (PSQL) server is running, and a database is created to store filestore IDs and job IDs of generated PDFs
Provide a common framework to generate PDFs
Provide flexibility to customise the PDF as per the requirement
Provide functionality to add an image or QR code in a PDF
Provide functionality to generate PDFs in bulk
Provide functionality to specify the maximum number of records to be written in one PDF
MAX_NUMBER_PAGES: Maximum number of records to be written in one PDF
DATE_TIMEZONE: Date timezone which will be used to convert the epoch timestamp into a date (DD/MM/YYYY)
DEFAULT_LOCALISATION_LOCALE: Default value of the localisation locale
DEFAULT_LOCALISATION_TENANT: Default value of the localisation tenant
DATA_CONFIG_URLS: File paths/URLs of the data config
FORMAT_CONFIG_URLS: File paths/URLs of the format config
Create data config and format config for a PDF according to product requirements.
Add data config and format config files in PDF configuration.
Add the file path of data and format config in the environment yaml file.
Deploy the latest version of pdf-service in a particular environment.
The PDF configuration can be used by any module which needs to show particular information in PDF format that can be printed/downloaded by the user.
Functionality to generate PDFs in bulk.
Avoid regeneration.
Support QR codes and Images.
Functionality to specify the maximum number of records to be written in one PDF.
Uploading generated PDF to filestore and return filestore id for easy access.
To download and print the required PDF, the _create API has to be called with the required key (for integration with the UI, please refer to the links in Reference Docs).
Steps to migrate MDMS data to enable use of workbench UI v1.0
Follow the steps below to migrate the MDMS data to enable the use of Workbench UI v1.0.
Follow the steps below to generate the schema for Workbench UI v1.0:
Clone the MDMS Repository: Start by cloning the MDMS repository on your local machine.
Configure application.properties: open the application.properties file in the workbench utility and configure it as follows:
Add the hostname of the environment.
Add the MDMS cloned folder path in egov.mdms.conf.path.
Add master-config.json in masters.config.url.
Specify the output folder path for the created schema in master.schema.files.dir.
Port-forward MDMSv2 Service: Port-forward the MDMSv2 service to port 8094.
Run the Curl Command:
This command generates the schema and saves it in the path specified by master.schema.files.dir.
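For reference, the application.properties entries described in the steps above might look like this (the host and paths are placeholders, and the hostname property key is an assumption):

```properties
# hostname of the environment (property key is an assumption)
egov.mdms.host=https://<environment-host>
# path of the cloned MDMS repository
egov.mdms.conf.path=/path/to/cloned/mdms-repo
# location of master-config.json
masters.config.url=file:///path/to/master-config.json
# output folder for the generated schemas
master.schema.files.dir=/path/to/output/schemas
```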
After generating the schema, you may need to update it with additional attributes:
Add the x-unique attribute: this defines the unique fields in the schema.
Add the x-ref-schema attribute: use this attribute if a field within the MDMS data needs to refer to another schema.
Set a default value for a field: use the default keyword to set default values.
To migrate the schema, use the following curl command:
To migrate data for a specific master/module name, use the following curl command:
Here is an example of a schema:
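The example itself is not reproduced on this page; a minimal sketch using the attributes discussed above (the field names and the x-ref-schema shape are illustrative assumptions, not the authoritative schema) could be:

```json
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "type": "object",
  "properties": {
    "code": { "type": "string" },
    "name": { "type": "string" },
    "active": { "type": "boolean", "default": true },
    "departmentCode": { "type": "string" }
  },
  "required": ["code", "name"],
  "x-unique": ["code"],
  "x-ref-schema": [
    { "fieldPath": "departmentCode", "schemaCode": "common-masters.Department" }
  ]
}
```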
Service request allows users to define a service and then create services against service definitions. A service definition can be a survey or a checklist which users might want to define, and a service against the service definition can be a response to the survey or a filled-out checklist.
Prior knowledge of Java/J2EE
Prior knowledge of SpringBoot
Prior knowledge of PostgreSQL
Prior knowledge of REST APIs and related concepts like path parameters, headers, JSON etc.
Users can -
Create and search service definitions.
Create and search services.
/service-request/service/definition/v1/_create - Takes RequestInfo and ServiceDefinition in the request body. ServiceDefinition has all the parameters related to the service definition being created.
/service-request/service/definition/v1/_search - Allows searching of existing service definitions. Takes RequestInfo, ServiceDefinitionCriteria and Pagination objects in the request body.
/service-request/service/v1/_create - Takes RequestInfo and Service in the request body. Service has all the parameters related to the service being created against a particular ServiceDefinition.
/service-request/service/v1/_search - Allows searching of existing services created against service definitions. Takes RequestInfo, ServiceCriteria and Pagination objects in the request body.
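For instance, a create call for a survey-style service definition might carry a body like the one below (the attribute field names and values are illustrative assumptions, not the authoritative schema):

```json
{
  "RequestInfo": { "msgId": "create-service-definition", "authToken": "<auth-token>" },
  "ServiceDefinition": {
    "tenantId": "pb",
    "code": "CITIZEN-SURVEY-1",
    "attributes": [
      { "code": "Q1", "dataType": "String", "required": true },
      { "code": "Q2", "dataType": "Boolean", "required": false }
    ]
  }
}
```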
The Persister service provides a framework to persist data in a transactional fashion with low latency, based on a config file. It removes repetitive and time-consuming persistence code from other services.
Before you proceed with the documentation, make sure the following pre-requisites are met -
Prior Knowledge of Java/J2EE.
Prior Knowledge of SpringBoot.
Prior knowledge of PostgreSQL.
Prior knowledge of JSONQuery in Postgres (similar to PostgreSQL with a few aggregate functions).
Kafka server is up and running.
Persist data asynchronously using kafka providing very low latency
Data is persisted in batch
All operations are transactional
Values in prepared statement placeholder are fetched using JsonPath
Easy reference to the parent object using '{x}' in jsonPath, which substitutes the variable x in the JsonPath with the value of x for the child object (explained in detail below).
Supported data types: ARRAY("ARRAY"), STRING("STRING"), INT("INT"), DOUBLE("DOUBLE"), FLOAT("FLOAT"), DATE("DATE"), LONG("LONG"), BOOLEAN("BOOLEAN"), JSONB("JSONB")
The persister uses a configuration file to persist data. The key variables are described below:
serviceName: Name of the service to which this configuration belongs.
description: Description of the service.
version: the version of the configuration.
fromTopic: The kafka topic from which data is fetched
queryMaps: Contains the list of queries to be executed for the given data.
query: The query to be executed in the form of a prepared statement.
basePath: base of the json object from which data is extracted
jsonMaps: Contains the list of jsonPaths for the values in placeholders.
jsonPath: The jsonPath to fetch the variable value.
To persist large quantities of data, the bulk setting in the persister can be used. It is mainly used when migrating data from one system to another. The bulk persister has the following two settings:
persister.bulk.enabled (default: false): Switch to turn the bulk kafka consumer on or off
persister.batch.size (default: 100): The batch size for bulk update
Any kafka topic containing data which has to be bulk persisted should have '-batch' appended at the end of topic name example: save-pt-assessment-batch.
Every incoming request [via kafka] is expected to have a version attribute set, [jsonpath, $.RequestInfo.ver] if versioning is to be applied.
If the request version is absent or invalid [not semver] in the incoming request, then a default version defined by the following property in application.properties is used: default.version=1.0.0
The request version is then matched against the loaded persister configs and applied appropriately.
Write the configuration as per the requirement. Refer to the example given earlier.
In the environment file, mention the file path of the configuration under the variable egov.persist.yml.repo.path. While mentioning the file path, the prefix file:///work-dir/ has to be added, for example: egov.persist.yml.repo.path = file:///work-dir/configs/egov-persister/abc-persister.yml. If there are multiple files, separate them with a comma (,).
Deploy the latest version of the egov-persister service and push data to the kafka topic specified in the config to persist it in the DB.
The persister configuration can be used by any module to store records in a particular table of the database.
Insert/Update Incoming Kafka messages to Database.
Add/modify the kafka message before putting it into the database.
Persist data asynchronously.
Data is persisted in batch.
Write configuration as per your requirement. Structure of the config file is explained above in the same document.
Check in the config file to a remote location, preferably GitHub.
Provide the absolute path of the checked-in file to DevOps, to add it to the file-read path of egov-persister. The file will be added to egov-persister's environment manifest file for it to be read at start-up of the application.
Run the egov-persister app and push data on kafka topic specified in config to persist it in DB
The objective of this service is to create a common point to manage all the SMS notifications being sent out of the platform. Notification SMS service consumes SMS from the Kafka notification topic and processes them to send it to a third-party service. Modules like PT, TL, PGR etc make use of this service to send messages through the Kafka Queue.
Before you proceed with the documentation, make sure the following pre-requisites are met -
Prior knowledge of Java/J2EE
Prior knowledge of SpringBoot
Prior knowledge of Third party API integration
Prior knowledge of REST APIs and related concepts like path parameters, headers, JSON etc
Prior knowledge of Kafka and related concepts like Producer, Consumer, Topic etc.
Provide a common platform to send an SMS notification to the user.
Support localised SMS.
Easily configurable with a different SMS service provider.
The implementation of the consumer is present in the directory src/main/java/org/egov/web/notification/sms/service/impl.
These are the current providers available:
Generic
Console
MSDG
The implementation to be used can be configured by setting sms.provider.class.
The Console implementation just prints the mobile number and message to the console.
Generic is the default implementation, which can work with most SMS providers. The generic implementation supports the below:
GET- or POST-based APIs
Query params, form data, JSON body
To configure the URL of the SMS provider, use the sms.provider.url property. To configure the HTTP method, set the sms.provider.requestType property to either GET or POST.
To configure a form-data or JSON API, set sms.provider.contentType=application/x-www-form-urlencoded or sms.provider.contentType=application/json respectively.
To configure which data needs to be sent to the API, the below properties can be configured:
sms.config.map={'uname':'$username', 'pwd': '$password', 'sid':'$senderid', 'mobileno':'$mobileno', 'content':'$message', 'smsservicetype':'unicodemsg', 'myParam': '$extraParam' , 'messageType': '$mtype'}
sms.category.map={'mtype': {'*': 'abc', 'OTP': 'def'}}
sms.extra.config.map={'extraParam': 'abc'}
sms.extra.config.map is not used currently and is only kept for custom implementations that require data which doesn't need to be directly passed to the REST API call.
sms.config.map is a map of parameters and their values.
Special variables that are mapped:
$username maps to sms.provider.username
$password maps to sms.provider.password
$senderid maps to sms.senderid
$mobileno maps to mobileNumber from the Kafka-fetched message
$message maps to the message from the Kafka-fetched message
$<name> - any variable that is not from the above list is first checked in sms.category.map, then in application.properties, and then in the environment variables (in full upper case, with _ replacing -, space or .)
So if you use sms.config.map={'u':'$username', 'p':'password'}, then the API call will be passed <url>?u=<$username>&p=password
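The substitution described above can be illustrated with a small Python sketch (only the property and variable names come from this document; the resolution code itself is an assumption for illustration, not the service's implementation):

```python
# Properties as they would appear in application.properties (sample values)
properties = {
    "sms.provider.username": "egovsms",
    "sms.provider.password": "abc123",
    "sms.senderid": "EGOV",
}

def resolve(config_map, mobile_number, message):
    """Resolve $-variables in sms.config.map; literals pass through unchanged."""
    special = {
        "$username": properties["sms.provider.username"],
        "$password": properties["sms.provider.password"],
        "$senderid": properties["sms.senderid"],
        "$mobileno": mobile_number,   # from the Kafka-fetched message
        "$message": message,          # from the Kafka-fetched message
    }
    return {k: special.get(v, v) for k, v in config_map.items()}

# 'u' resolves to the username property; 'p' has no $-prefix, so it is sent literally
params = resolve({"u": "$username", "p": "password"}, "9999999999", "hi")
assert params == {"u": "egovsms", "p": "password"}
```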
Message Success or Failure
Message success delivery can be controlled using the below properties:
sms.verify.response (default: false)
sms.print.response (default: false)
sms.verify.responseContains
sms.success.codes (default: 200,201,202)
sms.error.codes
If you want to verify some text in the API call response, set sms.verify.response=true and sms.verify.responseContains to the text that should be contained in the response.
Blacklisting or Whitelisting numbers
It is possible to whitelist or blacklist phone numbers to which the messages should be sent. This can be controlled using the below properties:
sms.blacklist.numbers
sms.whitelist.numbers
Both of them can be given a comma-separated list of numbers or number patterns. To use patterns, use X for any digit match and * for any number of digits match.
sms.blacklist.numbers=5*,9999999999,88888888XX will blacklist any phone number starting with 5, the exact number 9999999999, and all numbers from 8888888800 to 8888888899.
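The pattern semantics above (X for one digit, * for any number of digits) are equivalent to a simple regex translation; the following Python sketch is illustrative only and is not the service's implementation:

```python
import re

def pattern_to_regex(pattern):
    """X matches exactly one digit; * matches any number of digits."""
    return re.compile("^" + pattern.replace("*", r"\d*").replace("X", r"\d") + "$")

def is_blacklisted(number, blacklist):
    # blacklist is a comma-separated list of numbers or patterns
    return any(pattern_to_regex(p).match(number) for p in blacklist.split(","))

bl = "5*,9999999999,88888888XX"
assert is_blacklisted("5012345678", bl)      # starts with 5
assert is_blacklisted("9999999999", bl)      # exact match
assert is_blacklisted("8888888842", bl)      # in 8888888800..8888888899
assert not is_blacklisted("7000000000", bl)  # matches no pattern
```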
Prefixing
A few third parties require a prefix of 0, 91 or +91 with the mobile number. In such cases, use sms.mobile.prefix to automatically add the prefix to the mobile number coming into the message queue.
Error Handling
There are different topics to which the service will send messages. Below is a list of the same:
kafka.topics.backup.sms
kafka.topics.expiry.sms=egov.core.sms.expiry
kafka.topics.error.sms=egov.core.sms.error
In the event of a failure to send an SMS, if kafka.topics.backup.sms is specified, then the message will be pushed onto that topic.
Any SMS which expires due to Kafka lags, or some other internal issue, will be passed to the topic configured in kafka.topics.expiry.sms.
If a backup topic has not been configured, then in the event of an error the message will be delivered to kafka.topics.error.sms.
The following properties in the application.properties file of the notification SMS service are configurable.
egov.core.notification.sms - The topic name to which the notification SMS consumer subscribes. Any module wanting to integrate with this consumer should post data on this topic only.
sms.provider.class (default: Generic) - This property decides which SMS provider is to be used by the service to send messages. Currently, Console, MSDG and Generic have been implemented.
sms.provider.contentType (default: application/x-www-form-urlencoded) - To configure a form-data or JSON API, set sms.provider.contentType=application/x-www-form-urlencoded or sms.provider.contentType=application/json respectively.
sms.provider.requestType (default: POST) - Property to configure the HTTP method used to call the provider.
sms.provider.url - URL of the provider. This will be given by the SMS provider only.
sms.provider.username (example: egovsms) - Username as provided by the provider, which is passed during the API call to the provider.
sms.provider.password (example: abc123) - Password as provided by the provider, which is passed during the API call to the provider. This has to be encrypted and stored.
sms.senderid (example: EGOV) - SMS sender id provided by the provider; this will show up as the sender on the receiver's phone.
sms.config.map (example: {'uname':'$username', 'pwd': '$password', 'sid':'$senderid', 'mobileno':'$mobileno', 'content':'$message', 'smsservicetype':'unicodemsg', 'myParam': '$extraParam' , 'messageType': '$mtype'}) - Map of parameters to be passed to the API provider. This is provider-specific. $username maps to sms.provider.username, $password maps to sms.provider.password, $senderid maps to sms.senderid, $mobileno maps to mobileNumber from the Kafka-fetched message, $message maps to the message from the Kafka-fetched message; $<name>, i.e. any variable not from the above list, is first checked in sms.category.map, then in application.properties, and then in the environment variables (in full upper case, with _ replacing -, space or .).
sms.category.map (example: {'mtype': {'*': 'abc', 'OTP': 'def'}}) - Used to replace values in sms.config.map based on the message category.
sms.blacklist.numbers (example: 5*,9999999999,88888888XX) - For blacklisting, a comma-separated list of numbers or number patterns. To use patterns, use X for any digit match and * for any number of digits match.
sms.whitelist.numbers (example: 5*,9999999999,88888888XX) - For whitelisting, a comma-separated list of numbers or number patterns. To use patterns, use X for any digit match and * for any number of digits match.
sms.mobile.prefix (example: 91) - Prefix to add to the mobile number coming in the message queue.
Add the variables listed above to the particular environment file.
Deploy the latest version of egov-notification-sms service.
Notification SMS service consumes SMS from the Kafka notification topic and processes them to send it to a third-party service. Modules like PT, TL, PGR etc make use of this service to send messages through the Kafka Queue.
Provide an interface to send notification SMS on user mobile number.
Support SMS in various languages.
To integrate, create the SMS request body given in the example below. Provide the correct mobile number and message in the request body and send it to the Kafka topic egov.core.notification.sms.
The notification-sms service reads from the queue and sends the sms to the mentioned phone number using one of the SMS providers configured.
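As a sketch, the payload pushed onto the topic carries the mobile number and the message described above (the exact schema may include additional provider-specific fields in your deployment; the values below are placeholders):

```json
{
  "mobileNumber": "9999999999",
  "message": "Dear citizen, your application has been received."
}
```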
Whenever a user logs in, an authorization token and a refresh token are generated for the user. Using the auth token, the client can make REST API calls to the server to fetch data. The auth token has an expiry period. Once the authorization token expires, it cannot be used to make API calls. The client has to generate a new authorization token. This is done by authenticating the refresh token with the server, which then generates and sends a new authorization token to the client. The refresh token avoids the need for the client to log in again whenever the authorization token expires.
The refresh token also has an expiry period, and once it expires it cannot be used to generate new authorization tokens. The user has to log in again to get a new pair of authorization and refresh tokens. Generally, the duration before the expiry of the refresh token is longer than that of the authorization token. If the user logs out of the account, both the authorization token and the refresh token become invalid.
Variables to configure expiry time:
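The two properties involved are shown below, with the default values documented later in this section:

```properties
# Token validity durations (in minutes)
access.token.validity.in.minutes=10080
refresh.token.validity.in.minutes=20160
```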
Here are the articles in this section:
The URL shortening service is used to shorten long URLs. There may be a requirement when we want to avoid sending very long URLs to the user via SMS, WhatsApp etc. This service compresses the URL.
Before you proceed with the configuration, make sure the following pre-requisites are met -
Prior knowledge of Java/J2EE
Prior knowledge of SpringBoot
Prior knowledge of REST APIs and related concepts like path parameters, headers, JSON etc.
Compress long URLs
Converted short URLs contain an id, which is used by this service to identify and retrieve the longer URL.
Deploy the latest version of the URL Shortening service
POST /egov-url-shortening/shortener
Receives long URLs and converts them to shorter URLs. The shortened URL contains a link to the endpoint mentioned next. When a user clicks on a shortened URL, the user is redirected to the long URL.
GET /{id}
The shortened URL contains the path to this endpoint. The service uses the id in the URL to look up the long URL and, in response, redirects the user to it.
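A sketch of the request body for the shortener endpoint is given below (the exact schema may differ per version; the URL value is a placeholder):

```json
{
  "url": "https://example.org/citizen/application-status?tenantId=pb.amritsar&applicationNo=PB-TL-001"
}
```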
User-OTP service handles the OTP for user registration, user log in and password reset for a particular user.
Before you proceed with the documentation, make sure the following pre-requisites are met -
Java 17
The User-OTP service sends the OTP to the user on login request, on password change request and during new user registration.
Deploy the latest version of user-otp.
Make sure egov-otp is running.
Add Role-Action mapping for APIs.
User-OTP service handles the OTP for user registration, user login and password reset for a particular user.
Can perform user registration, login, and password reset.
In the future, if we want to expose the application to citizens then it can be done easily.
To integrate, the host of user-otp module should be overwritten in the helm chart.
/user-otp/v1/_send
should be added as the endpoint for sending OTP to the user via sms or email
BasePath: /user-otp/v1/[API endpoint]
Method: POST /_send
This method sends the OTP to a user via SMS or email based on the below parameters -
Following are the producer topics:
egov.core.notification.sms.otp - This topic is used to send the OTP to the user's mobile number.
org.egov.core.notification.email - This topic is used to send the OTP to the user's email id.
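A sketch of the _send request body, built from the parameters listed above (the wrapper key and RequestInfo contents are assumptions; only the four parameter names come from this document):

```json
{
  "RequestInfo": {},
  "otp": {
    "mobileNumber": "9999999999",
    "tenantId": "pb",
    "type": "login",
    "userType": "citizen"
  }
}
```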
Configure user data management services
User service is responsible for user data management and providing functionality to login and logout into the DIGIT system
Before you proceed with the configuration, make sure the following pre-requisites are met
Java 17
PostgreSQL server is running
Redis is running
Store, update and search user data
Provide Authentication
Provide login and logout functionality into the DIGIT platform
Store user data PIIs in encrypted form
Note: This is a sample JSON file containing the role-action mapping. If you don't have any of the master data set up yet, you can use this to create one, then add all these files and start making changes in your repo.
The following application properties in the user service are configurable.
User data management and functionality to log in and log out into the DIGIT system using OTP and password.
Providing the following functionalities to citizen and employee-type users
Employee:
User registration
Search user
Update user details
Forgot password
Change password
User role mapping(Single ULB to multiple roles)
Enable employees to log in to the DIGIT system using a password.
Citizen:
Create user
Update user
Search user
User registration using OTP
OTP based login
To integrate, the host of egov-user should be overwritten in the helm chart.
Use /citizen/_create
endpoint for creating users into the system. This endpoint requires the user to validate his mobile number using OTP. First, the OTP is sent to the user's mobile number and then the OTP is sent as otpReference
in the request body.
Use /v1/_search
and /_search
endpoints to search users in the system depending on various search parameters.
Use /profile/_update
for updating the user profile. The user is validated (either through OTP-based validation or password validation) when this API is called.
/users/_createnovalidate
and /users/_updatenovalidate
are endpoints to create user data into the system without any validations (no OTP or password required). They should be strictly used only for creating/updating users internally and should not be exposed outside.
Forgot password: In case the user forgets the password it can be reset by first calling /user-otp/v1/_send
which generates and sends OTP to the employee’s mobile number. The password is then updated using this OTP by calling the API /password/nologin/_update
in which a new password along with the OTP is sent.
Use /password/_update
to update the existing password by logging in. Both old and new passwords are sent to the request body. Details of the API can be found in the attached swagger documentation.
Use /user/oauth/token
for generating tokens, /_logout
for logout and /_details
for getting user information from the token.
Multi-Tenant User: The multi-tenant user functionality allows users to perform actions across multiple ULBs. For example, employees belonging to Amritsar can perform the role of say Trade License Approver for Jalandhar by assigning them the tenant-level role of tenantId pb.jalandhar.
Following is an example of the user:
If an employee has a role with a state-level tenantId, they can perform actions corresponding to that role across all tenants.
Refresh Token: Whenever /user/oauth/token is called to generate the access_token, one more token is generated, called refresh_token. The refresh token is used to generate a new access_token whenever the existing one expires. Till the time the refresh token is valid, users will not have to log in even if their access_token expires, since a new one is generated using the refresh_token. The validity time of the refresh token is configurable and can be configured using the property refresh.token.validity.in.minutes
Since the User service handles PII (Personally Identifiable Information), encrypting the data before saving it in the DB becomes crucial.
DIGIT manages these as security policies in the master data, which are then referred to by the encryption service to encrypt the data before persisting it to the DB.
There are two security policy models for user data - User and UserSelf.
User model
The attributes field contains a list of fields from the user object that need to be secured, and the field roleBasedDecryptionPolicy is an attribute-level role-based policy that defines the visibility for each attribute.
The User security model is used for the Search API response.
UserSelf
It contains the same structure of security policy, but UserSelf is used for the Create/Update API response.
The visibility of the PII data is based on the above MDMS configuration. There are three types of visibility mentioned in the config.
PLAIN - Show text in plain form.
MASKED - The returned text contains masked data. The masking pattern is applied as defined in the Masking Patterns master data.
NONE - The returned text does not contain any data. It contains strings like “Confidential Information”.
Any user can get plain access to the secured data (citizen's PII) by requesting it through the plainAccessRequest parameter. It takes the following parameters:
recordId - the unique identifier of the record that is requested for plain access.
fields - a list of attributes that are requested for plain access.
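Put together, a plainAccessRequest fragment could look like the sketch below (the recordId value and field names are placeholders for illustration):

```json
{
  "plainAccessRequest": {
    "recordId": "b5c2d5e8-1f2a-4c3b-9d4e-0a1b2c3d4e5f",
    "fields": ["mobileNumber", "userName"]
  }
}
```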
service is running and has pg service added in it
Note: You can follow this to set up PostgreSQL locally and create a DB in it.
Additional gateways can be added by implementing the interface. No changes are required to the core packages.
NPM
Confirm that the service is running and configured with a persister
Note: Refer to this to know how to install PostgreSQL locally and create a DB.
PDFMake ( ) - for generating PDFs.
Mustache.js ( ) - the templating engine used to populate the format defined in the format config, from the request JSON, based on the mappings defined in the data config.
For configuration details, refer to .
Clone the migration utility: Start by cloning the migration utility from this .
Detailed API payloads for interacting with Service Request for all four endpoints can be found in the following collection -
The link to the swagger documentation can be found below -
Each persister config has a version attribute which signifies the service version. This version can contain a custom DSL, defined here,
This service is a consumer, which means it reads from the Kafka queue and does not provide a facility to be accessed through API calls, there’s no REST layer here. The producers willing to integrate with this consumer will be posting a JSON onto the topic configured at ‘’. The notification-sms service reads from the queue and sends the sms to the mentioned phone number using one of the SMS providers configured.
service is running
service is running
service is running
and services are running
Setup the latest version of and
the latest version of egov-user service
Note: This video will give you an idea of how to deploy any DIGIT service. Further, you can find the latest builds for each service in our latest release here.
Add for APIs
To know more about the encryption policy, refer to the docs.
access.token.validity.in.minutes - Duration in minutes for which the authorization token is valid.
refresh.token.validity.in.minutes - Duration in minutes for which the refresh token is valid.
/user/oauth/token - Used to start the session by generating an auth token and a refresh token from the username and password, using grant_type as password. The same API can be used to generate a new auth token from the refresh token by using grant_type as refresh_token and sending the refresh token with the key refresh_token.
/user/_logout - This API is used to end the session. The access token and refresh token become invalid once this API is called. The auth token is sent as a param in the API call.
host.name - Host name to append in the short URL.
db.persistance.enabled - Boolean flag; the short URL is stored in the database when the flag is set to TRUE.
tenantId - Unique id for a tenant (mandatory, String).
mobileNumber - Mobile number of the user (mandatory, String).
type - OTP type, e.g. login/register/password reset (mandatory, String).
userType - Type of user, e.g. Citizen/Employee (optional, String).
egov.user.search.default.size (default: 10) - default search record number limit
citizen.login.password.otp.enabled (default: true) - whether citizen login is OTP based
employee.login.password.otp.enabled (default: false) - whether employee login is OTP based
citizen.login.password.otp.fixed.value (default: 123456) - fixed OTP for citizens
citizen.login.password.otp.fixed.enabled (default: false) - allow fixed OTP for citizens
otp.validation.register.mandatory (default: true) - whether OTP is compulsory for registration
access.token.validity.in.minutes (default: 10080) - validity time of the access token
refresh.token.validity.in.minutes (default: 20160) - validity time of the refresh token
default.password.expiry.in.days (default: 90) - expiry period of a password
account.unlock.cool.down.period.minutes (default: 60) - duration for which the account stays locked
max.invalid.login.attempts.period.minutes (default: 30) - window size for counting attempts before lock
max.invalid.login.attempts (default: 5) - max failed login attempts before the account is locked
egov.state.level.tenant.id (default: pb) - state-level tenant id
Configure escalation flows based on predefined criteria
The objective of this functionality is to provide a mechanism to trigger action on applications which satisfy certain predefined criteria.
Looking at the sample use cases provided by the product team, the majority of use cases can be summarised as: perform action 'X' on applications which are in state 'Y' and have exceeded the state SLA by 'Z' days. We can write one query builder which takes the state 'Y' and the SLA exceeded by 'Z' as search params, and then perform action 'X' on the search response. This has been achieved by defining an MDMS config like the one below:
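The original config block is not reproduced here; the sketch below is reconstructed from the description that follows, and the exact key names are assumptions:

```json
{
  "businessService": "PGR",
  "escalations": [
    {
      "state": "RESOLVED",
      "stateSlaExceededBy": 1.0,
      "action": "CLOSERESOLVEDCOMPLAIN"
    }
  ]
}
```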
In the above configuration, we define the condition for triggering the escalation of applications. The above configuration triggers escalation for applications in RESOLVED
state which have exceeded stateSLA by more than 1.0
day and this triggers the escalation by performing CLOSERESOLVEDCOMPLAIN
on the applications. Once the applications are escalated the processInstances are pushed on the pgr-auto-escalation
topic. We have done a sample implementation for pgr-services, where we have updated the persister configuration to listen on this topic and update the complaint status accordingly.
The auto escalation for businessService PGR
will be triggered when the following API is called:
These APIs have to be configured in the cron job config so that they can be triggered periodically as per requirements. Only a user with the role AUTO_ESCALATE can trigger auto escalations. Hence, create a user with the state-level AUTO_ESCALATE role and then add that user in the userInfo of the requestInfo. This step has to be done because the cron job makes internal API calls and ZUUL will not enrich the userInfo.
For setting up an auto-escalation trigger, the workflow must be updated. For example, to add an auto escalate trigger on RESOLVED
state with action CLOSERESOLVEDCOMPLAIN
in PGR
businessService, we will have to search the businessService and add the following action in the actions array of RESOLVED
state and call update API.
Suppose an application gets auto-escalated from state ‘X' to state 'Y’, employees can look at these escalated applications through the escalate search API. The following sample cURL can be used to search auto-escalated applications of the PGR module belonging to Amritsar tenant -
API Postman Collection
Deploy workflow 2.0 in an environment where workflow is already running
In workflow 2.0, the assignee is changed from an object to a list of objects.
To accommodate this change, a new table named 'eg_wf_assignee_v2' is added that maps processInstanceIds to assignee UUIDs. To deploy workflow 2.0 in an environment where workflow is already running, the assignee column needs to be migrated to the eg_wf_assignee_v2 table.
The following query does this migration:
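The shipped migration script is not reproduced here; a sketch of what such a migration could look like is given below (the source table and column names are assumptions, so verify them against your workflow schema before running anything):

```sql
-- Sketch only: copy each existing single assignee into the new mapping table.
INSERT INTO eg_wf_assignee_v2 (processinstanceid, assignee)
SELECT id, assignee
FROM eg_wf_processinstance_v2
WHERE assignee IS NOT NULL;
```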
The persister config for egov-workflow-v2 is updated. The insert query for the table eg_wf_assignee_v2 is added in egov-workflow-v2-persister.yml.
The latest updated config can be referred to from the below link:
The employee inbox has an added column to display the locality of the applications. This mapping of the application number to locality is fetched by calling the searcher API for the respective module. If a new module is integrated with workflow its searcher config should be added in the locality searcher yaml with module code as a name in the definition. All the search URLs and role action mapping details must be added to the MDMS.
The format of the URL is given below:
Sample request for TL:
The searcher yaml can be referred from the below link:
For sending the application back to citizens, an action with the key 'SENDBACKTOCITIZEN' must be added (the exact key should be used). The resultant state of the action should be a new state. If it points to an existing state, the actions in that state will be visible to the CITIZEN even when the application reaches the state without Send Back, as the workflow is role-based.
To update the businessService for the Send Back feature, add the following state and action in the search response at the required places and then call the businessService update API. This assigns a UUID to the new state and action and creates the required references. The Resubmit action is added as an optional action for counter employees to take action on behalf of the citizen.
State json:
Action json:
Each item in the above dropdown is displayed by adding an object in the link below -
For example:
{
"id": 1928,
"name": "rainmaker-common-tradelicence",
"url": "quickAction",
"displayName": "Search TL",
"orderNumber": 2,
"parentModule": "",
"enabled": true,
"serviceCode": "",
"code": "",
"path": "",
"navigationURL": "tradelicence/search",
"leftIcon": "places:business-center",
"rightIcon": "",
"queryParams": ""
}
id, url, displayName and navigationURL are mandatory properties.
The value of the url property should be "quickAction" as shown above.
Accordingly, add the role-actions:
{
"rolecode": "TL_CEMP",
"actionid": 1928,
"actioncode": "",
"tenantId": "pb"
}
SLA slots and the background colour of the SLA days remaining on the Inbox page are defined in the MDMS configuration as shown above.
For example, if the maximum SLA is 30 then it has 3 slots:
20 - 30 (i.e. above 30 - 30*(1/3) = 20): will have the green colour defined
0 - 20: will have the yellow colour defined
< 0: will have the red colour defined
The colours are also configured in the MDMS.
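The slot computation above can be sketched in Python (illustrative only; the actual slots and colours come from the MDMS configuration, and the function name here is an assumption):

```python
def sla_color(days_remaining, max_sla):
    """With max_sla=30 the green band starts at max_sla - max_sla/3 = 20,
    matching the example above."""
    if days_remaining < 0:
        return "red"
    if days_remaining < max_sla - max_sla / 3:
        return "yellow"
    return "green"

assert sla_color(25, 30) == "green"   # in 20..30
assert sla_color(10, 30) == "yellow"  # in 0..20
assert sla_color(-2, 30) == "red"     # SLA breached
```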
For API /egov-workflow-v2/egov-wf/process/_transition: the field assignee of type User in the ProcessInstance object is changed to a list of User called assignees: User assignee --> List<User> assignees
For Citizen Sendback: when the action SENDBACKTOCITIZEN is called on the entity, the module has to enrich the assignees with the UUIDs of the owners and creator of the entity.
Create and modify workflow configuration
Each service integrated with egov-workflow-v2 service needs to first define the workflow configuration which describes the workflow states, the action that can be taken on these states, the user roles that can perform those actions, SLAs etc. This configuration is created using APIs and is stored in the DB. The configuration can be created at either the state level or the tenant level based on the requirements.
Before you proceed with the configuration, make sure the following pre-requisites are met -
egov-workflow-v2 service is up and running
Role action mapping is added for the BusinessService APIs
Create and modify workflow configuration
Configure state-level as well as BusinessService-level SLA
Control access to workflow actions from the configuration
Validates if the flow defined in the configuration is complete during the creation
Deploy the latest version of egov-workflow-v2 service.
Add role-action mapping for the BusinessService APIs (preferably add _create and _update only for SUPERUSER; _search can be added for CITIZEN and required employee roles like TL_CEMP etc.).
Overwrite the egov.wf.statelevel flag (true for state level and false for tenant level).
Add businessService persister yaml path in persister configuration.
Create the businessService JSON based on product requirements. Following is a sample json of a simple 2-step workflow where an application can be applied by a citizen or counter employee and then can be either rejected or approved by the approver.
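The original sample JSON is not reproduced here; the sketch below is a minimal 2-step workflow consistent with the fields described later in this document (action names, role codes, and module names are illustrative assumptions):

```json
{
  "BusinessServices": [
    {
      "tenantId": "pb",
      "businessService": "SampleService",
      "business": "sample-module",
      "businessServiceSla": 432000000,
      "states": [
        {
          "state": null,
          "applicationStatus": null,
          "isStartState": true,
          "isTerminateState": false,
          "actions": [
            {"action": "APPLY", "nextState": "APPLIED", "roles": ["CITIZEN", "COUNTER_EMPLOYEE"]}
          ]
        },
        {
          "state": "APPLIED",
          "applicationStatus": "APPLIED",
          "docUploadRequired": false,
          "isStateUpdatable": false,
          "isStartState": false,
          "isTerminateState": false,
          "actions": [
            {"action": "APPROVE", "nextState": "APPROVED", "roles": ["APPROVER"]},
            {"action": "REJECT", "nextState": "REJECTED", "roles": ["APPROVER"]}
          ]
        },
        {"state": "APPROVED", "applicationStatus": "APPROVED", "isTerminateState": true},
        {"state": "REJECTED", "applicationStatus": "REJECTED", "isTerminateState": true}
      ]
    }
  ]
}
```

Note how the workflow starts from the null state, as required for new applications, and how the terminal states carry no actions.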
Once the businessService json is created add it in the request body of _create API of workflow and call the API to create the workflow.
To update the workflow first search the workflow object using _search API and then make changes in the businessService object and then call _update using the modified search result.
States cannot be removed using the _update API as it will leave applications in that state in an invalid state. In such cases, first move all the applications in that state to a forward or backward state, and then disable the state directly through the DB.
The workflow configuration can be used by any module which performs a sequence of operations on an application/entity. It can be used to simulate and track processes in organisations to make them more efficient and increase accountability.
Integrating with the workflow service provides a way to have a dynamic workflow configuration which can be easily modified according to changing requirements. The modules don't have to deal with any validations regarding the workflow, such as authorisation of the user to take an action, or whether documents are required to be uploaded at a certain stage, since these are automatically handled by the egov-workflow-v2 service based on the defined configuration. It also automatically keeps updating SLAs for all applications, which provides a way to track the time taken by an application to get processed.
To integrate, the host of egov-workflow-v2
should be overwritten in the helm chart
/egov-workflow-v2/egov-wf/businessservice/_search
should be added as the endpoint for searching workflow configuration. (Other endpoints are not required once workflow configuration is created)
The configuration can be fetched by calling the _search API.
Configure workflows for a new product
Workflow is defined as a sequence of tasks that has to be performed on an application/Entity to process it. The egov-workflow-v2
is a workflow engine which helps in performing these operations seamlessly using a predefined configuration. We will discuss how to create this configuration for a new product in this document.
Before you proceed with the configuration, make sure the following pre-requisites are met -
egov-workflow-v2
service is up and running
Role action mapping is added for business service APIs
Create and modify workflow configuration according to the product requirements
Configure state-level as well as BusinessService-level SLA to efficiently track the progress of the application
Control access to perform actions through configuration
tenantId
The tenantId (ULB code) for which the workflow configuration is defined
businessService
The name of the workflow
business
The name of the module which uses this workflow configuration
businessServiceSla
The overall SLA to process the application (in milliseconds)
state
Name of the state
applicationStatus
Status of the application when in the given state
docUploadRequired
Boolean flag representing if documents are required to enter the state
isStartState
Boolean flag representing if the state can be used as starting state in workflow
isTerminateState
Boolean flag representing if the state is the leaf node or end state in the workflow configuration. (No Actions can be taken on states with this flag as true)
isStateUpdatable
Boolean flag representing whether data can be updated in the application when taking action on the state
currentState
The current state on which action can be performed
nextState
The resultant state after action is performed
roles
A list containing the roles which can perform the actions
auditDetails
Contains fields to audit edits on the data. (createdTime, createdBy,lastModifiedTIme,lastModifiedby)
Deploy the latest version of the egov-workflow-v2 service.
Add businessService persister yaml path in persister configuration.
Add role action mapping for BusinessService APIs.
Overwrite the egov.wf.statelevel flag (true for state level and false for tenant level).
The Workflow configuration has 3 levels of hierarchy:
BusinessService
State
Action
The top-level object is BusinessService which contains fields describing the workflow and a list of States that are part of the workflow. The businessService can be defined at the tenant level like pb.amritsar or at the state level like pb. All objects maintain an audit sub-object which keeps track of who is creating and updating and the time of it.
Each state object is a valid status for the application. The State object contains information about the state and what actions can be performed on it.
The action object is the last object in the hierarchy, it defines the name of the action and the roles that can perform the action.
The workflow should always start from the null state, as the service treats new applications as having null as the initial state, e.g.:
The application is sent to whatever nextState is defined in the action object. This can be a further forward state or even a backward state the application has already passed through (such actions are generally named SENDBACK).
SENDBACKTOCITIZEN is a special keyword for an action name. This action sends the application back to the citizen's inbox so they can take action on it. A new state should be created on which the citizen can take action, and it should be the nextState of this action. While calling this action from the module, the assignees should be enriched by the module with the UUIDs of the owners of the application.
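Putting these pieces together, a minimal BusinessService configuration might look like the following sketch. The business names, roles, states, and SLA value are illustrative only, not a shipped configuration:

```json
{
  "tenantId": "pb",
  "businessService": "NewTL",
  "business": "tl-services",
  "businessServiceSla": 432000000,
  "states": [
    {
      "state": null,
      "applicationStatus": null,
      "isStartState": true,
      "isTerminateState": false,
      "actions": [
        { "action": "APPLY", "nextState": "APPLIED", "roles": ["CITIZEN"] }
      ]
    },
    {
      "state": "APPLIED",
      "applicationStatus": "APPLIED",
      "docUploadRequired": false,
      "isStateUpdatable": false,
      "isStartState": false,
      "isTerminateState": false,
      "actions": [
        { "action": "APPROVE", "nextState": "APPROVED", "roles": ["TL_APPROVER"] },
        { "action": "SENDBACKTOCITIZEN", "nextState": "CITIZENACTIONREQUIRED", "roles": ["TL_APPROVER"] }
      ]
    },
    {
      "state": "APPROVED",
      "applicationStatus": "APPROVED",
      "isTerminateState": true,
      "actions": []
    }
  ]
}
```

Note how the start state has a null state name, and how SENDBACKTOCITIZEN targets a dedicated state (here CITIZENACTIONREQUIRED, which would also need to be defined) on which the citizen can act.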
Tracer is a library that intercepts API calls to DIGIT services that import it and logs errors.
A new utility method has been added to the tracer; modules can prepare error details and invoke this utility method to persist the error objects in the database so that they can be retried.
Here are the steps using which any module can utilize this functionality to store error objects -
The concerned module has to prepare error details.
The concerned module can then make a call to the exceptionHandler method, which takes a list of errorDetails as its argument. This method will do a couple of enrichments and then emit these errorDetails to Kafka for the indexer service to pick them up and persist them.
Create an index with the name egov-tracer-error-details using this command on Kibana -
PUT egov-tracer-error-details { }
4. Create mapping for this index -
5. Setup indexer with the following indexer configuration file -
6. Now, whenever the exceptionHandler method is invoked, errorDetails will be persisted on the index that was created in step 3.
The inbox service is an event-based service that fetches pre-aggregated data of municipal services and workflow, performs complex search operations, and returns applications and workflow data in a paginated manner. The service also returns the total count matching the search criteria.
The first step is to capture pre-aggregated events for the module which needs to be enabled in event-based inbox. This index needs to hold all the fields against which a search needs to be performed and any other fields which need to be shown in the UI when applications are listed.
Now, this service allows searching both the module objects as well as processInstance (workflow record) objects based on the provided criteria for any of the municipal services. For this, it uses a module-specific configuration. A sample configuration is given below -
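The sketch below illustrates the shape of such a configuration; the module code, index name, and field paths are placeholders rather than a real configuration:

```json
{
  "module": "PGR",
  "index": "pgr-services",
  "allowedSearchCriteria": [
    { "name": "tenantId", "path": "Data.tenantId.keyword", "isMandatory": true, "operator": "EQUAL" },
    { "name": "serviceRequestId", "path": "Data.serviceRequestId.keyword", "isMandatory": false, "operator": "EQUAL" }
  ],
  "sortBy": { "path": "Data.auditDetails.createdTime", "defaultOrder": "DESC" },
  "sourceFilterPathList": [ "Data.serviceRequestId", "Data.applicationStatus" ]
}
```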
Inside each query configuration object, we have the following keys -
module - Module code for which the inbox has been configured.
index - Index where pre-aggregated events are stored.
allowedSearchCriteria - Lists the various parameters on which searching is allowed for the given module.
sortBy - Specifies the field path inside the pre-aggregated record present in the index against which the result has to be sorted. The default order can be specified as ASC or DESC.
sourceFilterPathList - A list specifying which fields should and should not appear as part of the search result, in order to avoid clutter and improve query performance.
Each object within allowedSearchCriteria has the following keys -
name - Name of the incoming search parameter in the inbox request body.
path - Path inside the pre-aggregated record present in the index against which the incoming search parameter needs to be matched.
isMandatory - Specifies whether a particular parameter is mandatory in the inbox search request.
operator - Specifies which operator clause needs to be applied while forming ES queries. Currently, only equality and range comparison operators are supported.
Once the configuration is added to this MDMS file, the MDMS service for that particular environment has to be restarted.
For modules where search has to be given on PII data like mobile number, a migration API needs to be written which will fetch data from the database, decrypt it, hash it using the encryption client and store it in the index to enable search.
For modules where a search has to be given on non-PII data, the indexer service’s _legacyIndex API can be invoked to move data to the new index.
DIGIT 2.9 represents the most recent Long-Term Support (LTS) version, offering a stable and reliable foundation. This version emphasises enhanced security measures, improved system stability, streamlined deployment processes, simplified configuration, and comprehensive documentation. Periodic updates, encompassing both minor adjustments and significant enhancements, will be released as necessary. Support for this iteration of DIGIT will be available for the upcoming five years, with a clear migration guide available to facilitate the transition to subsequent LTS versions once the current support period concludes.
Extended Support: LTS versions come with an extended period of support, which includes security updates, bug fixes, and sometimes even minor feature enhancements. This extended support period will last for 5 years, which means that users do not need to upgrade to newer versions frequently to receive critical updates.
Stability: The LTS release is typically more stable than regular releases because it undergoes more extensive testing and bug fixing.
Compatibility: With the extended support period, the LTS release ensures better compatibility with third-party applications and hardware over time. Developers and vendors have a stable base to target, reducing the frequency of compatibility issues.
Reduced Costs: For businesses, the reduced need for frequent upgrades can translate into lower IT costs. Upgrading to a new version often involves significant effort in testing, deployment, and sometimes even hardware upgrades. LTS releases help spread these costs over a longer period.
Predictability: The LTS release provides a predictable upgrade path, making it easier for organisations to plan their IT infrastructure, training, and budgets. Knowing the support timeline in advance helps in strategic planning and resource allocation.
Focus on Core Features and Performance: Since the LTS release is not focused on adding new features aggressively, the bulk of efforts are spent on optimising for performance and reliability. This focus benefits users who need a solid and efficient system rather than the latest features.
Community and Vendor Support: LTS releases will often have a larger user base, which means a more extensive community support network is available.
Infra/Backbone upgrade:
- Postgres upgrade
- Kafka upgrade
- Redis upgrade
- Kubernetes upgrade
- Elasticsearch upgrade
Use of helm file for deployment management
Upgrade of core service dependencies
Upgrade to DIGIT libraries
Spring Cloud Gateway
Test Automation script
Single Click deployment using Github Actions
MDMS V2
Workbench (MDMS UI)
Boundary Service (Beta)
Admin UI to configure Master Data
Functionality to define schema and attribute validation for master data
Maintain master data in the database
Hot reload of MDMS data
Configurable Rate limiting for services using helm
Deployment of DIGIT without any local tool setup/configuration (via browser)
Ability to create and link boundary nodes using UI/APIs
Geospatial queries like proximity search to locate boundary nodes
Filestore Service: Fixed flow to store and retrieve files from Azure blob storage.
PDF Service: Fixed bug where _create API creates duplicate PDF for each request and fixed Kafka message getting pushed on a single partition.
SMS Notification Service: Added API for SMS bounce backtracking.
Mail Notification Service: Added support for attachments.
User OTP Service: Added support for sending OTP via email by emitting email events.
User Service: Added fallback to default message if user email update localization messages are not configured.
Workflow Service: Introduced state-level business service fallback as part of v2 business service search API.
Dashboard Analytics: Introduced feature to perform DIVISION operation on metric chart responses.
Location Service: Fixed bug where search on root level boundary type yields empty search response.
Privacy Exemplar: Data privacy changes for masking/unmasking PII data were introduced as part of this exemplar.
Encryption Client Library: As part of privacy changes, the enc-client library was introduced to support encryption-related functionalities.
Codegen: Enhanced codegen utility to support OpenAPI 3.0 specifications.
Persister: Enhanced persister to make it compatible with Signed Audit Service.
Human Resource Management Service: Fixed bug where employee search causes server error in single instance clusters.
DIGIT Developer Guide: Backend and Frontend guides along with a sample birth registration spring-boot module, citizen and employee React modules were developed as part of this guide.
DIGIT Installer Enhancement: DIGIT Installer was simplified and a detailed tutorial document was created for installing DIGIT services on AWS. (Note: Simplified DIGIT Installer has not been merged to master yet and is being released separately because many of our clusters are still running on legacy DevOps configurations)
Upgraded all helm charts to support Kubernetes version upgrade from 1.20 to 1.28.
Configure notification messages for a business service based on the channel for the same event.
For a specific action, the user gets an SMS and an email as an acknowledgement.
Users can receive SMS, event, and email notifications via the different channels.
The application allows sending different messages across all channels based on the user's actions.
To have this functionality for different business services, a channel names file was created and added to the MDMS data.
It contains information about the combination of different actions and channels for a particular business service. Example below:
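A sketch of what such an MDMS entry could look like; the action names and the exact structure are assumptions based on the description above:

```json
{
  "businessService": "PGR",
  "actions": [
    { "action": "APPLY", "channelNames": ["SMS", "EVENT", "EMAIL"] },
    { "action": "RESOLVE", "channelNames": ["SMS", "EMAIL"] }
  ]
}
```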
The different channels are
SMS: ID (Mobile Number)
Event
Email: ID (Email ID)
This feature enables the functionality that checks for the channels present in the file and sends the notifications accordingly.
For the SMS channel, it sends the SMS notification and logs “Sending SMS Notification”; for the event channel, it logs “Event Notification Sent”; and for email, it logs “Email Notification Sent”.
To add/delete any particular channel for a business service -
Restart egov-mdms-service to apply the changes.
Configure your business service - follow the steps mentioned below.
For any record that comes into the topic, the service should first fetch all the required data (user name, property ID, mobile number, tenant ID, etc.) from the record consumed from the topic.
The service should then fetch the message content from localization and replace the placeholders with the actual data.
Then put the record into whichever channel's Kafka topic the SMS, event, or email service is listening to.
Then the respective channel services (sms, event, email) will send out the notifications.
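The flow above can be sketched as a simple placeholder substitution followed by routing; the template text, placeholder names, and topic names here are hypothetical, not the actual localization codes or topics:

```python
# Illustrative sketch of the notification flow: fetch a localized template,
# replace its {placeholder} tokens with data from the consumed record,
# then route the final message to the channel-specific Kafka topic.
def build_notification(template: str, record: dict) -> str:
    """Replace {placeholder} tokens in the template with record values."""
    message = template
    for key, value in record.items():
        message = message.replace("{" + key + "}", str(value))
    return message

template = "Dear {userName}, your application {applicationId} has been received."
record = {"userName": "Kavya", "applicationId": "PB-PGR-2023-001", "tenantId": "pb.amritsar"}

message = build_notification(template, record)

# Topic names below are illustrative placeholders for the channel topics
channel_topics = {
    "SMS": "egov.core.notification.sms",
    "EMAIL": "egov.core.notification.email",
}
```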
According to TRAI’s regulation on unsolicited commercial communication, all telecoms must verify every SMS content before delivering it (Template scrubbing). For this, all the businesses using SMS must register Entities, SenderIDs, and SMS templates on a centralised DLT portal.
Below are the steps to register the SMS template in a centralised DLT portal and to add the template in the SMS country portal (Service provider).
Step 2: Login into the portal by entering the proper credentials and OTP.
Step 3: Select the Template from the option and click on Content Templates.
Click on the Add button to go to the next section.
Step 4: Select the option mentioned in the image below.
After clicking on the Save button, the template is added to the portal. Wait for approval of the template. Once the template gets approved, save the template ID and the message.
Step 5: Repeat the process from Step 3 to Step 4 to register the template in the DLT portal.
Step 6: Enter the credentials to log into the SMS Country portal.
Step 7: Select the option Features, then click on the Manage button under the Template section.
Then click on the Add DLT Template button.
Step 8: Mention the template ID and message of the approved template which we saved earlier in step 4. Select the Sender ID as EGOVFS.
After adding all the above details click on Add Template button. The DLT-approved template is added to the SMS Country portal and is ready to use.
Step 9: Repeat Step 7 and Step 8 to add the approved template in the SMS Country portal.
Configure workflows as per requirements
Workflows are a series of steps that move a process from one state to another through actions performed by different kinds of actors - humans, machines, time-based events, etc. - to achieve a goal such as onboarding an employee, approving an application, or granting a resource. The egov-workflow-v2 service is a workflow engine that helps perform these operations seamlessly using a predefined configuration.
Before you proceed with the documentation, make sure the following pre-requisites are met -
Java 17
Kafka server is up and running
PostgreSQL server is running and a database is created to store workflow configuration and data
Always allow anyone with a role in the workflow state machine to view the workflow instances and comment on them.
On creation of a workflow instance, it appears in the inbox of all employees with roles that can perform any state-transitioning action in the current state.
Once an instance is assigned to an individual employee, it appears only in that employee's inbox, although point 1 still holds: all others participating in the workflow can still search it and act if they have the necessary action available to them.
If the instance is assigned to a person who cannot perform any state-transitioning action, they can still comment, upload documents, and assign it to anyone else.
Overall SLA: SLA for the complete processing of the application/Entity
State-level SLA: SLA for a particular state in the workflow
Add BusinessService Persister YAML path in persister configuration
Add Role-Action mapping for BusinessService APIs
Overwrite the egov.wf.statelevel flag ( true for state level and false for tenant level)
Create businessService (workflow configuration) according to product requirements
Add Role-Action mapping for /processInstance/_search API
Add workflow persister yaml path in persister configuration
The workflow configuration can be used by any module which performs a sequence of operations on an application/entity. It can be used to simulate and track processes in organisations, making them more efficient and increasing accountability.
Role-based workflow
An easy way of writing rules
File movement within workflow roles
To integrate, the host of eGov-workflow-v2 should be overwritten in the helm chart.
/process/_search should be added as the search endpoint for searching workflow process Instance objects.
/process/_transition should be added to perform an action on an application. (It’s for internal use in modules and should not be added in Role-Action mapping).
The workflow configuration can be fetched by calling _search API to check if data can be updated or not in the current state.
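As a rough sketch, a transition request sends a list of process instances. The field names below follow the ProcessInstance fields described in this documentation, but the exact payload shape and all concrete values are assumptions, not a verified API contract:

```python
# Hedged sketch of a /process/_transition request body; every concrete
# value (tenant id, business service, module name, uuids) is a placeholder.
def build_transition_request(request_info: dict, business_id: str, action: str) -> dict:
    return {
        "RequestInfo": request_info,
        "ProcessInstances": [
            {
                "tenantId": "pb.amritsar",         # tenant-level tenant id (example)
                "businessService": "NewTL",        # workflow configuration name (example)
                "businessId": business_id,         # the module's application number
                "moduleName": "TL",                # module code (example)
                "action": action,                  # e.g. APPLY / APPROVE / SENDBACK
                "comment": "Forwarding for approval",
                "assignes": [{"uuid": "employee-uuid"}],  # optional assignees
            }
        ],
    }

payload = build_transition_request({"userInfo": {"uuid": "citizen-uuid"}}, "TL-2023-001", "APPLY")
```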
The enc-client library is a supplementary Java library that supports encryption-related functionalities so that every service does not need to pre-process the request before calling the encryption service.
MDMS Service
Encryption Service
Kafka
Encrypt a JSON Object - The encryptJson function of the library takes any Java object as input and returns an object which has encrypted values of the selected fields. The fields to be encrypted are selected based on an MDMS configuration.
This function requires the following parameters:
Java/JSON object - The object whose fields will get encrypted.
Model - It is used to identify the MDMS configuration to be used to select fields of the provided object.
Tenant ID - The encryption key will be selected based on the passed tenantId.
Encrypt a Value - The encryptValue function of the library can be used to encrypt single values. This method also requires a tenantId parameter.
Decrypt a JSON Object - The decryptJson function of the library takes any Java object as input and returns an object that has plain, masked, or no values for the encrypted fields. The fields are identified based on the MDMS configuration. The returned value (plain/masked/null) of each attribute depends on the user's role and on whether it is a PlainAccess request or a normal request. These configurations are part of the MDMS.
This function requires the following parameters:
Java/JSON object - The object containing the encrypted values that are to be decrypted.
Model - It is used to select a configuration from the list of all available MDMS configurations.
Purpose - A string parameter that conveys the reason for the decrypt request. It is used for audit purposes.
RequestInfo - The requestInfo parameter serves multiple purposes:
User Role - A list of user roles is extracted from the requestInfo parameter.
PlainAccess Request - If the request is an explicit plain access request, it is to be passed as a part of the requestInfo. It will contain the fields that the user is requesting for decryption and the id of the record.
While decrypting Java objects, this method also audits the request.
All the configurations related to the enc-client library are stored in the MDMS. These master data are stored in the DataSecurity module. It has two types of configurations:
Masking Patterns
Security Policy
{ "patternId": "001", "pattern": ".(?=.{4})" }
The masking patterns for different types of attributes (mobile number, name, etc.) are configurable in MDMS. Each entry contains the following attributes:
patternId - The unique pattern identifier. This id is referred to in the SecurityPolicy MDMS.
pattern - Defines the actual pattern according to which the value will be masked.
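The sample pattern above can be exercised directly. The sketch below applies it the way the masking step presumably does; replacing every matched character with an asterisk is our assumption of how the pattern is consumed:

```python
import re

# Apply the sample masking pattern ".(?=.{4})": every character that is
# followed by at least four more characters is replaced with "*",
# leaving only the last four characters visible.
def mask(value: str, pattern: str = r".(?=.{4})") -> str:
    return re.sub(pattern, "*", value)

print(mask("9876543210"))  # a 10-digit mobile number becomes ******3210
```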
The Security Policy master data contains the policy used to encrypt and decrypt JSON objects. Each Security Policy contains the following details:
model - The unique identifier of the policy.
uniqueIdentifier - The field defined here should uniquely identify records passed to the decryptJson function.
attributes - Defines a list of fields from the JSON object that need to be secured.
roleBasedDecryptionPolicy - Defines the attribute-level role-based policy, i.e. the visibility for each attribute.
The visibility is an enum with the following options:
PLAIN - Show text in plain form.
MASKED - The returned text will contain masked data. The masking pattern will be applied as defined in the Masking Patterns master data.
NONE - The returned text will not contain any data. It would contain a string like “Confidential Information”.
It defines what level of visibility the decryptJson function should return for each attribute.
{ "name": "mobileNumber", "jsonPath": "mobileNumber", "patternId": "001", "defaultVisibility": "MASKED" }
The Attribute defines a list of attributes of the model that are to be secured. The attribute is defined by the following parameters:
name - Uniquely identifies the attribute out of the list of attributes for a given model.
jsonPath - The JSON path of the attribute from the root of the model. This jsonPath is NOT the same as in the Jayway JsonPath library; it uses / and * to define the JSON paths.
patternId - Refers to the pattern to be used for masking, defined in the Masking Patterns master.
defaultVisibility - An enum configuring the default level of visibility of that attribute. If the visibility is not defined for a given role, this defaultVisibility applies.
This parameter is used to define the unique identifier of that model. It is used to audit the access logs. (This attribute’s jsonPath should be at the root level of the model.)
{ "name": "uuid", "jsonPath": "uuid" }
It defines attribute-level access policies for a list of roles. It consists of the following parameters:
roles - Defines a list of role codes to which the policy applies. Make sure not to duplicate a role code across policies; otherwise, an arbitrary one of the policies will be chosen for that role code.
attributeAccessList - Defines a list of attributes for which the visibility differs from the default for those roles.
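Combining the pieces above, a Security Policy entry might look like the following sketch. The model name and the visibility key names inside attributeAccessList are assumptions based on the descriptions in this section:

```json
{
  "model": "User",
  "uniqueIdentifier": { "name": "uuid", "jsonPath": "uuid" },
  "attributes": [
    { "name": "mobileNumber", "jsonPath": "mobileNumber", "patternId": "001", "defaultVisibility": "MASKED" }
  ],
  "roleBasedDecryptionPolicy": [
    {
      "roles": ["EMPLOYEE"],
      "attributeAccessList": [
        { "attribute": "mobileNumber", "firstLevelVisibility": "MASKED", "secondLevelVisibility": "PLAIN" }
      ]
    }
  ]
}
```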
There are two levels of visibility:
First-level visibility - Applies to normal search requests. The search response could have multiple records.
Second-level visibility - Applied only when a user explicitly requests plain access to a single record with a list of fields required in plain.
Second-level visibility can be requested by passing plainAccessRequest in the RequestInfo.
Any user can get plain access to the secured data (citizen's PII) by requesting it through the plainAccessRequest parameter. It takes the following parameters:
recordId - The unique identifier of the record requested for plain access.
fields - Defines a list of attributes that are requested for plain access.
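A plain-access request could therefore look like the sketch below inside the RequestInfo; the record id and field names are placeholders:

```json
{
  "RequestInfo": {
    "plainAccessRequest": {
      "recordId": "record-uuid",
      "fields": ["mobileNumber"]
    }
  }
}
```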
Every decrypt request is audited. Based on the uniqueIdentifier defined as part of the Security Policy, it lists out the identifiers of the records that were decrypted as part of the request.
Each audit object contains the following attributes:
For integration-related steps, refer to the document .
A new inbox service needs to be enabled via the configuration present in MDMS. The path to the MDMS configuration is - .
If any existing module needs to be migrated onto a new inbox, data needs to be , and configuration like the above needs to be written to enable these modules on the new inbox.
Update channelNames array in the file and add/delete the channel you want a business service’s action to have.
Adding details about the particular action and the channels you want that action to trigger in the file in egov-mdms-data repository.
Respected localization templates should be upserted before in the required environment using the . Restart the localization service to have the newly updated templates available.
Step 1: Visit the Airtel DLT portal ( ) and select your area of operation as Enterprise then click on next.
service is running and has a yml added to .
Deploy the latest version of the egov-workflow-v2 service.
Note: This video will give you an idea of how to deploy any DIGIT service. You can also find the latest builds for each service in our latest release here.
For configuration details, refer to the links in .
Here is a link to a sample master data.
To know more about regular expressions, refer to the articles below. To test regular expressions, refer to the link below.
1. PDF Service - Fixed bug where the _create API creates a duplicate PDF for each request.
2. PDF Service - Fixed issue where the PDF service was pushing data to only one Kafka topic partition.
3. User Service - Fixed bug where updating a citizen profile causes a server error; fixed bug where employee details were updatable via the citizen profile update API.
4. Location Service - Fixed bug where search on a root-level boundary type yields an empty search response.
5. Human Resource Management Service - Fixed bug where employee search causes a server error in single-instance clusters.
egov.wf.default.offset - The default value of offset in search.
egov.wf.default.limit - The default value of limit in search.
egov.wf.max.limit - The maximum number of records returned in a search response.
egov.wf.inbox.assignedonly - Boolean flag; if set to true, the default search returns only records assigned to the user, and if false, it returns all records based on the user's role. (The default search is the search call when no query params are sent; records are returned based on the RequestInfo of the call. It is used to show applications in the employee inbox.)
egov.wf.statelevel - Boolean flag set to true if a state-level workflow is required.
Installation Guide for DIGIT via GitHub Actions in AWS
This guide provides step-by-step instructions for installing DIGIT using GitHub Actions within an AWS environment.
AWS account
Github account
Create an IAM User in your AWS account.
Generate ACCESS_KEY and SECRET_KEY for the IAM user.
Assign Administrator Access to the IAM user for necessary permissions.
Fork the Repository into your organization account on GitHub.
Navigate to the repository settings, then to Secrets and Variables, and add the following repository secrets:
AWS_ACCESS_KEY_ID: <GENERATED_ACCESS_KEY>
AWS_SECRET_ACCESS_KEY: <GENERATED_SECRET_KEY>
AWS_DEFAULT_REGION: ap-south-1
AWS_REGION: ap-south-1
Enable GitHub Actions
Open the GitHub Actions workflow file.
Specify the branch name you wish to enable GitHub Actions for.
Navigate to infra-as-code/terraform/sample-aws.
Open input.yaml and enter details such as domain_name, cluster_name, bucket_name, and db_name.
Navigate to config-as-code/environments.
Open egov-demo-secrets.yaml.
Enter db_password and ssh_private_key. Add the public_key to your GitHub account.
Choose one of the following methods to generate an SSH key pair:
Method a: Use an online website (Note: This is not recommended for production setups, only for demo purposes): https://8gwifi.org/sshfunctions.jsp
Method b: Use OpenSSL commands:
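A typical OpenSSL sketch for generating the key pair looks like this; the filenames are illustrative:

```shell
# Generate a 2048-bit RSA private key; its contents go into ssh_private_key
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out digit_demo_key.pem

# Extract the matching public key (PEM format)
openssl rsa -in digit_demo_key.pem -pubout -out digit_demo_key.pub
```

If your GitHub account expects an OpenSSH-format public key, `ssh-keygen -y -f digit_demo_key.pem` prints one that can be added to the account.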
After entering all the details, push these changes to the remote GitHub repository. Open the Actions tab in your GitHub account to view the workflow. You should see that the workflow has started, and the pipelines are completed successfully.
This indicates that your setup is correctly configured, and your application is ready to be deployed. Monitor the output of the workflow for any errors or success messages to ensure everything is functioning as expected.
As you wrap up your work with DIGIT, ensuring a smooth and error-free cleanup of the resources is crucial. Regular monitoring of the GitHub Actions workflow's output is essential during the destruction process. Watch out for any error messages or signs of issues. A successful job completion will be confirmed by a success message in the GitHub Actions window, indicating that the infrastructure has been effectively destroyed.
When you're ready to remove DIGIT and clean up the resources it created, proceed with executing the terraform_infra_destruction job. This action is designed to dismantle all setup resources, clearing the environment neatly.
We hope your experience with DIGIT was positive and that this guide makes the uninstallation process straightforward.
To initiate the destruction of a Terraform-managed infrastructure, follow the steps below:
Navigate to Actions
Click DIGIT-Install workflow
Select Run workflow
When prompted, type "destroy". This action starts the terraform_infra_destruction job.
You can observe the progress of the destruction job in the actions window.
Note: For DIGIT configurations created using the master branch
If DIGIT is installed from a branch other than the main one, ensure that the branch name is correctly specified in the workflow file. For instance, if the installation is done from the digit-install branch, the following snippet should be updated to reflect that.
Contains the latest hotfixed builds of indexer, gateway, workflow.
Test cases for various core-services that were tested as part of DIGIT-2.9-LTS
These test cases can serve as benchmark tests for any breaking changes in case any modification is done on top of existing services.
A comprehensive guide on running automated test scripts for various core services.
Before initiating automating DIGIT- LTS core services, ensure the Postman tool is installed.
Create an environment in Postman with a global variable BaseUrl and set its value per your environment configuration. For example - we have set https://digit-lts.digit.org as the base URL.
Import all the services you want to automate in the same environment.
Follow the steps below to run the egov-User service automation scripts.
2. Port forward to the DIGIT-LTS environment: Replace [userPod] with the relevant user pod name.
Port-forward to the DIGIT-LTS environment to create the first user using the command above.
In the CSV file, each cell in the first row (UserFIRST, UserName2, and UserName3) represents a unique user, and each cell in the second row represents the name given to the corresponding user.
For example: the first cell in the first row, UserFIRST, represents the first user, and USERDemoM1 is the name given to user UserFIRST.
USERDemoM1
EGOvDemoM2
EGOvDemoM3
Open the User collection in Postman and click on the Run button.
Select CSV file - Select the downloaded CSV file by clicking on the Select File button.
Click on the Run User button to execute the collection.
The provided steps automate the creation of users in the DIGIT-LTS environment, which is essential for accessing all resources.
Review and modify the CSV file as needed to include accurate user data.
For further assistance or troubleshooting, refer to the Postman documentation or contact the relevant support channels.
By following these steps, you can effectively automate the core services of DIGIT-LTS, starting with the User service, using Postman.
Run localization Collection in Postman: Open the localization collection in Postman by clicking on localization collection. Click on the Run button to execute the collection.
Select CSV File: When prompted, click on the Select File button to select the downloaded CSV file.
Run Collection: After selecting the CSV file, click on the Run Collection button to execute the collection.
In the CSV file, code1 represents the specific code for creating the message in locale1.
The message "Punjab water park" is created in the locale/region/mohalla. A unique code "Alpha pgr 1" is associated with the created message.
Alpha pgr 1
Punjab water park
The locale column in the CSV file represents the place/area to create the message (mandatory).
The code column represents the unique code associated with the message.
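A minimal CSV sketch consistent with the description above; the locale value is a placeholder, not an actual region code:

```csv
code,message,locale
Alpha pgr 1,Punjab water park,pb_IN
```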
The above steps automate the localization services using Postman.
1. Import Egov OTP Collection into Postman:
Open Postman and import the collection.
2. To Run Egov OTP Collection in Postman:
Click on the Egov OTP collection in Postman to open the collection.
Click on the Run button to execute the collection.
Click on the Run Collection button to execute the collection.
1. Import MDMS Collection into Postman:
Open Postman and import the collection.
2. To run the MDMS Collection in Postman:
Open the MDMS collection in Postman by clicking on MDMS collection.
Click on the Run button to execute the collection.
Click on the Run Collection button to execute the collection.
1. Import Url shortening Collection into Postman:
Open Postman and import the collection.
2. To run the URL - shortening collection in Postman:
Open the URL shortening collection in Postman by clicking on the URL shortening collection.
Click on the Run button to execute the collection.
Click on the Run Collection button to execute the collection.
1. Import Location Collection into Postman:
Open Postman and import the collection.
2. To run Location Collection in Postman:
Open the Location collection in Postman by clicking on Location collection.
Click on the Run button to execute the collection.
Click on the Run Collection button to execute the collection.
1. Import Access control Collection into Postman:
Open Postman and import the collection.
2. To run the Access Control collection in Postman:
Open the Access Control collection in Postman by clicking on Access Control collection.
Click on the Run button to execute the collection.
Click on the Run Collection button to execute the collection.
1. Import Filestore Collection into Postman:
Open Postman and import the collection.
2. To run Filestore Collection in Postman:
Open the Filestore collection in Postman by clicking on the Filestore collection.
Click on the Run button to execute the collection.
Click on the Run Collection button to execute the collection.
1. Import the ID gen Collection into Postman:
Open Postman and import the collection.
2. To Run Id gen Collection in Postman:
Open the Id gen collection in Postman by clicking on Id gen collection.
Click on the "Run" button to execute the collection.
Click on the "Run Collection" button to execute the collection.
1. Importing WorkFlow Collection:
2. Running the WorkFlow Collection:
Open the WorkFlow collection in Postman and click on the Run button.
Select the downloaded CSV file by clicking on the Select File button.
Click on the Run Workflow button to execute the collection.
Additionally, you must update the BusinessIdFirst and BusinessIdTwo columns in your application for a successful transition.
1. Importing Encryption Collection:
2. Port Forwarding to Digit-LTS Environment:
Port-forward to the Digit-LTS environment to decrypt the Encrypted data. Use the following command:
kubectl port-forward [Encryption] 8081:8080 -n egov
Replace [Encryption] with the relevant Encryption pod name.
3. Running the Encryption Collection:
In the CSV file, the cell in the first row, "UserForEncy", represents a unique user, and the cell in the second row, "EncUser1", represents the name given to that specific user.
Open the User collection in Postman and click on the Run button.
Select the downloaded CSV file by clicking on the Select File button.
Click on the Run EncyptionApi button to execute the collection.
This document provides step-by-step instructions on how to update the version of Amazon RDS (Relational Database Service) using both the AWS Management Console and Terraform.
Access to an AWS account with permissions to manage Amazon RDS instances.
Basic knowledge of AWS Management Console and Terraform.
Create an RDS snapshot backup for data protection.
Step 1: Navigate to the Amazon RDS Console:
Log in to the AWS Management Console.
Go to the Amazon RDS service.
Step 2: Select the RDS Instance:
From the list of DB instances, select the RDS instance for which you want to update the version.
Step 3: Initiate the Upgrade:
From the given databases, select the database you wish to upgrade the Engine Version of and click on the “Modify” button.
In the Modify DB Instance wizard, locate the "DB Engine Version" section. Select the desired version from the dropdown list. Review the other configuration settings and click on the "Continue" button.
Select “Apply Immediately”, review the summary of changes, and click on the "Modify DB Instance" button to initiate the upgrade.
Step 4: Monitor the Upgrade Progress:
Once the upgrade is initiated, monitor the upgrade progress from the RDS dashboard.
The status of the instance will change to "modifying" during the upgrade process.
Once the upgrade is completed, the status will change back to "available."
Step 1: Define the Terraform Configuration:
Create or update the Terraform configuration file (e.g., variable.tf) with the necessary settings to manage the RDS instance.
Use the aws_db_instance resource to define the RDS instance.
Specify the desired version in the engine_version attribute.
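As a sketch of what this configuration might look like (the resource label, identifier, versions, and instance class below are illustrative; the Terraform AWS provider's resource for RDS instances is aws_db_instance, and only engine_version needs to change for the upgrade):

```hcl
# Illustrative fragment – keep your existing settings and change only engine_version.
resource "aws_db_instance" "digit_db" {
  identifier     = "digit-rds"       # example identifier
  engine         = "postgres"
  engine_version = "14.10"           # target version for the upgrade
  instance_class = "db.t3.medium"    # example instance class
  # ... other existing arguments unchanged ...
}
```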
Step 2: Apply the Terraform Configuration:
Run terraform plan to preview the changes that will be applied.
Run terraform apply to apply the changes and update the RDS instance with the new version.
Step 3: Monitor Terraform Execution:
Monitor the Terraform execution for any errors or warnings.
Once the execution is completed, verify that the RDS instance has been updated to the new version.
This document has provided instructions on how to update the version of Amazon RDS using both the AWS Management Console and Terraform. Regularly updating the RDS version ensures that your database instance is up-to-date with the latest features and security patches.
This document describes the code changes required in the registries to upgrade Spring Boot and the client libraries.
Upgrade the Java version in the module to Java 17 before upgrading the Spring Boot version to 3.2.2. Following is a sample snippet of the Java version upgrade:
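The snippet itself is not reproduced in this export; a representative pom.xml fragment (property names follow the usual Maven conventions) would be:

```xml
<properties>
    <java.version>17</java.version>
    <maven.compiler.source>17</maven.compiler.source>
    <maven.compiler.target>17</maven.compiler.target>
</properties>
```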
Upgrade spring-boot-starter-parent library to version 3.2.2. The code snippet of the dependency is shown below:
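The dependency snippet is not reproduced in this export; the standard parent declaration for this version is:

```xml
<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>3.2.2</version>
    <relativePath/> <!-- look up the parent from the repository -->
</parent>
```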
Upgrade the Flyway library to version 9.22.3 for compatibility with Postgres 14. Below is the code snippet:
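The snippet is not reproduced in this export; the Flyway dependency with that version is:

```xml
<dependency>
    <groupId>org.flywaydb</groupId>
    <artifactId>flyway-core</artifactId>
    <version>9.22.3</version>
</dependency>
```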
Upgrade the postgresql library to version 42.7.1:
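The corresponding dependency declaration:

```xml
<dependency>
    <groupId>org.postgresql</groupId>
    <artifactId>postgresql</artifactId>
    <version>42.7.1</version>
</dependency>
```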
The tracer library is upgraded to springboot 3.2.2. The updates are available in the library version 2.9.0-SNAPSHOT. If the module is using the tracer, upgrade the tracer version to 2.9.0-SNAPSHOT as shown below:
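The snippet is not reproduced in this export; assuming the usual DIGIT coordinates for the tracer library (org.egov.services:tracer – verify against your module's existing POM), the upgrade would look like:

```xml
<dependency>
    <groupId>org.egov.services</groupId>
    <artifactId>tracer</artifactId>
    <version>2.9.0-SNAPSHOT</version>
</dependency>
```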
The services-common library has been upgraded and is now included via the tracer. If you upgrade the tracer, you do not have to upgrade services-common explicitly and can remove it from the POM. If your module uses only services-common, upgrade its version directly to 2.9.0-SNAPSHOT.
Use the below version of JUnit which is compatible with Java 17:
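The exact version is not preserved in this export. With spring-boot-starter-parent 3.2.2, the JUnit 5 (Jupiter) version is managed by the parent, so the dependency can be declared without an explicit version:

```xml
<dependency>
    <groupId>org.junit.jupiter</groupId>
    <artifactId>junit-jupiter</artifactId>
    <scope>test</scope>
</dependency>
```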
If you are using the MDMS client library update the dependency version to 2.9.0-SNAPSHOT as shown below:
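The snippet is not reproduced in this export; the coordinates below are an assumption (keep your module's existing groupId/artifactId and only bump the version):

```xml
<dependency>
    <groupId>org.egov</groupId>
    <artifactId>mdms-client</artifactId>
    <version>2.9.0-SNAPSHOT</version>
</dependency>
```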
Update the Lombok version in the pom.xml to 1.18.22 (Lombok versions follow the 1.18.x line; 1.18.22 is the first release with Java 17 support):
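A representative dependency declaration (1.18.22 is the Java-17-compatible Lombok line; the provided scope is the conventional choice):

```xml
<dependency>
    <groupId>org.projectlombok</groupId>
    <artifactId>lombok</artifactId>
    <version>1.18.22</version>
    <scope>provided</scope>
</dependency>
```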
If you are using the net.minidev library, upgrade the version to 2.5.0:
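The net.minidev group's commonly used artifact is json-smart; assuming that is the artifact in your POM, the upgrade would be:

```xml
<dependency>
    <groupId>net.minidev</groupId>
    <artifactId>json-smart</artifactId>
    <version>2.5.0</version>
</dependency>
```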
To simplify dependency management and ensure version compatibility for the Spring Kafka and Spring Redis dependencies, use spring-boot-starter-parent as your project's parent in the pom.xml. When the spring-kafka or spring-redis dependency is included without specifying a version, Spring Boot automatically provides a compatible version. Following are code snippets of both dependencies:
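With the Boot parent managing versions, both dependencies can be declared version-less:

```xml
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
```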
Note: If a tracer library is implemented there is no need to explicitly import spring-kafka.
Javax is deprecated and has transitioned to Jakarta. Remove any javax dependencies and update all javax imports to their jakarta counterparts. For example, change imports such as PostConstruct and @Valid to the Jakarta equivalents in all occurrences.
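Concretely, the import changes for the two examples mentioned look like this:

```java
// Before (javax – no longer resolved under Spring Boot 3.x):
import javax.annotation.PostConstruct;
import javax.validation.Valid;

// After (jakarta):
import jakarta.annotation.PostConstruct;
import jakarta.validation.Valid;
```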
Remove the annotation @javax.annotation.Generated which is now deprecated.
Update the Dockerfile for flyway migration with the below content:
FROM egovio/flyway:10.7.1
COPY ./migration/main /flyway/sql
COPY migrate.sh /usr/bin/migrate.sh
RUN chmod +x /usr/bin/migrate.sh
ENTRYPOINT ["/usr/bin/migrate.sh"]
Update the migrate.sh script:
#!/bin/sh
flyway -url=$DB_URL -table=$SCHEMA_TABLE -user=$FLYWAY_USER -password=$FLYWAY_PASSWORD -locations=$FLYWAY_LOCATIONS -baselineOnMigrate=true -outOfOrder=true migrate
If you are using spring-redis, add the following configuration file:
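The contents of the configuration file are not preserved in this export. As a minimal, illustrative sketch (Spring Boot 3.x moved the Redis connection keys under spring.data.redis.*; the host and port values below are placeholders):

```properties
# Illustrative values – point these at your Redis instance
spring.data.redis.host=localhost
spring.data.redis.port=6379
```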
Remove @SafeHtml annotation from the fields in POJO models as it is deprecated.
Update the Junit dependencies in the test cases as shown below:
This comprehensive documentation provides step-by-step instructions and best practices for smoothly migrating your DIGIT installation from version 2.8 to the v2.9 (LTS) release.
To begin the migration process from DIGIT v2.8 to v2.9 LTS, it's crucial to first upgrade all prerequisite backbone services.
Following this, scale down the Zuul replica count to zero using the provided command.
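The command itself is not reproduced in this export. A typical form, assuming the Zuul deployment is named zuul and runs in the egov namespace, would be:

```shell
# Illustrative – substitute your actual deployment name and namespace
kubectl scale deployment zuul --replicas=0 -n egov
```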
Once all deployed services are confirmed to be up and running smoothly, the migration from v2.8 to v2.9 LTS can be considered complete.
Note: If you encounter a Flyway migration issue in the egov-user service with V20180731215512__alter_eg_role_address_fk.sql (version V20180731215512), follow these steps to resolve it:
Connect to the Postgres pod of your server, since a few queries need to be run to resolve the issue.
Run this SQL query: DELETE FROM egov_user_schema where version = '20180731215512';
Then run this SQL query: ALTER TABLE eg_userrole DROP CONSTRAINT eg_userrole_userid_fkey;
Restart the egov-user pod after successfully executing these queries.
Code changes required once Postgres is upgraded
egovio/egov-indexer-db:2.9.1-c781a2f714-65
egovio/gateway:gateway-2.9.2-a916a090e6-40
egovio/egov-workflow-v2-db:2.9.1-80b58dc788-15
Follow the link to access the Test Scenarios.
Follow the link to access the Postman Collection for Automation Scripts.
1. Import the User collection: Copy the User collection link from the provided document and import the collection into Postman.
3. Run the User collection: Follow the link to download the CSV file. Make sure to download the file in CSV format before proceeding with the User collection.
Import the Localization collection into Postman: Follow the link and copy the Localization collection link (available in column B of the sheet), then open Postman and import the collection.
Prepare the CSV file: Follow the link to download the CSV file. Make sure the CSV file is in the correct format.
Follow the link to open the document and copy the Egov OTP collection link.
Follow the link to open the document and copy the MDMS collection link.
Follow the link to open the document and copy the URL shortening collection link.
Follow the link to open the document and copy the Location collection link.
Follow the link to open the document and copy the Access Control collection link.
Follow the link to open the document and copy the Filestore collection link.
Follow the link to open the document and copy the ID-gen collection link.
Before executing the ID-gen collection, follow the link to download the CSV file in CSV format.
Follow the link to open the document, copy the Workflow collection link and import the Workflow collection into Postman.
Follow the link to download the CSV file before executing the Workflow collection.
Follow the link to open the document, copy the Encryption collection link and import the collection into Postman.
Follow the link to download the CSV file before executing the Encryption collection.
Next, proceed with deploying the core service images as outlined in the attached document.
changelog.md
This document provides a comprehensive log of system upgrades, detailing the progression of various software components from older versions to their latest releases as of this update. This ensures transparency and provides a reference for the evolution of our system's infrastructure.
No unreleased changes.
PostgreSQL: Upgraded from 11.2 to 14.10, enhancing database performance, security features, and compatibility with the latest extensions.
Redis: Upgraded from 3.6 to 7.2.4, bringing improvements in processing speed, security patches, and new features for better data management.
Elasticsearch: Upgraded from 6.6 to 8.11.3, which includes significant advancements in search capabilities, performance optimizations, and security enhancements.
Kibana: Version updated from 6.6 to 8.11.3 to align with the Elasticsearch upgrade, improving data visualization and UI/UX enhancements.
Kafka: Upgraded from 2.4 to 3.6.0, introducing improvements in scalability, reliability, and a set of new features that enhance message queuing capabilities.
Jaeger: Upgraded from 1.18 to 1.53.0, significantly improving distributed tracing capabilities, UI improvements, and performance optimizations.
Prometheus: Upgraded from 2.48.0 to 2.49.0, focusing on enhancements in monitoring capabilities and minor improvements in system performance.
Grafana: Upgraded from 7.0.5 to 10.2.3, bringing a leap in visualization features, plugin ecosystem, and overall performance and usability improvements.
Cert-Manager: Upgraded from 1.7.3 to 1.13.3, enhancing certificate management with new features and security improvements.
Report
egov-searcher
Zuul
No fixes in this release.
Upgraded components include security patches and improvements to address known vulnerabilities.
This document provides step-by-step instructions on how to take backups of PostgreSQL databases hosted on AWS.
Access to an AWS account with permissions to manage Amazon RDS instances.
PostgreSQL database hosted on Amazon RDS.
Step 1: Navigate to the Amazon RDS Console:
Log in to the AWS Management Console.
Go to the Amazon RDS service.
Step 2: Select the PostgreSQL Instance:
From the list of DB instances, select the PostgreSQL instance for which you want to take the backup.
Step 3: Enable Automated Backups (Optional):
If automated backups are not already enabled, navigate to the "Backup" tab of the RDS instance.
Click on "Modify" and enable automated backups.
Configure the backup retention period according to your requirements.
Step 4: Manually Trigger a Snapshot:
To create a manual snapshot, select the RDS instance you want to back up. Click on the Actions button in the upper-right corner and select Take a snapshot.
This redirects you to the snapshot creation page.
Provide a meaningful name for the snapshot and click “Create snapshot”.
This will create a manual snapshot of the DB instance that you created.
Step 5: Create a Manual Backup Using pg_dump:
Connect to the PostgreSQL database using a PostgreSQL client tool or command-line interface.
Use the pg_dump command to export the database to a file:
Replace <hostname>, <username>, <database_name>, and <backup_file_name> with the appropriate values.
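The command itself is not reproduced in this export; a typical invocation using the placeholders above (the -F c flag, an optional addition here, selects pg_dump's custom archive format) would be:

```shell
# Export the database to a file; you will be prompted for the password
pg_dump -h <hostname> -U <username> -d <database_name> -F c -f <backup_file_name>
```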
Step 6: Copy Backup Files to Amazon S3 (Optional):
If desired, copy the backup files to Amazon S3 for long-term storage and redundancy.
Use the AWS CLI or SDKs to upload the files to an S3 bucket.
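For example, with the AWS CLI (the bucket name and key prefix below are placeholders):

```shell
# Upload the backup file to an S3 bucket for long-term storage
aws s3 cp <backup_file_name> s3://<bucket-name>/backups/
```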
This document has provided instructions on how to take backups of PostgreSQL databases hosted on AWS. Regular backups are essential for data protection and disaster recovery purposes.
| # | Checklist | Status | Exceptions | Owner(s) | Remarks |
|---|-----------|--------|------------|----------|---------|
| 1 | Upgrade is completed for all the core services that are part of the release. | Yes | NA | Shashwat Mishra | Code is frozen by 14 March 2024 |
| 2 | Test cases are documented by the QA team and test results are updated in the test cases sheet. | Yes | | Mustakim | |
| 3 | The incremental demo of the features showcased during tech council meetings and feedback incorporated. | Yes | NA | Shashwat Mishra | |
| 4 | QA signoff is completed by the QA team and communicated to the platform team. | Yes | NA | | QA signoff was completed. Sign-off date: 6th Feb 2023 |
| 5 | API Technical documents are updated for the release along with the configuration documents. | Yes | NA | | |
| 6 | Promotion to new environment testing from the QA team is completed. | Yes | NA | Shraddha Solkar, Aniket Talele, Shashwat Mishra | |
| 7 | API Automation scripts are updated for new APIs or changes to any existing APIs for the release. API automation regression is completed on UAT, the automation test results are analyzed and necessary actions are taken to fix the failure cases. Publish the list of failure use cases with a reason for failure and the resolution taken to fix these failures for the release. | No | NA | Shraddha Solkar | Not picked up in this release due to lack of resources. We do not have a QA resource who can write automation scripts. |
| 8 | The API backward compatibility testing is completed. | Yes | | Shraddha Solkar, Aniket Talele, Shashwat Mishra | Core modules were tested against urban 2.8 modules and the bugs which were found have been addressed. |
| 9 | The communication is shared with the platform team for regression by the QA team. | Yes | NA | Shraddha Solkar, Aniket Talele, Shashwat Mishra | UAT sign-off was completed on 24th March 2023 |
| 10 | The GIT tags and releases are created for the code changes for the release. | Yes | | Shashwat Mishra, Aniket Talele | |
| 11 | Verify whether the Release notes are updated. | Yes | | Shashwat Mishra, Anjoo Narayan, Aniket Talele | |
| 12 | Verify whether all MDMS, Configs, InfraOps configs are updated. | Yes | NA | Shraddha Solkar, Shashwat Mishra | |
| 13 | Verify whether all docs will be published by the Technical Writer as part of the release. | Yes | NA | Shashwat Mishra, Anjoo Narayan | |
| 14 | Verify whether all test cases are up to date and updated along with necessary permissions to view the test cases sheet. The test cases sheet is verified by the Test Lead. | Yes | | Shraddha Solkar, Aniket Talele, Shashwat Mishra | |
| 15 | Verify whether all the localisation data was updated in Release Kits. | Yes | | Shraddha Solkar, Shashwat Mishra | |
| 16 | Verify whether the platform release notes and user guides are updated and published. | Yes | | Platform team, Aniket Talele, Shashwat Mishra | Release notes and user guides are published in GitBook. |
| 17 | The demo of technical enhancements is done by the platform team as part of the tech council meetings. | Yes | NA | Platform team, Ghanshyam Rawat, Aniket Talele, Shashwat Mishra | |
| 18 | Architect sign-off and Technical Quality Report. | Yes | NA | Ghanshyam Rawat, Aniket Talele | Sign-off is given. |
| 19 | The release communication along with all the release artefacts are shared by the Platform team. | In progress | NA | Shashwat Mishra, Aniket Talele | |
Provides a list of boundary entities based on the provided search criteria.
unique id for a tenant.
unique list of boundary codes.
Provides functionality to search linked boundaries based on the provided search criteria
unique id for a tenant.
boundary type within the tenant boundary structure.
Type of the boundary type, e.g. REVENUE, ADMIN.
boolean flag to inform the service if children need to be part of search.
boolean flag to inform the service if parents need to be part of search.
unique List of boundary codes.
RequestInfo should be used to carry meta information about the requests to the server as described in the fields below. All eGov APIs will use requestinfo as a part of the request body to carry this meta information. Some of this information will be returned back from the server as part of the ResponseInfo in the response body to ensure correlation.
Provides functionality to create boundary data.
RequestInfo should be used to carry meta information about the requests to the server as described in the fields below. All eGov APIs will use requestinfo as a part of the request body to carry this meta information. Some of this information will be returned back from the server as part of the ResponseInfo in the response body to ensure correlation.
Provides functionality to update boundary data.
RequestInfo should be used to carry meta information about the requests to the server as described in the fields below. All eGov APIs will use requestinfo as a part of the request body to carry this meta information. Some of this information will be returned back from the server as part of the ResponseInfo in the response body to ensure correlation.
Provides functionality to define hierarchy.
RequestInfo should be used to carry meta information about the requests to the server as described in the fields below. All eGov APIs will use requestinfo as a part of the request body to carry this meta information. Some of this information will be returned back from the server as part of the ResponseInfo in the response body to ensure correlation.
Provides boundary type hierarchy based on the provided search criteria.
RequestInfo should be used to carry meta information about the requests to the server as described in the fields below. All eGov APIs will use requestinfo as a part of the request body to carry this meta information. Some of this information will be returned back from the server as part of the ResponseInfo in the response body to ensure correlation.
Provides functionality to establish relationships between boundaries.
RequestInfo should be used to carry meta information about the requests to the server as described in the fields below. All eGov APIs will use requestinfo as a part of the request body to carry this meta information. Some of this information will be returned back from the server as part of the ResponseInfo in the response body to ensure correlation.
Provides functionality to update relationships between boundaries.
RequestInfo should be used to carry meta information about the requests to the server as described in the fields below. All eGov APIs will use requestinfo as a part of the request body to carry this meta information. Some of this information will be returned back from the server as part of the ResponseInfo in the response body to ensure correlation.
Get the list of masters for a particular module and tenantId.
RequestInfo should be used to carry meta information about the requests to the server as described in the fields below. All eGov APIs will use requestinfo as a part of the request body to carry this meta information. Some of this information will be returned back from the server as part of the ResponseInfo in the response body to ensure correlation.
Creates/updates the module master data JSON files on GitHub through UI input.
RequestInfo should be used to carry meta information about the requests to the server as described in the fields below. All eGov APIs will use requestinfo as a part of the request body to carry this meta information. Some of this information will be returned back from the server as part of the ResponseInfo in the response body to ensure correlation.
Generate pdfs and return their filestoreids
tenantId for pdf
key to identify the correct PDF configuration
Generate pdf and return as binary response
tenantId for pdf
key to identify the correct PDF configuration
No Content
Get details for already generated PDF
tenantId for pdf
search based on unique id of pdf job.
search based on unique id of a document
Whether a single-object or multi-object PDF is required
The endpoint for uploading file in the system.
The file to upload.
Unique ulb identifier.
module name.
tag name.
Create messages for different locale.
RequestInfo should be used to carry meta information about the requests to the server as described in the fields below. All eGov APIs will use requestinfo as a part of the request body to carry this meta information. Some of this information will be returned back from the server as part of the ResponseInfo in the response body to ensure correlation.
Unique tenant id.
update messages for different locale.
RequestInfo should be used to carry meta information about the requests to the server as described in the fields below. All eGov APIs will use requestinfo as a part of the request body to carry this meta information. Some of this information will be returned back from the server as part of the ResponseInfo in the response body to ensure correlation.
Locale of message.
Tenant of message.
Module of message.
delete messages
RequestInfo should be used to carry meta information about the requests to the server as described in the fields below. All eGov APIs will use requestinfo as a part of the request body to carry this meta information. Some of this information will be returned back from the server as part of the ResponseInfo in the response body to ensure correlation.
tenant id of message.
Create OTP configuration. This API is internally called from the v1/_send endpoint in the user-otp service; there is no need to call it explicitly.
RequestInfo should be used to carry meta information about the requests to the server as described in the fields below. All eGov APIs will use requestinfo as a part of the request body to carry this meta information. Some of this information will be returned back from the server as part of the ResponseInfo in the response body to ensure correlation.
Validate OTP. This endpoint validates the OTP against the mobile number.
RequestInfo should be used to carry meta information about the requests to the server as described in the fields below. All eGov APIs will use requestinfo as a part of the request body to carry this meta information. Some of this information will be returned back from the server as part of the ResponseInfo in the response body to ensure correlation.
Search the mobile number and OTP using the UUID (the OTP reference number).
RequestInfo should be used to carry meta information about the requests to the server as described in the fields below. All eGov APIs will use requestinfo as a part of the request body to carry this meta information. Some of this information will be returned back from the server as part of the ResponseInfo in the response body to ensure correlation.
API to generate new id based on the id formats passed.
Contract class to receive request.
RequestInfo should be used to carry meta information about the requests to the server as described in the fields below. All eGov APIs will use requestinfo as a part of the request body to carry this meta information. Some of this information will be returned back from the server as part of the ResponseInfo in the response body to ensure correlation.
An encryption request can contain multiple EncReqObject. This will help to encrypt bulk requests which may have different tenant-id and/or method ( AES / RSA ).
EncrReqObject contains data to be encrypted and meta-data required to perform the encryption.
{"tenantId":"pb.jalandhar","type":"Important","value":{"key":"secret"}}
A request to rotate key for a given tenant
The tenantId for which the key needs to be changed.
Initiate legacy index job to index data from DB fetched by calling some api
Reindex data from one index to another
egovio/egov-accesscontrol:DIGIT-2.9-LTS-237578ce80-10
egovio/egov-enc-service-db:DIGIT-2.9-LTS-237578ce80-21
egovio/egov-filestore-db:DIGIT-2.9-LTS-237578ce80-14
egovio/egov-idgen-db:DIGIT-2.9-LTS-07f47790b8-8
egovio/egov-indexer-db:2.9.1-c781a2f714-65
egovio/egov-localization-db:DIGIT-2.9-LTS-237578ce80-10
egovio/egov-mdms-service:DIGIT-2.9-LTS-07f47790b8-14
egovio/egov-notification-mail:DIGIT-2.9-LTS-07f47790b8-5
egovio/egov-notification-sms:DIGIT-2.9-LTS-07f47790b8-7
egovio/egov-otp-db:DIGIT-2.9-LTS-07f47790b8-6
egovio/egov-persister:DIGIT-2.9-LTS-07f47790b8-8
egovio/egov-pg-service-db:DIGIT-2.9-LTS-237578ce80-11
egovio/egov-url-shortening-db:DIGIT-2.9-LTS-07f47790b8-12
egovio/egov-user-db:DIGIT-2.9-LTS-c33cfe45ab-19
egovio/egov-workflow-v2-db:2.9.1-80b58dc788-15
egovio/internal-gateway-scg:DIGIT-2.9-LTS-b4fd517ebc-6
egovio/pdf-service-db:DIGIT-2.9-LTS-5d71b59949-24
egovio/user-otp:DIGIT-2.9-LTS-07f47790b8-9
egovio/xstate-chatbot-db:DIGIT-2.9-LTS-44558a0602-3
egovio/gateway:gateway-2.9.2-a916a090e6-40
egovio/egov-location-db:DIGIT-2.9-LTS-07f47790b8-10
egovio/service-request-db:DIGIT-2.9-LTS-237578ce80-7
egovio/audit-service-db:DIGIT-2.9-LTS-07f47790b8-12
egovio/gateway-kubernetes-discovery:DIGIT-2.9-LTS-7f4ff55ce3-6
egovio/egov-hrms-db:DIGIT-2.9-LTS-4553648f56-9
egovio/mdms-v2-db:MDMS-v2-2.9LTS-837232ac67-71
egovio/boundary-service-db:v1.0.0-063968adc7-18
Forward a user-sent message to the Chatbot through a GET request.
The recipient mobile number of the message
The sender mobile number of the message
Type of message, e.g. text, image
If media_type is "text", the actual message is picked from this field
Media data if media_type is other than text
No Content
Forward a user-sent message to the Chatbot through a POST request.
The recipient mobile number of the message
The sender mobile number of the message
Type of message, e.g. text, image
If media_type is "text", the actual message is picked from this field
Media data if media_type is other than text
No Content
To create a new workflow application in the system. The API supports bulk creation with a max limit as defined in the Trade License Request. Please note that either the whole batch succeeds or fails; there is no partial batch success. To create one workflow (ProcessInstance) instance, pass an array with one workflow (ProcessInstance) object.
Following Conditions are applied -
Contract class to receive request. Array of TradeLicense items are used in case of create, whereas single TradeLicense item is used for update
RequestInfo should be used to carry meta information about the requests to the server as described in the fields below. All eGov APIs will use requestinfo as a part of the request body to carry this meta information. Some of this information will be returned back from the server as part of the ResponseInfo in the response body to ensure correlation.
Used for search result and create only
Unique id for a tenant.
unique identifier of Application
Name of the workflow configuration.
Module name to which workflow application belongs
The list of businessIds
The unique Old license number for a Application.
Boolean flag to return history of the workflow
Number of records to be returned
Starting offset for returning search response
RequestInfo should be used to carry meta information about the requests to the server as described in the fields below. All eGov APIs will use requestinfo as a part of the request body to carry this meta information. Some of this information will be returned back from the server as part of the ResponseInfo in the response body to ensure correlation.
unique API ID
API version - for HTTP based request this will be same as used in path
time in epoch
API action to be performed like _create, _update, _search (denoting POST, PUT, GET) or _oauth etc
Device ID from which the API is called
API key (API key provided to the caller in case of server to server communication)
Unique request message id from the caller
UserId of the user calling
Session/JWT/SAML token/OAuth token - the usual value that would go into the HTTP bearer token
This is acting ID token of the authenticated user on the server. Any value provided by the clients will be ignored and actual user based on authtoken will be used on the server.
Unique id for a tenant.
unique identifier of trade licence
Unique application number for a trade license application.
The list of businessIds
The unique Old license number for a Trade license.
Boolean flag to return history of the workflow
RequestInfo should be used to carry meta information about the requests to the server as described in the fields below. All eGov APIs will use requestinfo as a part of the request body to carry this meta information. Some of this information will be returned back from the server as part of the ResponseInfo in the response body to ensure correlation.
unique API ID
API version - for HTTP based request this will be same as used in path
time in epoch
API action to be performed like _create, _update, _search (denoting POST, PUT, GET) or _oauth etc
Device ID from which the API is called
API key (API key provided to the caller in case of server to server communication)
Unique request message id from the caller
UserId of the user calling
Session/JWT/SAML token/OAuth token - the usual value that would go into the HTTP bearer token
This is acting ID token of the authenticated user on the server. Any value provided by the clients will be ignored and actual user based on authtoken will be used on the server.
Unique id for a tenant.
unique identifier of Application
Name of the workflow configuration.
Module name to which workflow application belongs
The list of businessIds
The unique Old license number for a Application.
Boolean flag to return history of the workflow
Number of records to be returned
Starting offset for returning search response
RequestInfo should be used to carry meta information about the requests to the server as described in the fields below. All eGov APIs will use requestinfo as a part of the request body to carry this meta information. Some of this information will be returned back from the server as part of the ResponseInfo in the response body to ensure correlation.
unique API ID
API version - for HTTP based request this will be same as used in path
time in epoch
API action to be performed like _create, _update, _search (denoting POST, PUT, GET) or _oauth etc
Device ID from which the API is called
API key (API key provided to the caller in case of server to server communication)
Unique request message id from the caller
UserId of the user calling
Session/JWT/SAML token/OAuth token - the usual value that would go into the HTTP bearer token
This is acting ID token of the authenticated user on the server. Any value provided by the clients will be ignored and actual user based on authtoken will be used on the server.
Unique id for a tenant.
unique identifier of Application
Name of the workflow confguration.
Module name to which workflow application belongs
The list of businessIds
The unique Old license number for a Application.
Boolean flag to return history of the workflow
Number of records to be returned
Starting offset for returning search response
RequestInfo should be used to carry meta information about the requests to the server as described in the fields below. All eGov APIs will use requestinfo as a part of the request body to carry this meta information. Some of this information will be returned back from the server as part of the ResponseInfo in the response body to ensure correlation.
Unique id for a tenant.
unique identifier of Application
Name of the workflow confguration.
Module name to which workflow application belongs
The list of businessIds
The unique Old license number for a Application.
Boolean flag to return history of the workflow
Number of records to be returned
Starting offset for returning search response
RequestInfo should be used to carry meta information about the requests to the server as described in the fields below. All eGov APIs will use requestinfo as a part of the request body to carry this meta information. Some of this information will be returned back from the server as part of the ResponseInfo in the response body to ensure correlation.
unique API ID
API version - for HTTP based request this will be same as used in path
time in epoch
API action to be performed like _create, _update, _search (denoting POST, PUT, GET) or _oauth etc
Device ID from which the API is called
API key (API key provided to the caller in case of server to server communication)
Unique request message id from the caller
UserId of the user calling
//session/jwt/saml token/oauth token - the usual value that would go into HTTP bearer token
This is acting ID token of the authenticated user on the server. Any value provided by the clients will be ignored and actual user based on authtoken will be used on the server.
Name of the workflow confguration.
Unique id for a tenant.
unique identifier of Application
Module name to which workflow application belongs
The list of businessIds
The unique Old license number for a Application.
Boolean flag to return history of the workflow
Number of records to be returned
Starting offset for returning search response
RequestInfo should be used to carry meta information about the requests to the server as described in the fields below. All eGov APIs will use requestinfo as a part of the request body to carry this meta information. Some of this information will be returned back from the server as part of the ResponseInfo in the response body to ensure correlation.
unique API ID
API version - for HTTP based request this will be same as used in path
time in epoch
API action to be performed like _create, _update, _search (denoting POST, PUT, GET) or _oauth etc
Device ID from which the API is called
API key (API key provided to the caller in case of server to server communication)
Unique request message id from the caller
UserId of the user calling
//session/jwt/saml token/oauth token - the usual value that would go into HTTP bearer token
This is acting ID token of the authenticated user on the server. Any value provided by the clients will be ignored and actual user based on authtoken will be used on the server.
Creates a new workflow configuration (BusinessService) in the system. The API supports bulk creation, with the maximum batch size as defined in the BusinessService request. Note that the whole batch either succeeds or fails; there is no partial batch success. To create a single BusinessService, pass an array containing one BusinessService object.
The following conditions apply -
Contract class to receive the request. An array of TradeLicense items is used for create, whereas a single TradeLicense item is used for update.
RequestInfo should be used to carry meta information about requests to the server, as described in the fields below. All eGov APIs include RequestInfo as part of the request body to carry this meta information. Some of this information is returned by the server as part of the ResponseInfo in the response body to ensure correlation.
Used for search results and create only.
Can be used only to add a new state or action to the workflow; any existing field can be updated. Removing a state is not allowed, as applications in that state would be left in an invalid state.
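The update rule above (states may be added but never removed) can be sketched as a validation step. The dict layout with a `states` list is an assumption for illustration, not the actual BusinessService contract.

```python
# Sketch of the update rule: new states may be added, existing states
# may be modified, but no state may be removed. The "states" list
# structure is an assumed shape, not the real BusinessService schema.
def validate_update(existing, updated):
    """Reject an update that drops a state, since applications currently
    in that state would be left in an invalid state."""
    existing_states = {s["state"] for s in existing["states"]}
    updated_states = {s["state"] for s in updated["states"]}
    missing = existing_states - updated_states
    if missing:
        raise ValueError(f"Update removes states: {sorted(missing)}")
    return updated
```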
The following conditions apply -
Contract class to receive the request. An array of TradeLicense items is used for create, whereas a single TradeLicense item is used for update.
RequestInfo should be used to carry meta information about requests to the server, as described in the fields below. All eGov APIs include RequestInfo as part of the request body to carry this meta information. Some of this information is returned by the server as part of the ResponseInfo in the response body to ensure correlation.
Used for search results and create only.
BusinessService code of the business service.
Unique ID of the tenant.
RequestInfo should be used to carry meta information about requests to the server, as described in the fields below. All eGov APIs include RequestInfo as part of the request body to carry this meta information. Some of this information is returned by the server as part of the ResponseInfo in the response body to ensure correlation.
Unique API ID.
API version - for HTTP-based requests this will be the same as the version used in the path.
Request time in epoch format.
API action to be performed, such as _create, _update, _search (denoting POST, PUT, GET) or _oauth.
ID of the device from which the API is called.
API key (provided to the caller in the case of server-to-server communication).
Unique request message ID from the caller.
User ID of the calling user.
Session/JWT/SAML/OAuth token - the value that would normally go into the HTTP bearer token.
Acting ID token of the authenticated user on the server. Any value provided by the client is ignored; the actual user is derived from the auth token on the server.
ResponseInfo should be used to carry metadata about the response from the server. The apiId, ver and msgId in ResponseInfo should always correspond to the same values in the respective request's RequestInfo.
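The correlation rule above can be expressed as a small check. The flat dict layout of RequestInfo and ResponseInfo is assumed for illustration; the field names `apiId`, `ver` and `msgId` follow the descriptions in this section.

```python
# Sketch: verifying that a ResponseInfo correlates with its RequestInfo,
# i.e. that apiId, ver and msgId are echoed back unchanged. The flat
# dict shape is an assumption for illustration.
def correlates(request_info, response_info):
    """Return True if ResponseInfo echoes the RequestInfo's apiId, ver and msgId."""
    return all(request_info.get(key) == response_info.get(key)
               for key in ("apiId", "ver", "msgId"))
```

A client can run this check on every response to detect mismatched or stale replies.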
Details of the payment object.
Wrapper for RequestInfo.
RequestInfo should be used to carry meta information about requests to the server, as described in the fields below. All eGov APIs include RequestInfo as part of the request body to carry this meta information. Some of this information is returned by the server as part of the ResponseInfo in the response body to ensure correlation.
params
Wrapper for RequestInfo.
RequestInfo should be used to carry meta information about requests to the server, as described in the fields below. All eGov APIs include RequestInfo as part of the request body to carry this meta information. Some of this information is returned by the server as part of the ResponseInfo in the response body to ensure correlation.
RequestInfo should be used to carry meta information about requests to the server, as described in the fields below. All eGov APIs include RequestInfo as part of the request body to carry this meta information. Some of this information is returned by the server as part of the ResponseInfo in the response body to ensure correlation.