The user service is responsible for user data management and provides functionality to log in and log out of the DIGIT system.
Before you proceed with the configuration, make sure the following pre-requisites are met -
Java 8
Kafka server is up and running
Encryption and MDMS services are running
PSQL server is running and the database is created
Redis is running
Store, update and search user data
Provide authentication
Provide login and logout functionality for the MgramSeva platform
Store user PII in encrypted form
Set up the latest versions of egov-enc-service and egov-mdms-service
Deploy the latest version of the egov-user service
Add role-action mappings for the APIs
The following properties in the user service's application.properties file are configurable.
User data management and login/logout functionality for the DIGIT system using OTP and password.
The service provides the following functionality to citizen and employee users:
Employee:
User registration
Search user
Update user details
Forgot password
Change password
User role mapping (single ULB to multiple roles)
Enable employees to log in to the DIGIT system using a password.
Citizen:
Create user
Update user
Search user
User registration using OTP
OTP based login
To integrate, the host of egov-user should be overwritten in the helm chart.
Use the /citizen/_create endpoint for creating users in the system. This endpoint requires the user to validate their mobile number using OTP: the first OTP is sent to the user's mobile number, and that OTP is then sent as otpReference in the request body, as sketched below.
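As a rough illustration of this flow (the host placeholder, RequestInfo contents, and User field names below are assumptions for illustration, not the authoritative contract):

```python
import requests

HOST = "https://<egov-user-host>"  # hypothetical host; overwrite with the egov-user host from the helm chart

# An OTP has already been sent to the user's mobile number; the user now
# submits it as otpReference along with the registration details.
payload = {
    "RequestInfo": {},              # illustrative; real contents vary per deployment
    "User": {                       # field names are assumptions for illustration
        "name": "Jane Doe",
        "mobileNumber": "9999999999",
        "otpReference": "123456",   # OTP received on the mobile number
        "tenantId": "pb.amritsar",
    },
}
resp = requests.post(f"{HOST}/citizen/_create", json=payload, timeout=10)
print(resp.status_code, resp.json())
```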
Use the /v1/_search and /_search endpoints to search for users in the system based on various search parameters.
Use /profile/_update for updating the user profile. The user is validated (by OTP-based validation or password validation) when this API is called.
/users/_createnovalidate and /users/_updatenovalidate are endpoints for creating and updating user data without any validation (no OTP or password required). They should be used strictly for creating/updating users internally and should not be exposed outside.
Forgot password: in case the user forgets the password, it can be reset by first calling /user-otp/v1/_send, which generates and sends an OTP to the employee's mobile number. The password can then be updated using this OTP by calling /password/nologin/_update, sending the new password along with the OTP, as sketched below.
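A minimal sketch of the two-step reset, assuming illustrative field names (the endpoint paths come from this page; the payload shapes are assumptions to verify against the Swagger contract):

```python
import requests

HOST = "https://<egov-user-host>"  # hypothetical host

# Step 1: generate and send an OTP to the employee's mobile number.
otp_payload = {
    "RequestInfo": {},                  # illustrative
    "otp": {                            # field names are assumptions
        "mobileNumber": "9999999999",
        "tenantId": "pb",
        "type": "passwordreset",
        "userType": "EMPLOYEE",
    },
}
requests.post(f"{HOST}/user-otp/v1/_send", json=otp_payload, timeout=10)

# Step 2: set a new password using the OTP received above.
reset_payload = {
    "RequestInfo": {},
    "userName": "9999999999",   # assumed: login identifier
    "newPassword": "N3wP@ssword",
    "otpReference": "123456",   # the OTP from step 1
    "tenantId": "pb",
    "type": "EMPLOYEE",
}
requests.post(f"{HOST}/password/nologin/_update", json=reset_payload, timeout=10)
```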
Use /password/_update to update an existing password while logged in. Both the old and the new password have to be sent in the request body. Details of the API can be found in the attached Swagger documentation.
Use /user/oauth/token for generating tokens, /_logout for logout, and /_details for getting user information from a token.
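A hedged sketch of token generation (the egov-user-client basic-auth client with an empty secret and the form fields shown are common DIGIT defaults, but they are assumptions to verify against your deployment):

```python
import requests

HOST = "https://<egov-user-host>"  # hypothetical host

resp = requests.post(
    f"{HOST}/user/oauth/token",
    auth=("egov-user-client", ""),   # assumed OAuth client
    data={                           # form-encoded password grant
        "grant_type": "password",
        "username": "9999999999",
        "password": "secret",
        "scope": "read",
        "tenantId": "pb.amritsar",
        "userType": "EMPLOYEE",
    },
    timeout=10,
)
tokens = resp.json()
access_token = tokens["access_token"]
refresh_token = tokens["refresh_token"]
```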
Multi-tenant user: the multi-tenant user functionality allows a user to perform actions across multiple ULBs. For example, an employee belonging to Amritsar can perform the role of, say, Trade License Approver for Jalandhar by being assigned a tenant-level role with tenantId pb.jalandhar.
If an employee has a role with a state-level tenantId, he can perform actions corresponding to that role across all tenants.
Refresh token: whenever /user/oauth/token is called to generate the access_token, one more token, called the refresh_token, is generated along with it. The refresh token is used to generate a new access_token whenever the existing one expires. As long as the refresh token is valid, the user does not have to log in again even if the access_token expires, since a new one is generated from the refresh_token, as sketched below. The validity time of the refresh token is configurable via the property refresh.token.validity.in.minutes.
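A sketch of the refresh exchange, following the standard OAuth2 refresh flow (the client credentials are the same deployment-specific assumption as in the login sketch above):

```python
import requests

HOST = "https://<egov-user-host>"             # hypothetical host
refresh_token = "<refresh-token-from-login>"  # obtained at login time

# Exchange the refresh_token for a new access_token once the old one expires.
resp = requests.post(
    f"{HOST}/user/oauth/token",
    auth=("egov-user-client", ""),            # assumed OAuth client
    data={"grant_type": "refresh_token", "refresh_token": refresh_token},
    timeout=10,
)
new_access_token = resp.json()["access_token"]
```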
(Note: all the APIs are in the same Postman collection, therefore the same link is added in each row.)
| Property | Value | Remarks |
| --- | --- | --- |
| egov.user.search.default.size | 10 | Default search record number limit |
| citizen.login.password.otp.enabled | true | Whether citizen login is OTP based |
| employee.login.password.otp.enabled | false | Whether employee login is OTP based |
| citizen.login.password.otp.fixed.value | 123456 | Fixed OTP for citizens |
| citizen.login.password.otp.fixed.enabled | false | Allow fixed OTP for citizens |
| otp.validation.register.mandatory | true | Whether OTP is compulsory for registration |
| access.token.validity.in.minutes | 10080 | Validity time of the access token |
| refresh.token.validity.in.minutes | 20160 | Validity time of the refresh token |
| default.password.expiry.in.days | 90 | Expiry period of a password |
| account.unlock.cool.down.period.minutes | 60 | Account unlock cool-down time |
| max.invalid.login.attempts.period.minutes | 30 | Window size for counting attempts before lock |
| max.invalid.login.attempts | 5 | Max failed login attempts before the account is locked |
| egov.state.level.tenant.id | pb | State-level tenantId |
| Title | Link |
| --- | --- |
| User Data encryption promotion details | |
| Encryption Service | |

API endpoints (links are available in the Postman collection):
- /citizen/_create
- /users/_createnovalidate
- /_search
- /v1/_search
- /_details
- /users/_updatenovalidate
- /profile/_update
- /password/_update
- /password/nologin/_update
- /_logout
- /user/oauth/token
Water service is a DIGIT application that gives municipalities and citizens the flexibility to manage water service requirements, such as applying for a water connection or searching for water connections. The application goes through various steps as defined by the states, passing through different users who verify and inspect the application details before moving it to the next stage. Based on the state, citizens get notifications (SMS and in-app). Citizens can pay the application fees, or employees can collect the fee for the application.
Before you proceed with the documentation, make sure the following pre-requisites are met -
Java 8
Kafka server is up and running
egov-persister service is running and has a water service persister config path added to it
PSQL server is running and a database is created to store water connection/application data
Knowledge of the eGov-mdms, eGov-persister, eGov-idgen, eGov-sms, eGov-email, eGov-user, eGov-localization, and eGov-workflow-service services will be helpful.
Add old water connection to the system with/without arrears
Create a new Water Connection
Search for water connections
Notification based on the application state
Table UML Diagram
Mdms configuration
master-config.json for water service
ConnectionType
Two connection types are supported: Metered and Non-metered.
CheckList
CheckList is used to define the Q & A for the feedback form and its validation.
Category
Predefined list of categories allowed.
SubCategory
Predefined list of subcategories allowed.
Persister configuration
Actions
Role Action Mapping
Roles available
Workflow business service config:
Create the businessService (workflow configuration) using the /businessservice/_create endpoint. The following is the product configuration for the water service:
Indexer config for water service:
Provide the absolute path of the checked-in file to DevOps to add it to the file-read path of egov-indexer. The file will be added to egov-indexer's environment manifest file so it is read at application start-up.
Run the egov-indexer app. Since it is a consumer, it starts listening to the configured topics and indexes the data.
Notification will be sent to the property owners and connection holders based on different application states.
Connection holders can be added to the water connection as owners of the connection. Either fill in the connection holders' details or simply make the property owner the connection holder.
The connection holder gets notifications based on the state of the application. Connection holder data is also pushed to the user service.
Road-cutting details for multiple roads can be added to the water connection. For each road that undergoes cutting, the road type and the road-cutting area have to be filled in. Based on this information, the one-time application fee estimate is calculated.
Add mdms configs required for water connection registration and restart mdms service.
Deploy the latest version of ws-services service.
Add water-service and water-services-meter persister yaml path in persister configuration and restart persister service.
Add role-action mappings for the APIs.
Create the businessService (workflow configuration) for the new water connection and modify water connection processes.
Add ws-service indexer yaml path in indexer service configuration and restart indexer service.
This ws-service module is used to manage water service connections against a property in the system.
Provide backend support for the different water connection registration processes.
Mseva and SMS notifications on application status changes.
Elasticsearch indexing for creating visualizations and dashboards.
Supports configurable workflows.
To integrate, the host of the ws-service module should be overwritten in the helm chart.
/ws-services/wc/_create should be added as the create endpoint for creating a water application/connection in the system (see the sketch below).
/ws-services/wc/_search should be added as the search endpoint. This method handles all requests to search existing records based on different search criteria.
/ws-services/wc/_update should be added as the update endpoint. This method is used to update fields in existing records or to update the status of the application based on workflow.
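A minimal sketch of the create and search calls (the host placeholder and the WaterConnection field names are assumptions for illustration; consult the module's API contract for the real model):

```python
import requests

HOST = "https://<ws-services-host>"  # hypothetical host

# Create a water connection application.
create_payload = {
    "RequestInfo": {},                   # illustrative
    "WaterConnection": {                 # field names are assumptions
        "tenantId": "pb.amritsar",
        "propertyId": "PT-107-001",      # assumed property reference
        "connectionType": "Metered",
    },
}
requests.post(f"{HOST}/ws-services/wc/_create", json=create_payload, timeout=10)

# Search existing connections; criteria passed as query params (assumed).
requests.post(
    f"{HOST}/ws-services/wc/_search",
    params={"tenantId": "pb.amritsar", "connectionNumber": "WS/001"},
    json={"RequestInfo": {}},
    timeout=10,
)
```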
(Note: all the APIs are in the same Postman collection, therefore the same link is added in each row.)
The main objective of the billing module is to serve bills for all revenue business services. To serve a bill, the billing service requires a demand. Demands are prepared by revenue modules and stored by the billing service, based on which it generates the bill.
Prior Knowledge of Java/J2EE.
Prior Knowledge of Spring Boot.
Prior knowledge of Kafka.
Prior Knowledge of REST APIs and related concepts like path parameters, headers, JSON, etc.
Prior knowledge of the demand-based systems.
Following services should be up and running:
user
MDMS
Id-Gen
URL-Shortening
notification-sms
eGov billing service creates and maintains demands.
Generates bills based on demands.
Pushes created and updated bills/demands to Kafka on specified topics.
Updates demands when the collection service records a payment.
Deploy the latest image of the billing service available.
In the MDMS data configuration, the following master data is needed for the billing service to function:
Business Service JSON
TAX-Head JSON
Tax-Period JSON
Billing service can be integrated with any organization or system that wants a demand-based payment system.
Easy creation and a simple process for generating bills from demands.
The amalgamation of bills period-wise for a single entity like Water connection.
Amendment of bills in case of legal requirements.
Customers can create a demand using the /demand/_create endpoint.
Organizations or systems can search demands using the /demand/_search endpoint.
Once the demand is raised, the system can call the /demand/_update endpoint to update the demand as needed.
Bills can be generated using /bill/_fetchbill, a self-managing API that generates a new bill only when the old one expires.
Bills can be searched using /bill/_search.
The amendment facility can be used in case of a legal issue to add values to existing demands using /amendment/_create; /amendment/_update can be used to cancel created amendments or update their workflow if configured. A request sketch follows.
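A hedged sketch of creating a demand and fetching its bill (the host placeholder, business service code, and demand field names are assumptions based on the demand model described above, not the authoritative contract):

```python
import requests

HOST = "https://<billing-service-host>"  # hypothetical host

# Create a demand; demandDetails carry tax-head-wise amounts.
demand_payload = {
    "RequestInfo": {},                        # illustrative
    "Demands": [{                             # field names are assumptions
        "tenantId": "pb.amritsar",
        "consumerCode": "WS/001",             # e.g. a water connection number
        "businessService": "WS",              # assumed business service code
        "taxPeriodFrom": 1554057000000,       # billing period, epoch millis
        "taxPeriodTo": 1569868199000,
        "demandDetails": [
            {"taxHeadMasterCode": "WATER_CHARGE", "taxAmount": 120},
        ],
    }],
}
requests.post(f"{HOST}/demand/_create", json=demand_payload, timeout=10)

# Fetch the bill; a new bill is generated only when the old one has expired.
requests.post(
    f"{HOST}/bill/_fetchbill",
    params={"tenantId": "pb.amritsar", "consumerCode": "WS/001",
            "businessService": "WS"},
    json={"RequestInfo": {}},
    timeout=10,
)
```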
Interaction Diagram V1.1
Adjusting the receivable amount against the individual tax heads.
Default order-based apportioning (adjust the received amount against each tax head in apportioning order). V1.1
Proportionate apportioning (adjust the total receivable across all tax heads equally).
Order- and percentage-based apportioning (adjust the total receivable based on the order and the percentage defined for each tax head).
The basic principle of apportioning: if the full amount is paid for any bill, each individual tax head should be nullified by its corresponding adjusted amount, as the sketch below illustrates.
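To make the principle concrete, here is a small illustrative sketch (not the service's actual implementation) of default order-based apportioning, using the tax heads from the example tables below:

```python
# Adjust the received amount against tax heads in ascending apportioning
# order; negative heads (exemptions/rebates) effectively increase the
# amount available for the positive heads.
def apportion(tax_heads, paid_amount):
    """tax_heads: list of dicts with 'code', 'amount', 'order'."""
    remaining = paid_amount
    adjustments = {}
    for head in sorted(tax_heads, key=lambda h: h["order"]):
        if head["amount"] < 0:
            # Exemption/rebate: add it back to the payable pool.
            adjustments[head["code"]] = head["amount"]
            remaining -= head["amount"]          # subtracting a negative adds
        else:
            adj = min(remaining, head["amount"])
            adjustments[head["code"]] = adj
            remaining -= adj
    return adjustments, remaining

heads = [
    {"code": "EXM", "amount": -250, "order": 1},
    {"code": "REBATE", "amount": -250, "order": 2},
    {"code": "CESS", "amount": 500, "order": 3},
    {"code": "INTEREST", "amount": 500, "order": 4},
    {"code": "PENALTY", "amount": 500, "order": 5},
    {"code": "WS_CHARGE", "amount": 1000, "order": 6},
]
print(apportion(heads, 2000))  # full payment nullifies every tax head
```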
Example: Case 1: when there are no arrears, all tax heads belong to the current purpose:
Case 2: apportioning with two years of arrears. If the current financial year is 2014-15, below are the demands.
If no payment is made and we generate demand in 2015-16, the demand structure will be as follows:
The user-otp service is used to generate OTPs for user login, user registration, and user password change.
Prior Knowledge of Java/J2EE.
Prior Knowledge of Spring Boot.
Prior knowledge of Kafka.
Prior Knowledge of REST APIs and related concepts like path parameters, headers, JSON, etc.
Following services should be up and running:
user
MDMS
Id-Gen
URL-Shortening
notification-sms
The user-otp service validates the user details and the request type, and sends an OTP for the particular action.
Deploy the latest image of the user-otp service available.
The user OTP service can be integrated with any organization or system that wants OTP-based validation for user login and registration.
Simple API calls to generate an OTP to validate a mobile number for registration, login, and password reset.
An OTP can be generated by calling /user-otp/v1/_send, as sketched below.
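A minimal sketch of the send call (the host placeholder and the otp payload fields, including the assumed type values register/login/passwordreset, are illustrations to verify against the service contract):

```python
import requests

HOST = "https://<user-otp-host>"  # hypothetical host

# Ask the user-otp service to generate and send an OTP; 'type' tells the
# service which action the OTP is for.
payload = {
    "RequestInfo": {},              # illustrative
    "otp": {                        # field names are assumptions
        "mobileNumber": "9999999999",
        "tenantId": "pb",
        "type": "register",         # assumed values: register / login / passwordreset
        "userType": "CITIZEN",
    },
}
resp = requests.post(f"{HOST}/user-otp/v1/_send", json=payload, timeout=10)
print(resp.status_code)
```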
The Water Calculator Service is used for creating meter readings, searching meter readings, updating existing meter readings, calculating water charges, generating demand, sending SMS and email notifications to ULB officials on demand generation, and estimating the one-time water charge, which involves costs like the road-cutting charge, form fee, scrutiny fee, etc.
Before you proceed with the documentation, make sure the following pre-requisites are met -
Java 8
Kafka server is up and running
egov-persister service is running and has the water service persister config path added to it
PSQL server is running and a database is created to store water connection/application data
Following services should be up and running:
egov-persister
egov-mdms
ws-services
billing-service
Calculate water charges and taxes based on billing slab
Calculate meter reading charge for water connection
Generate demand
Scheduler for generating demand (for non-metered connections)
Deploy the latest version of ws-service and ws-calculator
Add the water-persist.yml & water-meter.yml files in the config folder in Git and add that path in the persister. (The file path is to be added in the environment yaml file under the param persist-yml-path.)
Master Config
Criteria
connection type
building type
calculation attribute
property usage type
If all criteria match for a water connection, this slab is used for the calculation.
The water charge is based on the billing slab: the water application charge is based on the slab, and the tax is based on the master configuration.
Actions
Role Action Mapping
The charge for a given connection for a given billing cycle is defined/identified by the system with the help of the CalculationAttribute MDMS and the WCBillingSlab MDMS.
CalculationAttribute helps identify the type of calculation for the given connectionType:
Metered connection: water consumption is the attribute used to calculate the charge for a billing cycle, i.e. based on the units consumed in the billing cycle for a given connection, the actual charge is identified from the WCBillingSlab MDMS using the propertyType, the calculationAttribute derived for the connection, and the connectionType.
Non-metered connection: Flat is the attribute used to calculate the charge for a billing cycle, i.e. for a non-metered connection there is a flat charge for the billing cycle. The amount is derived from the WCBillingSlab MDMS using the propertyType, the calculationAttribute derived for the connection, and the connectionType.
Once the water connection data is sent to the calculator, its tax estimates are calculated. Using these tax head estimates, demand details are created: for every tax head, the demand generation function creates a corresponding demand detail.
Whenever the _calculate API is called, demand is first searched based on the connection number and the demand from/to period. If demand already exists, the same demand is updated; otherwise new demand is generated with the consumer code set to the connection number and the from/to period equal to the financial year start and end.
In case of an update, if a tax head estimate changes, the difference in amount for that tax head is added as a new demand detail. For example, if the initial demand has one demand detail with WATER_CHARGE equal to 120 and the WATER_CHARGE then increases to 150, one more demand detail of 30 is added to account for the increased amount, as sketched below. The demand detail will be updated to:
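A small sketch of the update rule just described: the difference is appended as a new demand detail rather than mutating the old one (the dict keys mirror the demand model, as an illustration):

```python
# Existing billed details and the freshly calculated estimate.
existing_details = [{"taxHeadMasterCode": "WATER_CHARGE", "taxAmount": 120}]
new_estimate = {"WATER_CHARGE": 150}

for code, estimated in new_estimate.items():
    # Sum what has already been billed for this tax head.
    billed = sum(d["taxAmount"] for d in existing_details
                 if d["taxHeadMasterCode"] == code)
    diff = estimated - billed
    if diff != 0:
        # Append the difference as a new demand detail.
        existing_details.append({"taxHeadMasterCode": code, "taxAmount": diff})

print(existing_details)
# [{'taxHeadMasterCode': 'WATER_CHARGE', 'taxAmount': 120},
#  {'taxHeadMasterCode': 'WATER_CHARGE', 'taxAmount': 30}]
```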
RoundOff is bill-based, i.e. every time a bill is generated, the round-off is adjusted so that the payable amount is a whole number. An individual WS_ROUNDOFF in a demand detail can be greater than 0.5, but the sum of all WS_ROUNDOFF amounts will always be less than 0.5.
Description
For generating demand for non-metered connections, there is a feature for generating the demand in batches. The scheduler is responsible for generating the demand tenant-wise.
The scheduler can be triggered through the scheduler API, through a cron job, or through a kubectl config that hits the scheduler based on the configuration.
Once the scheduler is triggered, the list of tenants (cities) present in the database is fetched.
The tenants are then picked one by one and demand is generated for each tenant.
The consumer codes for the tenant are loaded and the calculation criteria are pushed to Kafka. Calculation criteria contain minimal information (large data is not pushed to Kafka): a consumer code and one boolean variable.
After pushing the data to Kafka, the records are consumed based on the batch configuration, e.g. with a batch size of 50, 50 calculation criteria are consumed at a time.
After consuming the records (calculation criteria), the batch is processed to generate the demand. If the batch succeeds, the consumer codes that were processed are logged.
If some records fail in a batch, the batch is pushed to a dead-letter batch topic, from which the records are processed one by one.
If a record succeeds, its consumer code is logged; if it fails, the data is pushed to a dead-letter single topic.
The dead-letter single topic contains information about failed records in Kafka.
Use cases
What happens if the same job is triggered multiple times?
If the same job triggers multiple times, the processing repeats as described above, but at the demand level the demand is checked by consumer code and billing period: if the demand already exists it is updated, otherwise it is created.
Is success or failure status maintained anywhere?
Currently, the status of failed records is maintained in Kafka.
Configuration
The batch size for the Kafka consumers needs to be configured; it determines how much data is processed at a time.
ws-calculator will be integrated with ws-service. ws-services internally invoke the ws-calculator service to calculate and generate demand for the charges.
The WS calculator application calculates the water application one-time fee and meter reading charges based on the different billing slabs, which is why the calculation and demand generation logic is separated from the WS service. If the calculation logic needs to change in the future, the changes can be made for each implementation without modifying the WS service.
Once the water connection is activated, for a metered connection an employee can add meter reading details using the /ws-calculator/meterConnection/_create API (sketched below), which in turn generates the demand. For non-metered connections, the scheduler APIs need to be called periodically to generate the demand.
For a metered connection, the /meterConnection/_search API is used to get the previous meter reading.
To generate the demand for a metered or non-metered water connection, the /waterCalculator/_calculate API is used.
Users can pay a partial/full/advance amount against a metered or non-metered connection bill. In these cases, the billing service calls back the /waterCalculator/_updateDemand API to update the details of the generated demand.
The /waterCalculator/_jobscheduler API is used to generate demand for non-metered connections; it can be called periodically.
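A hedged sketch of recording a meter reading (the host placeholder and the meterReadings field names below are assumptions for illustration; the endpoint path comes from this page):

```python
import requests

HOST = "https://<ws-calculator-host>"  # hypothetical host

# An employee records a meter reading for an activated metered connection;
# this in turn triggers demand generation.
payload = {
    "RequestInfo": {},                        # illustrative
    "meterReadings": {                        # field names are assumptions
        "connectionNo": "WS/001",
        "billingPeriod": "Apr - 2021",        # assumed format
        "meterStatus": "Working",
        "lastReading": 100,
        "currentReading": 125,
        "currentReadingDate": 1619808000000,  # epoch millis
    },
}
resp = requests.post(
    f"{HOST}/ws-calculator/meterConnection/_create",
    params={"tenantId": "pb.amritsar"},
    json=payload,
    timeout=10,
)
print(resp.status_code)
```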
(Note: all the APIs are in the same Postman collection, therefore the same link is added in each row.)
The purpose of the mGramSeva IFIX adapter service is to push the demand, bill, and payment events to IFIX from the mGramSeva.
mGramSeva IFIX adapter service is a wrapper for pushing data from the mGramSeva to IFIX. When demand or payment is generated in the mGramSeva system, mGramSeva IFIX adapter service listens to those topics and it calls the IFIX reference adapter service push API to publish the data to IFIX.
Before you proceed with the configuration, make sure the following pre-requisites are met -
Java 8
Kafka
Spring boot
Pushing demand, bill and payment events to IFIX adapter
The following topics interact with the mGramSeva IFIX adapter service. When demand is created for ws-services, it sends a demand event to IFIX; if it is an expense demand, it sends a bill event. If it is a ws-services payment, it sends a receipt event; if it is an expense payment, it sends a payment event.
mgramseva-create-demand
mgramseva-update-demand
egov.collection.payment-create
Please deploy the following build.
ifix-adapter:v1.0.0-4e24064-14
The mGramSeva IFIX adapter is integrated with the IFIX reference adapter service and internally invokes it to push the data.
The mGramSeva IFIX adapter application calls IFIX-reference-adapter/events/v1/_push to push the demand, bill, and payment events from mGramSeva to IFIX.
The eChallan system enables employees to generate challans for ad-hoc services so that the payment can be recorded in the system along with service-specific details.
Before you proceed with the documentation, make sure the following pre-requisites are met -
Java 8
Kafka server is up and running
egov-persister service is running and has workflow persister config path added in it
PSQL server is running and a database is created to store workflow configuration and data
Allow employees to capture service details for miscellaneous services and mark them as paid.
Allow employees to update/cancel challan.
MDMS Configuration Actions & Role Action Mapping
Actions
Role Action Mapping
Roles available
Add mdms configs required for eChallan Service and calculator and restart mdms service.
Deploy the latest version of eChallan Service and calculator.
Add the eChallan service persister yaml path in the persister configuration and restart the persister service.
Add Role-Action mapping for API’s.
Add pdf configuration file for challan and bill.
The eChallan service is used to generate e-challans/bills for all miscellaneous/ad-hoc services.
Can perform service-specific business logic without impacting the other module.
Provides the capability of capturing the unique identifier of the entity for which the challan is generated.
In the future, if we want to expose the application to citizens then it can be done easily.
The workflow or Service-specific workflow can be enabled at the challan service level at any time without changing the design.
Allow employees to update/cancel challan
To integrate, the host of the echallan-services module should be overwritten in the helm chart.
/echallan-services/eChallan/v1/_create should be added as the create endpoint for creating eChallans in the system (see the sketch below).
/echallan-services/eChallan/v1/_search should be added as the search endpoint. This method handles all requests to search existing records based on different search criteria.
/echallan-services/eChallan/v1/_update should be added as the update endpoint. This method is used to update fields in existing records or to update the status of an application based on workflow.
/echallan-services/eChallan/v1/_expenseDashboard is added in echallan-service to show expense data in matrix format.
/echallan-services/eChallan/v1/_chalanCollectionData is added to get the main monthly dashboard data for expenses.
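A minimal sketch of creating a challan (the host placeholder, the ad-hoc business service code, and the challan field names are assumptions for illustration; see the Swagger contract for the real model):

```python
import requests

HOST = "https://<echallan-services-host>"  # hypothetical host

# Create a challan for an ad-hoc/miscellaneous service.
payload = {
    "RequestInfo": {},                         # illustrative
    "challan": {                               # field names are assumptions
        "tenantId": "pb.amritsar",
        "businessService": "StreetLightFee",   # assumed ad-hoc service code
        "taxPeriodFrom": 1617215400000,        # epoch millis
        "taxPeriodTo": 1619807399000,
        "amount": [{"taxHeadCode": "STREETLIGHT_FEE", "amount": 300}],
        "citizen": {"name": "Jane Doe", "mobileNumber": "9999999999"},
    },
}
resp = requests.post(f"{HOST}/echallan-services/eChallan/v1/_create",
                     json=payload, timeout=10)
print(resp.status_code)
```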
(Note: all the APIs are in the same Postman collection, therefore the same link is added in each row.)
Schedulers are designed to run a particular service at a scheduled time without being triggered manually. An application can have multiple schedulers. Schedulers consider the GMT time format only.
The python script (name) reads the mdms-read-cronjob json from the MDMS service, using the CRONJOB user to obtain a token for accessing the MDMS service.
It identifies the APIs configured in this MDMS record by matching the argument passed while invoking the script.
With the identified configs from MDMS, the script calls the respective API configured there, as the sketch below outlines.
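A minimal sketch of that flow, under stated assumptions: the MDMS module name "common-masters", the hosts, the OAuth client, and the credentials are all illustrative placeholders, and the real cronJobAPIConfig.py may differ. Invocation mirrors the cronjob args, e.g. `python3 cronJobAPIConfig.py failedbulkdemand`.

```python
import sys
import requests

MDMS_HOST = "https://<egov-mdms-host>"   # hypothetical hosts
AUTH_HOST = "https://<egov-user-host>"

job_name = sys.argv[1]  # e.g. "failedbulkdemand", passed as the cronjob arg

# 1. Obtain a token for the CRONJOB system user (client credentials are
#    deployment-specific assumptions).
token = requests.post(
    f"{AUTH_HOST}/user/oauth/token",
    auth=("egov-user-client", ""),
    data={"grant_type": "password", "username": "CRONJOB",
          "password": "<secret>", "scope": "read",
          "tenantId": "pb", "userType": "SYSTEM"},
    timeout=10,
).json()["access_token"]

# 2. Read the mdms-read-cronjob master and pick the config whose jobName
#    matches the argument ("common-masters" is an assumed module name).
mdms = requests.post(
    f"{MDMS_HOST}/egov-mdms-service/v1/_search",
    params={"tenantId": "pb"},
    json={"RequestInfo": {"authToken": token},
          "MdmsCriteria": {"tenantId": "pb", "moduleDetails": [
              {"moduleName": "common-masters",
               "masterDetails": [{"name": "mdms-read-cronjob"}]}]}},
    timeout=10,
).json()

# 3. Call the API configured for the matching, active job.
for job in mdms["MdmsRes"]["common-masters"]["mdms-read-cronjob"]:
    if job["jobName"] == job_name and job["active"] == "true":
        requests.request(job["method"], job["url"],
                         json=job.get("payload"), headers=job.get("header"))
```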
A total of 7 schedulers are available in mGramSeva:
- _schedulerTodaysCollection: runs daily to send the day's collection amount to the collection employee.
- _jobscheduler/true: sends a notification to the ULB employee when bulk demand auto-generation fails.
- _schedulermarkexpensebill: marks paid expenses as paid, once every fortnight.
- _schedulernewexpenditure: sends a notification once every fortnight about the number of expenditures created.
- _schedulermonthsummary: sends the monthly summary details to the ULB employee.
- _schedulerpendingcollection: sends the total pending amount details to the respective ULB employee once every fortnight.
- _jobscheduler/false: generates the bulk demand automatically once every month.
Since the 7 schedulers in mGramSeva run in 4 different time slots, all of them have to be configured to run the same python script with different arguments, which you can see in the file under command -> args.
The time at which a scheduler runs is configured under the cron -> schedule option.
Example of failedBulkDemand scheduler.
You can observer
command->args value is failedbulkdemand ( through which python script understand to invoke only api configured in mdms-read-cronjob mdms json file with the name as “failedbulkdemand”
cron->schedule value is “ 30 3 5 * *” which define the time to kick this scheduler i.e at 3.30 on 5th day of every month. As the crontab follows GMT timezone converting this time to IST this jobs run on 9am of on 5th day of every month helps to define the pattern for the schedule cron.
Note: in DevOps, for every configuration the app name changes according to the name of the cron job file, the schedule changes according to the time set, and the argument is the job name given in the MDMS configuration.
```yaml
labels:
  app: monthly-cronjob        # changes based on the cronjob scheduler in use
  group: mdms-read-cronjob    # same for all, since the same python script is used
cron:
  schedule: "30 3 4 * *"      # depends on the time the scheduler needs to run
image:
  repository: api-cronjob
  tag: v1
command:
  - python3
  - cronJobAPIConfig.py
args:
  - monthly                   # job name; differs per scheduler type
env: |
  - name: JOB_NAME
    valueFrom:
      fieldRef:
        fieldPath:
resources: |
  requests: {}
```
The remaining fields will be the same for all the schedulers.
- Monthly: runs on the 4th of every month at 9 AM (as per the scheduled time) and sends the notification to the ULB employee or consumer.
- Fortnightevening: runs on the 1st and 15th of every month at 6 PM to send the respective notification to the consumer.
- Failedbulkdemand: when bulk demand generation fails, this scheduler runs and messages ULB employees to generate demand manually.
- Dailyevening: runs daily and sends notifications to the collection operator on a daily basis.
MDMS object details and configuration:

```json
{
  "jobName": "monthly",                      // changes based on the job name
  "active": "true",                          // when true the scheduler runs automatically; when false it does not run
  "method": "POST",
  "url": "",                                 // the respective service URL to call per the scheduler
  "payload": {
    "RequestInfo": "{DEFAULT_REQUESTINFO}"   // common to all schedulers, to send the request info
  },
  "header": {
    "Content-Type": "application/json"       // common property for all the schedulers
  }
}
```
A user has to be created with CRONJOB as the name, type SYSTEM, and the roles SYSTEM and EMPLOYEE. Here is the sample curl to create the user.
When you build the cronjob you get a build id such as api-cronjob:develop-c0aa08a-2. Take only the id (develop-c0aa08a-2) rather than the complete name. This id is used in the respective yaml files, which are then deployed to the required environment to test the cron job. For example:
mdms-read-cronjob:develop-c0aa08a-2
failedbulkdemand:develop-c0aa08a-2
fortnightevening:develop-c0aa08a-2
monthly:develop-c0aa08a-2
Note: develop-c0aa08a-2 is the common build id for all the files in this example.
How to run the cronjob manually
Delete existing cron jobs if they already exist with the same name:
```
kubectl delete cronjob mdms-read-cronjob -n mgramseva
```
Deploy the builds related to the cronjob schedulers in the QA environment:
mdms-read-cronjob:develop-c0aa08a-2, failedbulkdemand:develop-c0aa08a-2, fortnightevening:develop-c0aa08a-2, monthly:develop-c0aa08a-2
Steps to test the cron job scheduler:
1. Check the list of cron jobs:
```
kubectl get cronjob -n mgramseva
```
2. Create the jobs manually to test the messages. A message is received for the respective scheduler each time it is run. Increment the suffix to test again (after failedbulkdemand-manually-1, use failedbulkdemand-manually-2):
```
kubectl create job --from=cronjob/failedbulkdemand failedbulkdemand-manually-1 -n mgramseva
kubectl create job --from=cronjob/fortnightevening fortnightevening-manually-1 -n mgramseva
kubectl create job --from=cronjob/mdms-read-cronjob mdms-read-cronjob-manually-1 -n mgramseva
kubectl create job --from=cronjob/monthly monthly-manually-1 -n mgramseva
```
3. Check the list of jobs:
```
kubectl get job -n mgramseva
```
4. Check the cronjob image:
```
kubectl describe cronjob mdms-read-cronjob -n mgramseva
```
5. Delete a specific job:
```
kubectl delete jobs mdms-read-cronjob-manually-1 -n mgramseva
```
Objective
The purpose of the mGramSeva rollout dashboard scripts is to aggregate data points from the mGramSeva DB and services for the rollout dashboard in Metabase.
The mGramSeva rollout dashboard is a python script that pushes data from mGramSeva to a specific table in the DB on a daily basis; this table can be loaded into Metabase, with a graphical dashboard built on top of it in Metabase.
Before you proceed with the configuration, make sure the following pre-requisites are met -
Python 3.9
mGramSeva DB
mGramSeva user credentials with access to the MDMS service API
mGramSeva mdms service access
Collect data for certain data points and insert it into the rollout dashboard table in the DB. User story with details of the data points:
Please deploy the following build.
rollout-dashboard-cronjob:develop-2a8d6a44-3
The mGramSeva rollout dashboard is not directly integrated with any services, except that the scripts fetch data from the MDMS service and the mGramSeva DB.
Please follow the steps below.
The python script inserts the data into the table roll_out_dashboard in the mGramSeva DB. On every run, it cleans the old data and creates new data, as sketched below.
This table has to be loaded into the metabase by adding mGramSeva DB to the metabase.
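A minimal sketch of the refresh step (the connection details and the column names are assumptions for illustration; only the table name roll_out_dashboard comes from this page):

```python
import psycopg2  # assumes the script talks to the mGramSeva Postgres DB

# Refresh the roll_out_dashboard table on each run: clear the old rows,
# then insert the freshly aggregated data points.
conn = psycopg2.connect(host="<db-host>", dbname="mgramseva",
                        user="<user>", password="<password>")
with conn, conn.cursor() as cur:
    cur.execute("DELETE FROM roll_out_dashboard")   # clean old data
    cur.execute(
        "INSERT INTO roll_out_dashboard (tenant_id, data_point, value) "
        "VALUES (%s, %s, %s)",
        ("pb.amritsar", "active_consumers", 1234),  # illustrative row
    )
conn.close()
```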
Re-indexing is used to get all the data into the respective indexes. There are two steps: first, run the connector from the playground; then make a legacy-index call to the indexer service, which internally calls the respective plain-search service to fetch the data and send it to the respective index.
Access to kubectl of the targeted environment
Postman scripts
Plain search apis in the respective services
We have mainly 3 indexes in mGramSeva for Re-indexing.
Water-services
Echallan-services
dss-collection_v2
ws-services re-indexing
Kafka connector curl to be run from the playground pod
Plain search call
eChallan re-indexing
Kafka connector call to be run from the playground pod
Legacy index call from Postman
DSS collection v2 re-indexing
Kafka connector call to be run from the playground pod
Payment re-indexing run from a Postman call
Property creation through WNS module
The indexer provides the facility for indexing the data to elastic search.
Write the configuration for the water service.
Put the indexer config file in the config repo under the egov-indexer folder.
The combination of the above criteria can be used to define the billing slab. Billing slabs are defined in MDMS under the ws-services-calculation folder. The following is a sample slab.
Configure the username, tenantId, and password of the user with MDMS service access in the environment-specific yaml file in DevOps.
| Environment Variable | Description |
| --- | --- |
| egov.user.event.notification.enabled | Whether event notifications are enabled |
| egov.challan.default.limit | Default limit value |
| egov.challan.max.limit | Max limit value |
| create.ws.workflow.name | Business service name used while creating the workflow |
| notification.sms.enabled | Whether SMS notifications are enabled |
| egov.localization.statelevel | Whether the localizations are state-level |
| egov.pending.collection.link | Collection list screen link for notifications |
| egov.monthly.summary.link | Monthly summary screen link for notifications |
| egov.new.Expenditure.link | New expenditure screen link |
| egov.mark.paid.Expenditure.link | Paid expenditure screen link |
| egov.bilk.demand.failed.link | Manual bulk demand generation screen link |
| egov.today.collection.link | Today's collection screen link |
| Title | Link |
| --- | --- |
| API Swagger Documentation | |

API endpoints (links are available in the Postman collection):
- echallan-services/eChallan/v1/_create
- echallan-services/eChallan/v1/_update
- echallan-services/eChallan/v1/_search
- echallan-services/eChallan/v1/_expenseDashboard
- echallan-services/eChallan/v1/_chalanCollectionData
| Environment Variable | Description |
| --- | --- |
| | Kafka topic used to create a new water connection application in the system |
| | Kafka topic used to update an existing water connection application in the system |
| | Kafka topic used to update the process instance of the water connection application |
| | Idgen format name for the water application |
| | Idgen format for the water application, e.g. WS/[CITY.CODE]/[fy:yyyy-yy]/[SEQ_EGOV_COMMON] |
| | Idgen format name for the water connection |
| | Idgen format for the water connection, e.g. WS_AP/[CITY.CODE]/[fy:yyyy-yy]/[SEQ_EGOV_COMMON] |
| Title | Link |
| --- | --- |
| Id-Gen service | |
| url-shortening | |
| MDMS | |
| TaxHead | Amount | Order | Full Payment (2000) | Partial Payment 1 (1500) | Partial Payment 2 (750) | Partial Payment 2 with rebate (500) |
| --- | --- | --- | --- | --- | --- | --- |
| WS_CHARGE | 1000 | 6 | 1000 | 1000 | 750 | 750 |
| AdjustedAmt | | | 1000 | -250 | -750 | -750 |
| RemainingAMTfromPayableAMT | | | 0 | 0 | 0 | 0 |
| Penalty | 500 | 5 | 500 | 500 | | |
| AdjustedAmt | | | 500 | -500 | | |
| RemainingAMTfromPayableAMT | | | 1000 | 250 | | |
| Interest | 500 | 4 | 500 | 500 | | |
| AdjustedAmt | | | 500 | -500 | | |
| RemainingAMTfromPayableAMT | | | 1500 | 750 | | |
| Cess | 500 | 3 | 500 | 500 | | |
| AdjustedAmt | | | 500 | -500 | | |
| RemainingAMTfromPayableAMT | | | 2000 | 1250 | | |
| Exm | -250 | 1 | -250 | -250 | | |
| AdjustedAmt | | | -250 | 250 | | |
| RemainingAMTfromPayableAMT | | | 2250 | 1750 | | |
| Rebate | -250 | 2 | -250 | -250 | | |
| AdjustedAmt | | | -250 | 250 | | |
| RemainingAMTfromPayableAMT | | | 2500 | 750 | | |
| TaxHead | Amount | TaxPeriodFrom | TaxPeriodTo | Order | Purpose |
| --- | --- | --- | --- | --- | --- |
| WS_CHARGE | 1000 | 2014 | 2015 | 6 | Current |
| AdjustedAmt | 0 | | | | |
| Penalty | 500 | 2014 | 2015 | 5 | Current |
| AdjustedAmt | 0 | | | | |
| Interest | 500 | 2014 | 2015 | 4 | Current |
| AdjustedAmt | 0 | | | | |
| Cess | 500 | 2014 | 2015 | 3 | Current |
| AdjustedAmt | 0 | | | | |
| Exm | -250 | 2014 | 2015 | 1 | Current |
| AdjustedAmt | 0 | | | | |
| TaxHead | Amount | TaxPeriodFrom | TaxPeriodTo | Order | Purpose |
| --- | --- | --- | --- | --- | --- |
| WS_CHARGE | 1000 | 2014 | 2015 | 6 | Arrear |
| AdjustedAmt | 0 | | | | |
| WS_CHARGE | 1500 | 2015 | 2016 | 6 | Current |
| AdjustedAmt | 0 | | | | |
| Penalty | 600 | 2014 | 2015 | 5 | Arrear |
| AdjustedAmt | 0 | | | | |
| Penalty | 500 | 2015 | 2016 | 5 | Current |
| AdjustedAmt | 0 | | | | |
| Interest | 500 | 2014 | 2015 | 4 | Arrear |
| AdjustedAmt | 0 | | | | |
| Cess | 500 | 2014 | 2015 | 3 | Arrear |
| AdjustedAmt | 0 | | | | |
| Exm | -250 | 2014 | 2015 | 1 | Arrear |
| AdjustedAmt | 0 | | | | |
| Property | Value | Description |
| --- | --- | --- |
| | 3000 | Expiry time of the OTP |
| Environment Variable | Description |
| --- | --- |
| | Whether SMS notifications are enabled |
| | Whether email notifications are enabled |
| | Path for the download-bill receipt |
| | Common link to the home page |
| Title | Link |
| --- | --- |
| API Swagger Documentation | |
| Water Calculator Service | |

API endpoints (links are available in the Postman collection):
- /ws-services/wc/_create
- /ws-services/wc/_update
- /ws-services/wc/_search
- /ws-services/wc/_submitfeedback
- /ws-services/wc/_getfeedback
- /ws-services/wc/_revenueDashboard
- /ws-services/wc/_revenueCollectionData
| Property | Value | Remarks |
| --- | --- | --- |
| bs.businesscode.demand.updateurl | | Each module's application calculator should provide its own update URL; if not present, a new bill is generated without making any changes to the demand |
| bs.bill.billnumber.format | BILLNO-{module}-[SEQ_egbs_billnumber{tenantid}] | IdGen format for the bill number |
| | | Enable/disable the bill amendment workflow |
| | | Topic name to push created demands, consumed by the mGramSeva adapter |
| | | Topic name to push updated demands, consumed by the mGramSeva adapter |
| | | Topic name to push created bills, consumed by mGramSeva |
| | | Topic name to push updated bills, consumed by mGramSeva |
| Title | Link |
| --- | --- |
| /demand/_create, _update, _search | |
| /bill/_fetchbill, _search | |
| /amendment/_create, _update | |
| Title | Link |
| --- | --- |
| API Swagger Contract | |
| Water Service Document | |
mGramSeva-DSS Documentation
DSS has two sides to it: one is the process by which data is pooled into Elasticsearch, and the other is the way it is fetched, aggregated, computed, transformed, and sent across.
Because this revolves around a variety of data sets, it needs to be configurable: if a new scenario is introduced tomorrow, it is just a configuration away from being included in this process flow.
This document explains the steps on how to define the configurations for the Analytics Side Of DSS for mGramSeva.
Analytics: the microservice responsible for building, fetching, aggregating, and computing the data on Elasticsearch into a consumable data response, which is later used for visualizations and graphical representations.
Analytics configurations: analytics contains multiple configurations, and the mGramSeva-related changes need to be added in dashboard-analytics. Here is the location: https://github.com/misdwss/config-mgramseva/tree/QA/egov-dss-dashboards/dashboard-analytics. Below is the list of configurations that need to change for mGramSeva to run successfully.
Chart API Configuration
Master Dashboard Configuration
Each visualization has its own properties and comes from a different data source (sometimes a combination of different data sources).
In order to configure each visualization and its properties, there is a Chart API Configuration document.
In this document, the visualization code, which happens to be the key, has its properties configured as part of the configuration, and they are easily changeable.
Here is the sample ChartApiConfiguration.json data for the mGramSeva.
Click here to check the complete configuration.
The Master Dashboard Configuration is the main configuration that defines the dashboards to be painted on the screen.
It includes all the visualizations, their groups, the charts that come within them, and even their dimensions, i.e. their height and width.
Click here to check the complete configuration
The Master Dashboard Configuration explained earlier holds the list of available dashboards.
Where role-action mapping is not maintained in the application service, this configuration acts as the role-dashboard mapping configuration:
each role is mapped against the dashboards it is authorized to see.
This was used earlier, before the role-action mapping of eGov was integrated.
Later, when role-action mapping started controlling the dashboards seen on the client side, this configuration was used only to enable the dashboards for viewing.
Click Here to check the complete configuration.
Transform collection schema for V2
This transform collection v1 configuration file is used to map the incoming data. The mapped data goes inside the data object in the DSS collection v2 index.
Here, $i is the variable that gets incremented for the number of paymentDetails records,
and $j is the variable that gets incremented for the number of billDetails records.
This configuration defines and directs the Enrichment Process which the data goes through.
For example, if the incoming data belongs to the collection module, the collection domain config is picked, and based on the business type specified in the data, the right config is chosen.
To enrich the collection data, the domain index specified in the configuration is queried with the right arguments, and the response data is obtained, transformed, and set.
Domain Configuration
Topic Context Configuration
transform_expense.electricity_bill_v1 Configuration
transform_expense.om_v1 Configuration
transform_expense.salary_v1 Configuration
transform_ws_v1 Configuration
Below is the list of configurations that were changed or newly added for mGramSeva.
Click here to see the complete configuration.
Topic Context Configuration is an outline defining which data is received on which Kafka topic.
The indexer service and many other services send out data on different Kafka topics. If the ingest service is to receive that data and pass it through the pipeline, the context and the version of the incoming data have to be set. This configuration identifies which Kafka topic the data was consumed from and what the mapping for it is.
Click here to see the complete configuration.
Based on the expense and water-service business services, transform configurations were added as per below.
Note: for Kafka Connect to work, direct push must be disabled in the ingest pipeline application properties or in the environment configs:
es.push.direct=false
If DSS collection index data is indexed directly to ES through the ingest pipeline (without the Kafka connector), then direct push must be enabled in the application properties or the environment configs:
es.push.direct=true
Configure the Kafka topics in the environment configs or the ingest pipeline application properties as shown below.
Kafka connection and re-indexing are covered in this documentation; please check mGramSeva Services Re-Indexing.
Main-Monthly Dashboard
For the main monthly dashboard, we use service APIs to fetch the data and show it in the main monthly dashboard table.
Ws-services:
/ws-services/wc/_revenueCollectionData should be added to get the main monthly dashboard details. It shows the table data based on the number of months in the selected financial year.
eChallan-Service:
/echallan-services/eChallan/v1/_chalanCollectionData is added to get the main monthly dashboard data for expenses.
Dashboard-Matrix:
To show the data in matrix format in a specific month's dashboard, we use service APIs that fetch the data based on the dashboard type.
Ws-services:
/ws-services/wc/_revenueDashboard should be added to get the revenue dashboard matrix data. It shows the revenue collection information.
eChallan-Service:
/echallan-services/eChallan/v1/_expenseDashboard is added in echallan-service to show expense data in matrix format.
MDMS- changes for the dashboard:
"WS":"
transform_expense.electricity_bill_v1 Configuration: config-mgramseva/transform_expense.electricity_bill_v1.json at QA · misdwss/config-mgramseva
transform_expense.om_v1 Configuration : config-mgramseva/transform_expense.om_v1.json at QA · misdwss/config-mgramseva
transform_expense.salary_v1 Configuration: config-mgramseva/transform_expense.salary_v1.json at QA · misdwss/config-mgramseva
transform_ws_v1 Configuration: config-mgramseva/transform_ws_v1.json at QA · misdwss/config-mgramseva
All content on this page by eGov Foundation is licensed under a Creative Commons Attribution 4.0 International License.
/echallan-services/eChallan/v1/_expenseDashboard
/echallan-services/eChallan/v1/_chalanCollectionData
/ws-services/wc/_revenueDashboard
/ws-services/wc/_revenueCollectionData
/dashboard-analytics/dashboard/getChartV2