Core service configuration and promotion docs
The Fiscal Event Service maintains the fiscal event flow activities of multiple projects. The root-level entity is a tenant, which is typically the State Government. A tenant has several projects that are tagged with attributes such as department, scheme, mission, and hierarchy. All the entity information and the COA (Chart of Accounts) are passed along with the amount details, which together constitute an array of fiscal event instances in the system.
Current version: 2.0.0
Before you proceed with the configuration, make sure the following pre-requisites are met.
Java 8
MongoDB instance
Required Service Dependencies:
iFIX-Master-Data-Service
iFIX-Fiscal-Event-Post-Processor
Apache Kafka Server
This service provides two features: push fiscal event data and search fiscal event data.
These are secure endpoints; user info details are required to access them.
A push request carries the fiscal details along with the mandatory tenant and project info under the attributes section; this triggers the whole fiscal event flow and forwards the event to the post-processor service.
Before accepting an event, the service validates all the referenced entities via the iFIX core Master Data Service.
Whenever a fiscal event instance is created in the system, both the ingestion time and the event occurrence timestamp are logged, which makes the event flow traceable on the timeline.
After processing all these activities, the service passes the fiscal event information on to the fiscal event post-processor service by publishing the fiscal data to the Kafka topics "fiscal-event-request-validated" and "fiscal-event-mongodb-sink".
It also responds with a snapshot of the enriched fiscal event data, which the source system can use for reference or further processing.
It provides a search feature over the existing fiscal events that have already been processed by the post-processor service and persisted into the MongoDB instance.
Searches can mainly be made on eventType, tenantId, referenceId, and the event time interval; the response returns enriched data (dereferenced, with nested ID details).
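For illustration, a search call could look like the sketch below. The host, path prefix, request envelope, and exact criteria field names are assumptions; refer to the Swagger definition linked later on this page for the authoritative contract.

```sh
# Hypothetical search request; host, path prefix, and body shape are assumptions.
curl -X POST 'https://<host-name>/fiscal-event-service/events/v1/_search' \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer <access-token>' \
  -d '{
    "criteria": {
      "tenantId": "<tenant-id>",
      "eventType": "<event-type>",
      "referenceId": "<source-reference-id>",
      "fromEventTime": <epoch-millis>,
      "toEventTime": <epoch-millis>
    }
  }'
```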
Note: The Kafka topics need to be configured for each environment.
Update all the DB and URI configurations in the dev.yaml, qa.yaml, and prod.yaml files.
Make sure the Keycloak server is up and running and has been configured with the required client ID.
The Master Data Service maintains information about the Government and the Chart of Accounts. These details can be created and searched based on the given parameters/request data.
Current version: 2.0.0
Before we proceed with the configuration, make sure the following pre-requisites are met:
Java 8
MongoDB instance
It exposes secure endpoints for the master data service; an access token is required to create any master data.
The subsequent sections on this page discuss the service details maintained by the iFIX core master data service.
This service provides the capabilities to maintain the Government details and allows users to create and search the data. To create a Government, a unique ID and a name for the Government are required; optionally, additional details can be passed as part of the attributes. For search, passing the unique ID(s) as search parameters returns all the details of the required Government(s).
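A create call could look like the sketch below; the host, path prefix, and request envelope are assumptions - consult the Swagger definition for the exact contract.

```sh
# Hypothetical Government create request; host, path prefix, and body shape are assumptions.
curl -X POST 'https://<host-name>/master-data-service/government/v1/_create' \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer <access-token>' \
  -d '{
    "government": {
      "id": "pb",
      "name": "Punjab"
    }
  }'
```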
This service provides the capabilities to maintain the Chart of Accounts (COA) details and supports the create and search of COAs. The following information is passed while creating a Chart of Accounts - Government ID, majorHead, subMajorHead, minorHead, subHead, groupHead, objectHead, and the corresponding head names & types. A unique code named COACode is generated by concatenating majorHead, subMajorHead, minorHead, subHead, groupHead, and objectHead with a hyphen ("-") and is stored with the given request. COA details are searched based on the given search parameters such as the Chart of Account IDs, COACodes, Government ID, majorHead, subMajorHead, minorHead, subHead, groupHead, and objectHead.
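For example, with hypothetical head codes majorHead=0215, subMajorHead=01, minorHead=102, subHead=00, groupHead=00 and objectHead=01, the generated COACode is "0215-01-102-00-00-01".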
No environment-specific variables are required (migration).
Update the DB and URI configurations in the dev.yaml, qa.yaml, and prod.yaml files.
Make sure the Keycloak server is up and running and has been configured with the required client ID.
Before the iFIX platform can be used, the master data must be configured directly into the respective collections in MongoDB. Associated APIs for configuring Master Data will be made available in the next version.
Add Government - Insert into MongoDB collection
Add Department
Add Department Hierarchy - It defines the hierarchy for the department
department id: the master department ID (UUID)
level: the depth of the department level within the hierarchy
The root-level department hierarchy does not contain any parent value, and its level value is zero
Level value increment rule:
When the parent ID has a value, the parent is looked up in the department hierarchy records to evaluate the hierarchy level
The level value is taken from the parent department hierarchy, and the current department hierarchy level is set to that value incremented by one
parent: details of the department hierarchy parent (UUID)
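For example (hypothetical labels): a root hierarchy "Department" has no parent and level 0; a "Zone" hierarchy created with the root as its parent gets level 0 + 1 = 1; a "Circle" hierarchy created with "Zone" as its parent gets level 1 + 1 = 2.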
Add Department Entity - It contains the department entity information along with its hierarchy level, and also attaches the master department information (department ID - UUID). Every department node (department record) keeps its list of children; a leaf-level department has no children info. The children list contains department entity IDs, which makes the current department entity the parent of everything in that list; this is how the department entity level is maintained. Entities can be created by:
First defining the hierarchy levels top to bottom, because each level holds its parent's reference
Then adding the department entities bottom to top, because each entity holds its children's references.
Note:
The root department hierarchy will have the label "Department", and the root department entity will be the department itself
When the existing children list of a Department Entity has to be updated, use MongoDB commands like the ones below:
Find the department entity parent where the new children need to be added. Search by name and hierarchy level:
db.departmentEntity.find({"name": "<current_department_entity's_name>", "hierarchyLevel": <current_department_entity_hierarchy_level>});
Append the new department entity ID at the end of the current department entity's children list. First find the length of the current array (call it n; with 0-based indexing, the next free index is n) and then set it:
db.departmentEntity.update({"_id": "<parent_department_entity_id>"}, {$set: {"children.n": "<new_department_entity_id>"}});
Add Chart of Accounts - Insert manually into MongoDB collection
Add Expenditure - Insert manually into MongoDB collection.
Follow the steps below to create the iFIX master data. Have a look at the individual service documentation for details here.
Create the Government by providing valid government details. Once created, you get an ID along with the provided details in the response. We refer to this ID as the tenant ID.
Create the Chart of Accounts (COA) by providing valid details. While creating it, you need to pass the valid tenant ID in the COA request. Once created, you get an ID along with all the provided details in the response. We refer to this ID as the COA ID and the code as the COACode.
Key Domain Service Details
iFIX uses Keycloak to manage clients. Click on this link for instructions on setting up a Keycloak server and managing clients.
This page provides details on the steps involved in cleaning up the iFIX core data from various environments. Follow the instructions for the relevant environment listed below.
Druid Data Clean-Up
Mongo DB Data Clean-Up
Postgres Data Clean-Up
Open the druid console in the respective environment.
Go to Ingestion → Supervisors and select the particular supervisor (fiscal-event).
In Actions, click on Terminate. This terminates the supervisor. Wait for a minute.
Go to Datasources, select the particular data source name (fiscal-event), and scroll down to Actions.
Click on Mark as unused all segments, then confirm Mark as unused all segments in the dialog.
Click on Delete unused segments (issue kill task) and enable the permission to delete.
Connect to the playground pod and run the command below to connect to MongoDB:
mongo --host <mongo-db-host>:<mongo-db-port> -u <mongo-db-username> -p <mongo-db-password>
Switch to the iFIX core DB:
use <db name>
Run the command below to delete all the data from the fiscal_event collection:
db.fiscal_event.remove({});
If you want to delete all the fiscal event records of a particular Gram Panchayat, run a targeted delete instead (see the sketch below).
For example, let us assume we have to delete the "LODHIPUR" GP (Gram Panchayat) details (that is, hierarchy level 6) in the DWSS department.
If you want to delete fiscal event records based on some other attributes, write a custom mongo delete query along the same lines.
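A minimal sketch of such a targeted delete, assuming the persisted fiscal event documents carry the department entity name and hierarchy level under the field paths shown below. The actual paths depend on the stored document shape, so inspect a sample document first:

```js
// Field paths are hypothetical - verify them with db.fiscal_event.findOne() before running
db.fiscal_event.remove({
  "attributes.departmentEntity.name": "LODHIPUR",
  "attributes.departmentEntity.hierarchyLevel": 6
});
```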
Connect to the playground pod and run the below command to connect with postgresDB.
psql -h <psql-host> -p <psql-port> -d <psql-database> -U <psql-username>
It prompts for a password. Enter the password.
Run the query below to delete all the data from the fiscal_event_aggregated table (see the sketch after this note).
If you want to delete the details of a particular Gram Panchayat from the fiscal event aggregated records, run a delete filtered on that GP.
Here, let us assume we want to delete the "LODHIPUR" GP (Gram Panchayat) details (that is, hierarchy level 6) in the DWSS department.
Note: If you are not sure about deleting specific fiscal event aggregated records, you can delete all the records from the fiscal_event_aggregated table. Once the records are deleted, either run the fiscal event aggregate cron job manually to upsert all the records, or let the system upsert the records every midnight from Druid to Postgres.
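A minimal sketch of both deletes. The full-table delete follows directly from the table name above; the GP-specific variant uses hypothetical column names, so verify the actual schema first (for example with \d fiscal_event_aggregated in psql):

```sql
-- Delete everything from the aggregated table
DELETE FROM fiscal_event_aggregated;

-- Hypothetical GP-specific variant; column names are assumptions, verify with \d fiscal_event_aggregated
DELETE FROM fiscal_event_aggregated
 WHERE department_entity_name = 'LODHIPUR'
   AND hierarchy_level = 6;
```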
Once the clean-up process is complete, follow the instructions here to run the supervisor.
| Link |
| --- |
| /events/v1/_push |
| /events/v1/_search |
| Key | Value | Description |
| --- | --- | --- |
| fiscal-kafka-push-topic | fiscal-event-request-validated | Once the fiscal event data is validated and enriched, it is published to this topic for further processing. |
| fiscal.event.kafka.mongodb.topic | fiscal-event-mongodb-sink | Once the fiscal event data is validated and enriched, the details are pushed to the MongoDB sink. |
| Title | Link |
| --- | --- |
| Swagger Yaml | |
| Postman collection | |
| Title | Link |
| --- | --- |
| /government/v1/_create | |
| /government/v1/_search | |
| Title | Link |
| --- | --- |
| /chartOfAccount/v1/_create | |
| /chartOfAccount/v1/_search | |
| Title | Link |
| --- | --- |
| Swagger Yaml | |
| Postman collection | |
The Department Entity Service manages the department and its hierarchy metadata. It deals with the department entity and the department hierarchy only: Department Hierarchy stores just the hierarchy definition (metadata about the department levels), while Department Entity stores the actual department data along with its ancestry information.
Current Version: 2.0.0
Before you proceed with the configuration, make sure the following pre-requisites are met.
Java 8
MongoDB instance
Required Service Dependencies - Adapter-Master-Data-Service
It defines the hierarchy for the department.
department id: the ID of the department from the department master
level: the depth of the department level within the hierarchy
parent: details of the department hierarchy parent (UUID)
The root-level department hierarchy should not contain any parent value, and its level value will be zero
When the parent ID has a value, the parent is looked up in the department hierarchy records to evaluate the hierarchy level.
The level value is taken from the parent department hierarchy, and the current department hierarchy level is set to that value incremented by one.
It contains the department entity information along with its hierarchy level, and also attaches the master department information (department ID - UUID). Every department node (department record) keeps its list of children; a leaf-level department does not have any children info. The children list contains department entity IDs, which makes the current department entity the parent of everything in that list. This is how the department entity level is maintained.
Define the hierarchy levels top to bottom, because each level holds its parent's reference. Create the department entities bottom to top, because each entity holds its children's references. A leaf department entity does not have any child reference; only when a department entity sits higher up (as a parent) does it define child references in its children list.
Create Department Hierarchy: Pass the current hierarchy level and its parent details, along with the master department and tenant information. The stored data acts as meta-information about the hierarchy levels for department entity data processing; it works as a reference meta index that describes the hierarchy level information.
Search Department Hierarchy: Provides a view of the department hierarchies based on the request parameters - tenant ID, hierarchy level, or department hierarchy ID.
Create Department Entity: Pass the tenant ID, master department ID, hierarchy level, and the children list, along with the department entity name and code. The tenant, hierarchy level, and master department are the root info of the department entity. If the department entity does not contain any child, it is a leaf department entity and can only point to its parent.
Search Department Entity: A search request can be made on any department entity attribute but cannot skip the tenant information; it returns the whole department entity details along with its children info, and resolves all the ancestry information of the current department entity.
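For illustration, a department hierarchy create call could look like the sketch below. The endpoint path matches the API table later on this page, but the host, path prefix, request envelope, and field names are assumptions:

```sh
# Hypothetical Department Hierarchy create request; host, path prefix, and body shape are assumptions.
curl -X POST 'https://<host-name>/departmentEntity/hierarchyLevel/v1/_create' \
  -H 'Content-Type: application/json' \
  -d '{
    "departmentHierarchyLevel": {
      "tenantId": "<tenant-id>",
      "departmentId": "<master-department-uuid>",
      "label": "Department",
      "parent": null,
      "level": 0
    }
  }'
```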
No environment-specific variables are required (migration).
Update the DB and URI configurations in the dev.yaml, qa.yaml, and prod.yaml files.
Department Hierarchy defines the definition (metadata) for the actual department entity creation.
In the above image, the second row contains the actual Department Entities. The Department Entity Create API follows a bottom-up approach.
In the above example data, the very first Department Entity to be created is “BARUWAL” with code “7278”. Below are the steps to create a Department Entity:
For reference, the bottom-up request & response of the Department Entity Create API will look like:
Request: [For the Department Entity at the bottom - “BARUWAL”]
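(The original payload is not reproduced here; the sketch below is illustrative only. The request envelope, field names, and the hierarchy-level value are assumptions based on the attribute descriptions above.)

```sh
# Illustrative only; envelope, field names, and the hierarchyLevel value are assumptions.
curl -X POST 'https://<host-name>/departmentEntity/v1/_create' \
  -H 'Content-Type: application/json' \
  -d '{
    "departmentEntity": {
      "tenantId": "<tenant-id>",
      "departmentId": "<master-department-uuid>",
      "code": "7278",
      "name": "BARUWAL",
      "hierarchyLevel": <bottom-hierarchy-level>,
      "children": []
    }
  }'
```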
Response:
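(Again illustrative; the essential detail is that the response carries a generated department entity ID, which the subsequent, higher-level request passes in its children list.)

```
{
  "departmentEntity": {
    "id": "<generated-department-entity-uuid>",
    "tenantId": "<tenant-id>",
    "code": "7278",
    "name": "BARUWAL",
    "children": []
  }
}
```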
Subsequent Request: [For Department Entity - “Kiratpur Sahib”]
Response:
And so on for the next Department Entity(ies) ...
Note: Below are the observations from the above example:
The tenant ID and department ID are created before the Department Entity is created, and hence before the Department Hierarchy create. The tenant ID and department ID remain the same across all hierarchy levels for a single Department Hierarchy and its corresponding Department Entity(ies).
In the children attribute, we can pass a set of previously created Department Entity IDs (at least one, or empty in the case of a bottom-level Department Entity).
When the existing children list of a Department Entity has to be updated, use MongoDB commands like the ones below:
Find the parent department entity where the new children need to be added. You should know beforehand which Department Entity is the parent. Search by name and hierarchy level and note the corresponding parent department entity's ID (UUID).
Append the department entity ID at the end of the parent department entity's children list obtained in step 1. Count the length of the parent department entity's children list (it uses 0-based indexing); call that length n (it can be any integer, depending on the children list size) and then set it as
"children.n": "<picked_from_step_1_query_department_entity_id>"
e.g.
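A sketch of the two commands with placeholder values; the name and hierarchy level used here are hypothetical:

```js
// Step 1: find the parent department entity by name and hierarchy level (values are hypothetical)
db.departmentEntity.find({"name": "Kiratpur Sahib", "hierarchyLevel": 5});

// Step 2: if the children array currently holds 2 entries (indexes 0 and 1), append at index 2
db.departmentEntity.update(
  {"_id": "<parent_department_entity_uuid>"},
  {$set: {"children.2": "<new_department_entity_uuid>"}}
);
```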
As part of the iFIX-2.0-alpha release, we have migrated the following master data from ifix_db to mgramseva_db:
Adapter Master Data Service
Department
Expenditure
Project
Department Entity Service
Department Entity
Department Hierarchy
Out of these, the project data SHOULD NOT be copied to the new DB, because a new multi-tenant (GP) project feature is introduced with this release. New projects can be created and linked to multiple GPs.
Other master data can be copied.
Make sure the playground pod is “dwssio/playground:mongo-v2” or newer.
Keep the MongoDB credentials handy in the following format:
Host (in this format): “<host-address>:27017”
Username - a user that has access to BOTH the source and destination DBs
Password
Source and Destination DB Names
List of collection names to be copied
The mongo-migration.sh script copied to the playground pod. (You must have the necessary kube permissions to copy a file to a pod.)
Sample command to copy the file:
kubectl cp mongo-migration.sh ifix/<playground pod>:/
A MongoDB dump script is provided that copies a list of collections from a source DB to a destination DB.
Execute the script with the following parameters:
-h = Host Address
-u = username
-p = password
-s = source db
-d = destination db
-c = collection name - you can provide multiple collections as depicted in the following example
./mongo-migration.sh -h <host-address>:27017 -u <ifix username> -p <ifix password> -s <ifix db> -d <mgramseva db> -c department -c expenditure -c departmentHierarchyLevel -c departmentEntity
Fiscal Event Post Processor is a streaming pipeline for validated fiscal event data. Apache Kafka is used to stream the validated fiscal event data, dereference it, unbundle and flatten it, and finally push the details to the Druid data store.
Current version: 2.0.0
Before you proceed with the configuration, make sure the following pre-requisites are met:
Java 8
Apache Kafka and Kafka-Connect server should be up and running
Druid DB should be up and running
The following dependent services are required:
iFIX Master Data Service
iFIX Fiscal Event Service
The fiscal event post-processor consumes the validated fiscal event data from the Kafka topic named “fiscal-event-request-validated” and processes it in the following steps:
The validated fiscal event data gets dereferenced. For dereferencing, service IDs such as the COA ID and tenant ID are passed to the corresponding master data services, and the corresponding object(s) are fetched. Once the fiscal event data is dereferenced, it is pushed to the dereferenced topic.
Unbundle consumers pick up the dereferenced fiscal event data from the dereferenced topic. The dereferenced fiscal event data gets unbundled and then flattened. Once the flattening is complete, the data is pushed to the Druid sink topic.
The flattened fiscal event data is pushed to the Druid DB from the topic named fiscal-event-druid-sink.
The Kafka-connect is used to push the data from a Kafka topic to MongoDB. Follow the steps below to start the connector.
Connect (port-forward) with the Kafka-connect server.
Create a new connector with a POST API call to localhost:8083/connectors.
The request body for that API call is written in the file fiscal-event-mongodb-sink.
Within that file, replace every ${---} placeholder with the actual value based on the environment. Get ${mongo-db-authenticated-uri} from the configured secrets of the environment. (Optional) Verify and make changes to the topic names.
The connector is now ready. You can verify it with the API call GET localhost:8083/connectors.
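A sketch of these calls, assuming the request body has been saved locally as fiscal-event-mongodb-sink.json (the exact file name, pod name, and namespace are assumptions):

```sh
# Port-forward the Kafka-connect server (pod name and namespace are assumptions)
kubectl port-forward <kafka-connect-pod> 8083:8083 -n <namespace>

# Create the connector from the prepared request body
curl -X POST 'localhost:8083/connectors' \
  -H 'Content-Type: application/json' \
  -d @fiscal-event-mongodb-sink.json

# Verify that the connector is registered
curl 'localhost:8083/connectors'
```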
The Druid console is used to start ingesting data from a Kafka topic to the Druid data store. Follow the steps below to start the Druid Supervisor.
Open the Druid console
Go to the Load Data section
Select Other
Click on Submit Supervisor
Copy and paste the JSON from the druid-ingestion-config.json file into the available text box.
Verify the Kafka topic name and the Kafka bootstrap server address before submitting the config
Submit the config and the data ingestion should start into the fiscal-event data source
Note: The Kafka topics need to be configured for each environment.
Update the DB, Kafka producer & consumer, and URI configurations in the dev.yaml, qa.yaml, and prod.yaml files.
The Department Entity Service manages the department and its hierarchy metadata. It deals with department entities and the department hierarchy only. The Department Entity and Department Hierarchy were earlier part of the iFIX core; they have now been moved to the mGramSeva iFIX adapter side. This page provides details on how to migrate that data from the iFIX DB to the mGramSeva iFIX adapter DB.
Create (if it's not available) DB schema (Mongo) in mgramseva namespace.
We can drop the unused collections from the iFIX DB using the steps below:
Connect to ifix namespace playground pod
kubectl exec -it <playground-pod-name> -n ifix -- /bin/bash
Connect to the particular mongo db
mongo --host <hostname>:27017 -u <username> -p <password>
Use db
use <db_name>
Check for the above-mentioned collection names using the command below
db.getCollectionNames();
If the above-mentioned collections exist, drop them using the commands below
db.departmentEntity.drop();
db.departmentHierarchyLevel.drop();
Port-forward the Department Entity service to localhost from the specific environment (QA/UAT/Prod). Below is the command to port-forward:
kubectl port-forward <pod-name> 8032:8080 -n mgramseva
The Keycloak console is available at https://<host-name>/auth. The Ops team will provide the username and password secrets.
Open the Keycloak console
Near the top-left corner in the realm drop-down menu, select Add Realm
Select the ifix-realm.json file
After the realm gets created, select the ifix realm from the drop-down near the top-left corner
Remember to select the ifix realm in the Keycloak console before proceeding
From the Clients section of Keycloak Admin Console, create a client
Provide a unique username for the client
Go to the client's settings
Change Access Type to confidential
Turn on Service Account Enabled
In the Valid Redirect URIs field, provide the root URL of the iFIX instance (not important for our purposes, but it must be set because the field is mandatory)
Save these changes
In the Service Account Roles tab, assign the role "fiscal-event-producer"
In the Mappers tab, create a new mapper to associate the client with a tenantId
Select Mapper Type to be "Hardcoded claim"
In Token Claim Name, write "tenantId"
In Claim value, write the tenant ID under which the client is being created (for example, "pb")
Set Name the same as Token Claim Name, i.e. "tenantId"
Select Claim Json Type to be "String"
Now you can get the credentials from the Credentials tab and configure them in the client's system.
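A client-credentials token can then be fetched from Keycloak's standard token endpoint. A minimal sketch, assuming the ifix realm created above and the host configured earlier:

```sh
# Fetch an access token using the client credentials grant
curl -X POST 'https://<host-name>/auth/realms/ifix/protocol/openid-connect/token' \
  -H 'Content-Type: application/x-www-form-urlencoded' \
  -d 'grant_type=client_credentials' \
  -d 'client_id=<client-id>' \
  -d 'client_secret=<client-secret>'
# The JSON response contains "access_token", which is passed as a bearer token to the iFIX services.
```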
The mGramSeva iFIX Adapter receives multiple event requests from the mGramSeva service and helps convert their attributes to map onto the iFIX fiscal event service request data.
Environment variables
Configure "fiscal-event-service", "adapter-master-data-service" and "ifix-department-entity-service" in egov-service-host in the respective environment yaml file (iFix-DevOps/mgramseva-qa.yaml at mgramseva · misdwss/iFix-DevOps).
Redis data clean up
Execute the commands below on redis-cli before doing any activity on the mGramSeva iFIX adapter when performing the deployment for the first time.
Connecting Redis Client:
kubectl exec -it <redis-pod> -n mgramseva -- /bin/bash
redis-cli
e.g.
kubectl exec -it redis-647449f6b9-gg4gw -n mgramseva -- /bin/bash
redis-cli
keys * (check all keys)
Redis cleanup command
del <key>
e.g.
del "10101" "10102" "10201" "20101" "20201" "20301" "20401"
keys * (check all keys)
Note: Make sure all the client codes mentioned in the "ifix_adapter_coa_map" table are listed in the redis "DEL" command, and recheck that they are removed using the "keys *" command.
Use the Postman tool to verify the mGramSeva iFIX adapter API endpoints. These are open endpoints and therefore do not require any access token.
mGramSeva iFIX Adapter Services v2.0
This section provides the steps and details required to migrate the data from one service to another.
The fiscal event post-processor receives a collection of fiscal event lists and, after processing those events, pushes them to the Mongo DB and the Druid DB. Before deploying the fiscal event post-processor build to the respective environment, make sure you have ingested the updated Druid config file into the Druid DB as per the instructions given here.
Before upgrading to the iFIX fiscal event service v2.0, clean up the existing fiscal event data. This is a one-time activity. Follow the steps outlined in the doc here.
Get the access token from the Keycloak server and pass it as a bearer token when making requests to the iFIX fiscal event service.
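For instance, a push call with the bearer token wired in could look like this sketch (the host, path prefix, and request body file are assumptions; see the Swagger definition for the actual contract):

```sh
# Hypothetical push request showing the auth header wiring; path prefix and body file are assumptions.
curl -X POST 'https://<host-name>/fiscal-event-service/events/v1/_push' \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer <access-token>' \
  -d @fiscal-event-push-request.json
```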
iFIX Core Fiscal Event Service v2.0
iFIX Core Fiscal Event Post Processor v2.0
This page provides step-wise details on how to migrate the Department, Project and Expenditure data from the iFIX Master Data Service to the Adapter Master Data Service.
Yaml configurations
Update "mongo-db-username", "mongo-db-password" and "mongo-db-authenticated-uri" in secrets db config in respective environment secret yaml file ( ).
Update "mongo-db-name", "mongo-db-host" and "mongo-db-url" in egov-config of configmaps in respective environment yaml file ( ).
Update "adapter-master-data-service" and "ifix-department-entity-service" in egov-service-host in respective environment yaml file ( ).
Create a new DB in the MongoDB instance for these new services.
Next, create the same master data again in the new (mgramseva) database using the adapter-master-data-service's /_create APIs.
After restoring DB collections, drop the unused collections from the iFIX MongoDB.
Follow the steps below to drop the unused collections.
Connect to ifix namespace playground pod
kubectl exec -it <playground-pod> -n ifix -- /bin/bash
Connect to the particular mongo db
mongo --host <hostname>:27017 -u <username> -p <password>
Use db
use <db_name>
Check for the above-mentioned collection names using the command below
db.getCollectionNames();
If the collections exist, drop them using the commands below
e.g.
db.department.drop();
db.expenditure.drop();
db.project.drop();
Port-forward the Adapter master data service to localhost from the specific environment (QA/UAT/Prod). Below is the command to port-forward:
e.g.
kubectl port-forward <pod-name> 8030:8080 -n mgramseva
Get the access token from the Keycloak server and pass it as a bearer token when calling the ifix-master-data-service create APIs.
| Title | Link |
| --- | --- |
| /departmentEntity/hierarchyLevel/v1/_create | |
| /departmentEntity/hierarchyLevel/v1/_search | |
| Title | Link |
| --- | --- |
| /departmentEntity/v1/_create | |
| /departmentEntity/v1/_search | |
| Title | Link |
| --- | --- |
| Swagger Yaml | |
| Postman collection | |
| Key | Value | Description | Remarks |
| --- | --- | --- | --- |
| fiscal-event-kafka-push-topic | fiscal-event-request-validated | The fiscal event post-processor consumes data from this topic. | The Kafka topic should be the same as the one configured in the fiscal event service. |
| fiscal-event-kafka-dereferenced-topic | fiscal-event-request-dereferenced | Dereferenced fiscal event data is pushed to this topic. | NA |
| fiscal-event-kafka-flattened-topic | fiscal-event-line-item-flattened | NA | NA |
| fiscal-event-processor-kafka-druid-topic | fiscal-event-druid-sink | Flattened fiscal event data is pushed to this topic. | While setting up the Druid ingestion of fiscal events, make sure it uses the same topic as mentioned here. |
| Title | Link |
| --- | --- |
| Swagger Yaml | |