Deploy the latest version of FSM.
Add the fsm-calculator-persister.yml file to the config folder in Git and add that path in the persister configuration (the file path is to be added to the environment yaml file in the param called persist-yml-path):
https://github.com/egovernments/configs/blob/DEV/egov-persister/fsm-calculator-persister.yaml
Configurations that we can manage through values.yml for fsm-calculator in the InfraOps repo are as follows. The values.yml for fsm-calculator can be found here.
Configuration sample in values.yml
Deploy the latest version of the vendor service.
Add the vendor-persister.yml file to the config folder in Git, add that path in the persister configuration (the file path is to be added to the environment yaml file in the param called persist-yml-path), and restart the egov-persister service.
Integrate the following changes in vendor-persister.yml.
Configurations that we can manage through values.yml for vendor in the InfraOps repo are listed below. The values.yml for vendor is available below.
Configuration sample in values.yml
In this document, we will learn how to legacy index/re-index the FSM index.
Kubectl access to the required environment in which you want to run the re-indexing
playground pod access
Legacy index mapping/configuration done in the respective indexer config (in this case, the legacy index configuration for FSM is already done; a similar configuration also exists for VehicleTrip)
The Postman collection to re-index the data for the FSM, VehicleTrip, Vehicle, and Vendor services can be downloaded
After importing the Postman collection downloaded from the above section, you will find two requests:
fsm-legacy: This request fetches the data from the fsm/plainsearch API and pushes it to the fsm-enriched topic via the indexer service
fsm-legacy-kafkaconnector: This request creates a connector that listens to the fsm-enriched topic and pushes the data to Elasticsearch into the new index fsm-enriched
Run the fsm-legacy-kafkaconnector request in the playground pod. This creates a connector that in turn starts listening to the topic fsm-enriched-sink.
Run the fsm-legacy request in the playground pod. This calls the indexer service to initiate the process of fetching the data from plainsearch, preparing it according to the legacy-index mapping, and pushing it to the fsm-enriched-sink topic.
The whole process takes some time; meanwhile, you can search for the data in the fsm-enriched index in Elasticsearch.
You can go through the logs of the indexer pod to check whether the job is done.
Once the job is done, delete the Kafka connector by running the below curl in the playground pod: curl --location --request DELETE 'http://kafka-connect.kafka-cluster:8083/connectors/fsm-enriched-es-sink'
Once re-indexing is completed, verify the document counts in the fsm and fsm-enriched indexes, then delete the fsm index and create an alias named fsm for the fsm-enriched index. Use the command shown below for creating the alias.
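A minimal sketch of the count verification, index deletion, and alias creation, assuming Elasticsearch is reachable from the playground pod at elasticsearch-data-v1.es-cluster:9200 (this host/port is an assumption; substitute your cluster's Elasticsearch endpoint):

```bash
# Verify that the document counts of the old and enriched indexes match (endpoint is an assumed example).
curl 'http://elasticsearch-data-v1.es-cluster:9200/fsm/_count'
curl 'http://elasticsearch-data-v1.es-cluster:9200/fsm-enriched/_count'

# Delete the old fsm index.
curl -X DELETE 'http://elasticsearch-data-v1.es-cluster:9200/fsm'

# Create the alias fsm pointing to the fsm-enriched index.
curl -X POST 'http://elasticsearch-data-v1.es-cluster:9200/_aliases' \
  -H 'Content-Type: application/json' \
  -d '{"actions": [{"add": {"index": "fsm-enriched", "alias": "fsm"}}]}'
```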
After importing the Postman collection downloaded from the above section, you will find two requests:
vehicleTrip-legacy: This request fetches the data from the vehicletrip/plainsearch API and pushes it to the vehicletrip-enriched topic via the indexer service
vehicle-trip-legacy-kafkaconnector: This request creates a connector that listens to the vehicletrip-enriched topic and pushes the data to Elasticsearch into the new index vehicletrip-enriched
Run the vehicletrip-legacy-kafkaconnector request in the playground pod. This creates a connector that in turn starts listening to the topic vehicletrip-enriched-sink.
Run the vehicletrip-legacy request in the playground pod. This calls the indexer service to initiate the process of fetching the data from plainsearch, preparing it according to the legacy-index mapping, and pushing it to the vehicletrip-enriched-sink topic.
The whole process takes some time; meanwhile, you can search for the data in the vehicletrip-enriched index in Elasticsearch.
You can go through the logs of the indexer pod to check whether the job is done.
Once the job is done, delete the Kafka connector by running the below curl in the playground pod: curl --location --request DELETE 'http://kafka-connect.kafka-cluster:8083/connectors/vehicletrip-enriched-es-sink'
Once re-indexing is completed, verify the document counts in the vehicletrip and vehicletrip-enriched indexes, then delete the vehicletrip index and create an alias named vehicletrip for the vehicletrip-enriched index. Use the command shown below for creating the alias.
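The same pattern applies for VehicleTrip; a sketch assuming the same (assumed) Elasticsearch endpoint elasticsearch-data-v1.es-cluster:9200:

```bash
# Verify that the document counts of the old and enriched indexes match (endpoint is an assumed example).
curl 'http://elasticsearch-data-v1.es-cluster:9200/vehicletrip/_count'
curl 'http://elasticsearch-data-v1.es-cluster:9200/vehicletrip-enriched/_count'

# Delete the old vehicletrip index.
curl -X DELETE 'http://elasticsearch-data-v1.es-cluster:9200/vehicletrip'

# Create the alias vehicletrip pointing to the vehicletrip-enriched index.
curl -X POST 'http://elasticsearch-data-v1.es-cluster:9200/_aliases' \
  -H 'Content-Type: application/json' \
  -d '{"actions": [{"add": {"index": "vehicletrip-enriched", "alias": "vehicletrip"}}]}'
```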
Kubectl access to the required environment in which you want to run the re-indexing
playground pod access
After importing the Postman collection downloaded from the above section, you will find two requests:
fsm-legacy-inbox: This request fetches the data from the fsm/plainsearch API and pushes it to the fsm-application-enriched topic via the indexer service
inbox-kafka-connector: This request creates a connector that listens to the fsm-application-enriched topic and pushes the data to Elasticsearch into the new index fsm-application-enriched
Run the inbox-kafka-connector request in the playground pod. This creates a connector that in turn starts listening to the topic fsm-application-enriched-sink.
Run the fsm-legacy-inbox request in the playground pod. This calls the indexer service to initiate the process of fetching the data from plainsearch, preparing it according to the legacy-index mapping, and pushing it to the fsm-application-enriched-sink topic.
The whole process takes some time; meanwhile, you can search for the data in the fsm-application-enriched index in Elasticsearch.
You can go through the logs of the indexer pod to check whether the job is done.
Once the job is done, delete the Kafka connector by running the below curl in the playground pod: curl --location --request DELETE 'http://kafka-connect.kafka-cluster:8083/connectors/fsm-application-enriched-es-sink'
Once re-indexing is completed, verify the document counts in the fsm-application and fsm-application-enriched indexes, then delete the fsm-application index and create an alias named fsm-application for the fsm-application-enriched index. Use the command shown below for creating the alias.
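Again, a sketch of the alias switch for the FSM inbox index, assuming the same (assumed) Elasticsearch endpoint elasticsearch-data-v1.es-cluster:9200:

```bash
# Verify that the document counts of the old and enriched indexes match (endpoint is an assumed example).
curl 'http://elasticsearch-data-v1.es-cluster:9200/fsm-application/_count'
curl 'http://elasticsearch-data-v1.es-cluster:9200/fsm-application-enriched/_count'

# Delete the old fsm-application index.
curl -X DELETE 'http://elasticsearch-data-v1.es-cluster:9200/fsm-application'

# Create the alias fsm-application pointing to the fsm-application-enriched index.
curl -X POST 'http://elasticsearch-data-v1.es-cluster:9200/_aliases' \
  -H 'Content-Type: application/json' \
  -d '{"actions": [{"add": {"index": "fsm-application-enriched", "alias": "fsm-application"}}]}'
```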
Legacy index mapping/configuration done in the respective indexer config (in this case, the legacy index configuration for the FSM inbox is already done)
The Postman collection to re-index the data for the FSM inbox can be downloaded
| Description | Name in values.yml | Current Value |
| --- | --- | --- |
| Context path of the APIs | SERVER_CONTEXTPATH | /fsm-calculator |
| Kafka Consumer Group | SPRING_KAFKA_CONSUMER_GROUP_ID | fsm-calculator |
| Kafka topic to which the service pushes data to save a new billing slab | PERSISTER_SAVE_BILLING_SLAB_TOPIC | save-fsm-billing-slab |
| Kafka topic to which the service pushes data to update an existing billing slab | PERSISTER_UPDATE_BILLING_SLAB_TOPIC | update-fsm-billing-slab |
| MDMS service host | EGOV_MDMS_HOST | egov-mdms-service from egov-service-host |
| Billing service host | EGOV_BILLINGSERVICE_HOST | billing-service from egov-service-host |
| FSM service host | EGOV_FSM_HOST | fsm from egov-service-host |
| Description | Name in values.yml | Current Value |
| --- | --- | --- |
| Kafka Consumer Group | SPRING_KAFKA_CONSUMER_GROUP_ID | egov-vendor-services |
| Kafka topic to which the service pushes data to save a new vendor | PERSISTER_SAVE_VENDOR_TOPIC | save-vendor-application |
| MDMS service host | EGOV_MDMS_HOST | egov-mdms-service from egov-service-host |
| Vehicle service host | EGOV_VEHICLE_HOST | vehicle from egov-service-host |
| User service host | EGOV_USER_HOST | egov-user-service from egov-service-host |
| Location service host | EGOV_LOCATION_HOST | egov-location from egov-service-host |
In this document, we will learn how to legacy index/re-index the PQM index.
Prerequisites
Kubectl access to the required environment in which you want to run the re-indexing
playground pod access
Legacy index mapping/configuration done in the respective indexer config (in this case, the legacy index configuration for PQM is done here)
The Postman collection to re-index the data for PQM can be downloaded here.
After importing the Postman collection downloaded from the above section, you will find two requests:
pqm-services-legacy: This request fetches the data from the pqm/plainsearch API and pushes it to the pqm-services-enriched topic via the indexer service
pqm-services-legacy-kafkaconnector: This request creates a connector that listens to the pqm-application topic and pushes the data to Elasticsearch into the new index pqm-application-enriched
Run the pqm-legacy-kafkaconnector request in the playground pod. This creates a connector that in turn starts listening to the topic pqm-application-legacyindex-enriched-sink.
Run the pqm-legacy request in the playground pod. This calls the indexer service to initiate the process of fetching the data from plainsearch, preparing it according to the legacy-index mapping, and pushing it to the pqm-application-legacyindex-enriched-sink topic.
The whole process takes some time; meanwhile, you can search for the data in the pqm-application-legacyindex-enriched index in Elasticsearch.
You can go through the logs of the indexer pod to check whether the job is done.
Once the job is done, delete the Kafka connector by running the below curl in the playground pod: curl --location --request DELETE 'http://kafka-connect.kafka-cluster:8083/connectors/pqm-application-enriched-es-sink'
Once re-indexing is completed, verify the document counts in the pqm-application and pqm-application-legacyindex-enriched indexes, then copy the pqm-application-legacyindex-enriched index to pqm-application and delete the pqm-application-legacyindex-enriched index. Use the command shown below for the copy.
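A minimal sketch of the copy using the Elasticsearch _reindex API, assuming the endpoint elasticsearch-data-v1.es-cluster:9200 (an assumed host; adjust for your environment):

```bash
# Copy documents from the enriched index into pqm-application (endpoint is an assumed example).
curl -X POST 'http://elasticsearch-data-v1.es-cluster:9200/_reindex' \
  -H 'Content-Type: application/json' \
  -d '{"source": {"index": "pqm-application-legacyindex-enriched"}, "dest": {"index": "pqm-application"}}'

# Verify the count on the destination index, then delete the enriched index.
curl 'http://elasticsearch-data-v1.es-cluster:9200/pqm-application/_count'
curl -X DELETE 'http://elasticsearch-data-v1.es-cluster:9200/pqm-application-legacyindex-enriched'
```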
Kubectl access to the required environment in which you want to run the re-indexing
playground pod access
Legacy index mapping/configuration done in the respective indexer config (in this case, the legacy index configuration for PQM is done here)
The Postman collection to re-index the data for PQM can be downloaded here.
After importing the Postman collection downloaded from the above section, you will find two requests:
pqm-services-legacy: This request fetches the data from the pqm/plainsearch API and pushes it to the pqm-services-enriched topic via the indexer service
pqm-services-legacy-kafkaconnector: This request creates a connector that listens to the pqm-application topic and pushes the data to Elasticsearch into the new index pqm-application-enriched
Run the pqm-services-legacy-kafkaconnector request in the playground pod. This creates a connector that in turn starts listening to the topic pqm-service-index-enriched-sink.
Run the pqm-services-legacy request in the playground pod. This calls the indexer service to initiate the process of fetching the data from plainsearch, preparing it according to the legacy-index mapping, and pushing it to the pqm-service-index-enriched-sink topic.
The whole process takes some time; meanwhile, you can search for the data in the pqm-service-legacyindex-enriched index in Elasticsearch.
You can go through the logs of the indexer pod to check whether the job is done.
Once the job is done, delete the Kafka connector by running the below curl in the playground pod: curl --location --request DELETE 'http://kafka-connect.kafka-cluster:8083/connectors/pqm-service-index-enriched-es-sink'
Once re-indexing is completed, verify the document counts in the pqm-service and pqm-service-index-enriched indexes, then copy the pqm-service-index-enriched index to pqm-service and delete the pqm-service-index-enriched index. Use the command shown below for the copy.
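The same copy pattern for the pqm-service index, again a sketch assuming the (assumed) Elasticsearch endpoint elasticsearch-data-v1.es-cluster:9200:

```bash
# Copy documents from the enriched index into pqm-service (endpoint is an assumed example).
curl -X POST 'http://elasticsearch-data-v1.es-cluster:9200/_reindex' \
  -H 'Content-Type: application/json' \
  -d '{"source": {"index": "pqm-service-index-enriched"}, "dest": {"index": "pqm-service"}}'

# Verify the count on the destination index, then delete the enriched index.
curl 'http://elasticsearch-data-v1.es-cluster:9200/pqm-service/_count'
curl -X DELETE 'http://elasticsearch-data-v1.es-cluster:9200/pqm-service-index-enriched'
```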
Kubectl access to the required environment in which you want to run the re-indexing
playground pod access
Legacy index mapping/configuration done in the respective indexer config (in this case, the legacy index configuration for pqm-Anomaly is done here)
Postman Collection
The Postman collection to re-index the data for pqm-Anomaly can be downloaded here.
After importing the Postman collection downloaded from the above section, you will find two requests:
pqm-Anomaly-services-legacy: This request fetches the data from the pqm-Anomaly/plainsearch API and pushes it to the pqm-Anomaly-services-enriched topic via the indexer service
pqm-anomaly-services-legacy-kafkaconnector: This request creates a connector that listens to the pqm-anomaly-finder topic and pushes the data to Elasticsearch into the new index pqm-anomaly-finder-enriched
Run the pqm-anomaly-services-legacy-kafkaconnector request in the playground pod. This creates a connector that in turn starts listening to the topic pqm-anomaly-finder-enriched-sink.
Run the pqm-Anomaly-services-legacy request in the playground pod. This calls the indexer service to initiate the process of fetching the data from plainsearch, preparing it according to the legacy-index mapping, and pushing it to the pqm-Anomaly-application-enriched-sink topic.
The whole process takes some time; meanwhile, you can search for the data in the pqm-anomaly-finder-enriched index in Elasticsearch.
You can go through the logs of the indexer pod to check whether the job is done.
Once the job is done, delete the Kafka connector by running the below curl in the playground pod: curl --location --request DELETE 'http://kafka-connect.kafka-cluster:8083/connectors/pqm-Anomaly-services-legacy-enriched-es-sink'
Once re-indexing is completed, verify the document counts in the pqm-anomaly-finder and pqm-anomaly-finder-enriched indexes, then copy the pqm-anomaly-finder-enriched index to pqm-anomaly-finder and delete the pqm-anomaly-finder-enriched index. Use the command shown below for the copy.
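And the same copy pattern for pqm-anomaly-finder, again a sketch assuming the (assumed) Elasticsearch endpoint elasticsearch-data-v1.es-cluster:9200:

```bash
# Copy documents from the enriched index into pqm-anomaly-finder (endpoint is an assumed example).
curl -X POST 'http://elasticsearch-data-v1.es-cluster:9200/_reindex' \
  -H 'Content-Type: application/json' \
  -d '{"source": {"index": "pqm-anomaly-finder-enriched"}, "dest": {"index": "pqm-anomaly-finder"}}'

# Verify the count on the destination index, then delete the enriched index.
curl 'http://elasticsearch-data-v1.es-cluster:9200/pqm-anomaly-finder/_count'
curl -X DELETE 'http://elasticsearch-data-v1.es-cluster:9200/pqm-anomaly-finder-enriched'
```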
Deploy the latest version of FSM.
Add the fsm-persister.yml file to the config folder in Git, add that path in the persister configuration (the file path is to be added to the environment yaml file in the param called persist-yml-path), and restart the egov-persister service.
If indexes are to be created, add the indexer config path in the indexer service (the file path is to be added to the environment yaml file in the param called egov-indexer-yaml-repo-path), and restart the egov-indexer service.
Path: deploy-as-code/helm/charts/sanitation/fsm/values.yaml - link
Configurations that we can manage through values.yml for fsm in the InfraOps repo are as follows:
Sample values.yml
Deploy the latest version of the vehicle service.
Add the vehicle-persister.yml file to the config folder in Git, add that path in the persister configuration (the file path is to be added to the environment yaml file in the param called persist-yml-path), and restart the egov-persister service.
Integrate the following changes in vehicle-persister.yml.
Configurations that we can manage through values.yml for the vehicle in the InfraOps repo are listed below. The values.yml for the vehicle can be found below.
Configuration sample in values.yml
| Description | Name in values.yml | Current Value |
| --- | --- | --- |
| id-gen host, to generate the application number | EGOV_IDGEN_HOST | egov-idgen from egov-service-host |
| Kafka Consumer Group | SPRING_KAFKA_CONSUMER_GROUP_ID | egov-fsm-service |
| Kafka topic to which the service pushes data to save a new FSM application | PERSISTER_SAVE_FSM_TOPIC | save-fsm-application |
| Kafka topic to which the service pushes data to save the workflow status | PERSISTER_UPDATE_FSM_WORKFLOW_TOPIC | update-fsm-workflow-application |
| Kafka topic to which the service pushes data to update an existing FSM application | PERSISTER_UPDATE_FSM_TOPIC | update-fsm-application |
| MDMS service host | EGOV_MDMS_HOST | egov-mdms-service from egov-service-host |
| Billing service host | EGOV_BILLINGSERVICE_HOST | billing-service from egov-service-host |
| fsm-calculator service host | EGOV_FSM_CALCULATOR_HOST | fsm-calculator from egov-service-host |
| Workflow v2 service host | WORKFLOW_CONTEXT_PATH | egov-workflow-v2 from egov-service-host |
| UI host, to return the URL of the new application in the SMS notification | EGOV_UI_APP_HOST | egov-services-fqdn-name from egov-service-host |
| Vendor service host, to get DSO details | EGOV_VENDOR_HOST | vendor from egov-service-host |
| Vehicle service host, to get vehicle details and manage vehicle trips | EGOV_VEHICLE_HOST | vehicle from egov-service-host |
| Collection service host, to get the payment details | EGOV_COLLECTION_SERVICE_HOST | collection-services from egov-service-host |
| Localization service host, to get the locale data | EGOV_LOCALIZATION_HOST | egov-localization from egov-service-host |
| User service host, to get the user data | EGOV_USER_HOST | egov-user from egov-service-host |
| PDF service host, to generate PDFs | EGOV_PDF_HOST | pdf-service from egov-service-host |
| URL shortening service host, to get short URLs for the long ones | EGOV_URL_SHORTNER_HOST | egov-url-shortening from egov-service-host |
| Description | Name in values.yml | Current Value |
| --- | --- | --- |
| id-gen host, to generate the application number | EGOV_IDGEN_HOST | egov-idgen from egov-service-host |
| MDMS service host | EGOV_MDMS_HOST | egov-mdms-service from egov-service-host |
| Workflow v2 service host | WORKFLOW_CONTEXT_PATH | egov-workflow-v2 from egov-service-host |
| User service host, to get the user data | EGOV_USER_HOST | egov-user from egov-service-host |
| Kafka Consumer Group | SPRING_KAFKA_CONSUMER_GROUP_ID | egov-vehicle-services |
| Kafka topic to which the service pushes data to save a new vehicle application | PERSISTER_SAVE_VEHICLE_TOPIC | save-vehicle-application |
| Kafka topic to which the service pushes vehicle trip data to save | PERSISTER_SAVE_VEHICLE_TRIP_TOPIC | save-vehicle-trip |
| Kafka topic to which the service pushes vehicle trip data to update | PERSISTER_UPDATE_VEHICLE_TRIP_TOPIC | update-vehicle-trip |
| Kafka topic to which the service pushes vehicle trip data to update the status | PERSISTER_UPDATE_VEHICLE_TRIP_WORKFLOW_TOPIC | update-workflow-vehicle-trip |
| VehicleTrip application number format | egov.idgen.vehicle.trip.applicationNum.format | "[CITY.CODE]-VT-[cy:yyyy-MM-dd]-[SEQ_EGOV_VEHICLETRIP]" |