iFIX Core Fiscal Event Post-Processor
Overview
Fiscal Event Post-Processor is a streaming pipeline for validated fiscal event data. It consumes validated fiscal events from Apache Kafka, dereferences, unbundles, and flattens them, and finally pushes the resulting records to the Druid data store.
Version
Current version: 2.0.0
Prerequisites
Before you proceed with the configuration, make sure the following prerequisites are met:
Java 8
Apache Kafka and Kafka-Connect server should be up and running
Druid DB should be up and running
The following dependent services are required:
iFIX Master Data service
iFIX Fiscal Event service
Features
The fiscal event post-processor consumes validated fiscal event data from the Kafka topic fiscal-event-request-validated and processes it in the following steps (a simplified sketch of the flow follows the list):
The validated fiscal event data is dereferenced first. For dereferencing, reference ids such as the COA id and Tenant id are passed to the corresponding service (the iFIX Master Data service) to fetch the corresponding object(s). Once the fiscal event data is dereferenced, it is pushed to the dereferenced topic.
The unbundle consumer picks up the dereferenced fiscal event data from the dereferenced topic, unbundles it, and then flattens it. Once the flattening is complete, the flattened data is pushed to the Druid sink topic.
The flattened fiscal event data is pushed to the Druid data store from the topic fiscal-event-druid-sink.
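To make the flow concrete, here is a minimal, self-contained sketch of the steps above collapsed into a single Kafka consume-process-produce loop. This is not the service's actual implementation: the fiscal-event-request-validated and fiscal-event-druid-sink topic names come from this page, but the dereferenced topic name, the consumer group id, the bootstrap server, and the dereference/flatten helpers are placeholders for illustration only.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class FiscalEventPostProcessorSketch {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");            // environment-specific
        props.put("group.id", "fiscal-event-post-processor-sketch"); // hypothetical group id
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
             KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {

            // Step 1: consume validated fiscal events.
            consumer.subscribe(Collections.singletonList("fiscal-event-request-validated"));

            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    // Step 2: dereference ids (COA id, Tenant id, ...) against the master data
                    // service, then publish to the dereferenced topic (placeholder name).
                    String dereferenced = dereference(record.value());
                    producer.send(new ProducerRecord<>("fiscal-event-dereferenced", dereferenced));

                    // Steps 3-4: unbundle and flatten, then publish to the Druid sink topic.
                    for (String flatRow : unbundleAndFlatten(dereferenced)) {
                        producer.send(new ProducerRecord<>("fiscal-event-druid-sink", flatRow));
                    }
                }
            }
        }
    }

    // Placeholder: the real service calls the iFIX Master Data service here.
    private static String dereference(String validatedEvent) {
        return validatedEvent;
    }

    // Placeholder: the real service splits one dereferenced event into many flat rows.
    private static List<String> unbundleAndFlatten(String dereferencedEvent) {
        return Collections.singletonList(dereferencedEvent);
    }
}
```

In the actual service the dereference and unbundle stages run as separate consumers on separate topics; they are collapsed into one loop here only to show the order of the steps.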
Kafka to Data Store Sink
MongoDB Sink
Kafka Connect is used to push data from a Kafka topic to MongoDB. Follow the steps below to start the connector.
Connect (port-forward) to the Kafka Connect server.
Create a new connector with a POST API call to localhost:8083/connectors.
The request body for that API call is written in the file fiscal-event-mongodb-sink.
Within that file, replace every ${---} placeholder with the actual value for the environment. Get ${mongo-db-authenticated-uri} from the environment's configured secrets. (Optional) Verify and adjust the topic names.
The connector is now ready. You can verify it with a GET call to localhost:8083/connectors. A hedged sketch of the registration call follows.
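The sketch below shows the registration call as a small Java 8 program. The JSON body only illustrates the typical shape of a MongoDB sink connector config; it is not the contents of the fiscal-event-mongodb-sink file, and the connector name, topic, database, and collection used here are hypothetical placeholders.

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class RegisterMongoSinkConnector {

    public static void main(String[] args) throws Exception {
        // Illustrative connector config only; the actual request body lives in the
        // fiscal-event-mongodb-sink file. Name, topic, database and collection are
        // hypothetical, and ${mongo-db-authenticated-uri} must be replaced with the
        // real value taken from the environment's secrets before running.
        String body = "{"
                + "\"name\": \"fiscal-event-mongodb-sink\","
                + "\"config\": {"
                + "\"connector.class\": \"com.mongodb.kafka.connect.MongoSinkConnector\","
                + "\"topics\": \"fiscal-event-mongodb-sink\","
                + "\"connection.uri\": \"${mongo-db-authenticated-uri}\","
                + "\"database\": \"ifix\","
                + "\"collection\": \"fiscalEvent\""
                + "}}";

        // Assumes the Kafka Connect server has been port-forwarded to localhost:8083.
        URL url = new URL("http://localhost:8083/connectors");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        try (OutputStream os = conn.getOutputStream()) {
            os.write(body.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("Kafka Connect responded with HTTP " + conn.getResponseCode());
    }
}
```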
Druid Sink
The Druid console is used to start ingesting data from a Kafka topic to the Druid data store. Follow the steps below to start the Druid Supervisor.
Open the Druid console
Go to the Load Data section
Select Other
Click on Submit Supervisor
Copy and paste the JSON from the druid-ingestion-config.json file into the available text box.
Verify the Kafka topic name and the Kafka bootstrap server address before submitting the config
Submit the config, and data ingestion into the fiscal-event data source should start. An alternative, API-based way to submit the same spec is sketched below.
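As an alternative to the console, the same spec can be submitted to Druid's supervisor API. The sketch below is an assumption rather than part of the documented setup: it reads druid-ingestion-config.json from the working directory and posts it to the standard /druid/indexer/v1/supervisor endpoint; the host and port of the Druid router/Overlord are environment-specific placeholders.

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Paths;

public class SubmitDruidSupervisor {

    public static void main(String[] args) throws Exception {
        // Read the ingestion spec shipped with the service. Verify the Kafka topic name
        // and bootstrap server address inside it before submitting, as noted above.
        byte[] spec = Files.readAllBytes(Paths.get("druid-ingestion-config.json"));

        // Standard Druid supervisor endpoint; host and port depend on the deployment
        // (for example the Druid router or the Overlord service).
        URL url = new URL("http://localhost:8888/druid/indexer/v1/supervisor");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        try (OutputStream os = conn.getOutputStream()) {
            os.write(spec);
        }
        System.out.println("Druid responded with HTTP " + conn.getResponseCode());
    }
}
```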
Interaction Diagram
Environment
Note: Kafka topics need to be configured with respect to the environment.
Configurations and Setup
Update the DB, Kafka producer & consumer, and URI configurations in the dev.yaml, qa.yaml, and prod.yaml files.
References and Notes