About the Platform
The platform is built as a Digital Public Good and follows a key set of design principles listed below.
Single Source of Truth - Data resides in multiple systems across departments and getting an integrated, consistent and disaggregated view of data is imperative.
Federated - Central, State and Local Governments represent the federated structure of government. This federated structure must be taken into account while designing systems that enable intergovernmental information exchange.
Unbundled - Break down complex systems into smaller manageable and reusable parts that are evolvable and scale independently.
Minimum - Only the minimum set of data attributes with a well-defined purpose is mandatory; everything else is optional.
Privacy and Security - Ensure the privacy of citizens, employees and users while maintaining the security of the system.
Performance at Scale - The system is designed to perform at scale.
Open Standards - The system is based on open standards so that it is easy to discover, comprehend, integrate, operate and evolve.
The platform applies the microservice architecture concept. Access the list of services supported by iFIX here.
The services are designed to comply with standards which consist of the Common Information Model and APIs. Click here to view the details.
To set up the platform, follow the installation steps listed here.
With the iFIX v2.4-alpha update, some of the DIGIT core services also need to be deployed. The corresponding builds, picked from DIGIT v2.8, are listed below.
Category | Services | Docker Artefact ID | Remarks |
---|---|---|---|
Fiscal Information Exchange Platform
Enable governments to continuously improve their financial health, ensure a functioning and responsive service delivery system, and deploy available funds in ways that create an environment for businesses, communities, and individuals to prosper and stay healthy.
iFIX is an open-source fiscal information exchange platform. The platform enables connected applications to exchange standardised fiscal events e.g. Demand, Receipts, Bills, and Payments. The fiscal event consists of attributes explaining the details of why, who, what, where and when it happened.
As government service delivery requires multiple actors and interactions to come together across different levels, visibility of information is critical to bring down the cost of coordination. This includes real-time information on the financial health of a government agency/department- expenditure, revenue and availability of funds.
Public finance management faces significant challenges in terms of promoting accountability and transparency. The problem is more on account of siloed information structures that restrict the scope to get a broader view of the flow of funds or data across agencies and stakeholders.
The flow of information is both slow and limited, resulting in a number of gaps and breaks in the PFM processes. This is essentially attributed to the lack of information standards and exchange mechanisms that make it difficult for the seamless flow of data across agencies.
iFIX allows departments to share fiscal information from existing systems without having to invest in multiple integrations for different stakeholder requirements. The platform plays a pivotal role in driving efficient and performance-driven financial planning across all levels of governance. Real-time availability of financial information to stakeholders facilitates data-driven deployment of public funds and policy-making. The platform specification sets the base for the real-time exchange of fiscal information across funding and implementing agencies.
The iFIX platform identifies and resolves Public Financial Management issues like delays in funds flow, floating of unutilised funds, problems of low data fidelity, the administrative burden of implementation and outcome-oriented funding using the platform approach and associated policy reforms.
The platform helps achieve overall fiscal discipline, allocation of resources to priority needs, and efficient and effective delivery of public services. Public financial management includes all phases of the budget cycle, including the preparation of the budget, internal control and audit, procurement, monitoring and reporting arrangements, and external audit.
iFIX helps address the following challenges with respect to the information on government fiscal programmes:
Unlocking fiscal and operational data locked in silos
Generating relevant information based on common data standards
Ensuring the information generated is credible, reliable and verifiable
Accelerating fund flows by providing trusted, usable data and reducing the interdepartmental coordination time
The present-day public finance system supports multiple types of information flows that are unique and accessed by multiple sources. iFIX simplifies the network and fiscal information flows through the use of standardised formats and protocols.
No change to the source system: Enabling information exchange through iFIX does not necessitate any change to the source systems, therefore ensuring minimal disruption to current ways of working.
Scalability across governance levels and geographies: iFIX approach for standardisation can be applied to multiple entities at the same level and across different levels. This gives the platform infinite applicability within the PFM context to provide visibility on aggregated and disaggregated information.
The standardised fiscal event and exchange mechanism enable various government agencies e.g. various departments, local governments, autonomous bodies, national government, and development agencies to -
exchange fiscal data much like email systems exchange data with each other
ease flow of fiscal information resulting in better planning, better execution, better accounting and better auditing thus transforming the entire PFM cycle
promote transparency and improve accountability while ensuring real-time access to the financial health of the government stakeholders
iFIX offers a platform that standardises fiscal event data and the capabilities to exchange information across multiple sources. The platform is built as a digital public good and is open source with minimal specifications. The design incorporates the security and privacy of all data and users and ensures performance at scale.
Data standards streamline the flow of information paving the way for timely exchange and cost-effective means of managing fiscal events. The iFIX platform establishes the standards and specifications for fiscal event data that make it easier to exchange fiscal information between funding and implementing agencies. Event data is captured in real-time and at the micro level which ensures there is no intentional or unintentional data loss. Any transaction triggers a fiscal event that is accessible to integrated agencies for necessary approvals.
The standardized data exchange platform approach ensures that iFIX is not a replacement for the existing finance system. It is just a coordination and visibility layer between agencies within government and across governments.
Here are the test cases for Program Service, Digit Exchange, Mukta-ifix-adapter.
v2.4-alpha release details
iFIX v2.4-alpha is a new release that offers new platform features and functions; the details are provided below.
DIGIT Exchange - This service offers the capability to exchange data while ensuring it is signed.
Program Service - This facilitates the creation of programs, sanctions, allocations, and disbursements.
MUKTA iFIX Adapter - This transforms data from Mukta payment to program disbursement.
IFMS Adapter - Integrates with the program disburse and on-disburse APIs.
NA
If Kafka malfunctions, data will be directed to the error queue, necessitating manual processing until an error queue handler is developed.
Once a disbursement's status is set to SUCCESSFUL, it should not be changed back to INITIATED for that disbursement ID.
As per IFMS guidelines, if a transaction fails, the amount is deducted and if a revised payment is not generated within 90 days, the amount should be deducted again.
Establishing alert mechanisms for critical errors, particularly in the context of billing, is required.
Performance testing and benchmarking of services.
Visit the linked page to learn more about the iFIX platform capabilities.
| Category | Services | Docker Artefact ID | Remarks |
|---|---|---|---|
| iFIX Domain Services | Digit Exchange | | New service for data exchange |
| | Program Service | | New service implemented as per the iFIX specs |
| Adapters | ifms-adapter | | Connects iFIX to the IFMS system |
| | iFix-mukta-adapter | | Transforms works platform bills to disbursements |
| Core Services | egov-mdms-service | egov-mdms-service:v1.3.2-44558a0-3 | |
| | egov-indexer | egov-indexer:v1.1.7-f52184e6ba-25 | |
| | egov-idgen | egov-idgen:v1.2.3-72f8a8f87b-7 | |
| Configuration | Service |
|---|---|
| SSU Details | MDMS Service |
| Head Of Accounts | MDMS Service |
| ID Gen | Program Service |
| Exchange | Program Service |
| Exchange indexer | Digit Exchange |
| Exchange devops | Digit Exchange |
| ifms-pi-indexer | Mukta-ifix-Adapter |
| mukta-ifix-adapter-persister | Mukta-ifix-Adapter |
| Charts | Program Service |
| Environment | Program Service |
| Secrets | Program Service |
| Adapter Helm | Mukta-ifix-Adapter |
| Adapter Environment | Mukta-ifix-Adapter |
| Adapter Encryption | Mukta-ifix-Adapter |
iFIX specification details
iFIX is a fiscal data exchange platform that enables the exchange of standardized fiscal data between various agencies. iFIX is designed to enable the exchange of fiscal data between various agencies and ensure the visibility of fiscal data. iFIX makes it possible to chain the fiscal data with each other and establish a chain of custody for the entire lifecycle from budgeting to accounting.
From the iFIX perspective, there are two types of agencies
Fiscal Data Provider - posts the fiscal data into iFIX using well-defined formats.
Fiscal Data Consumer - can query the fiscal data.
Both these roles are interchangeable.
Providers and consumers need to register on iFIX before they can post or query fiscal data. To register, the concerned person from the agency must provide the following information on the iFIX portal - Name of the Agency, Contact Person's Name, Contact Person's Phone Number, and Contact Person's Official Email Address. An OTP is sent to the registered email address of the person.
Registered users -
log in to the iFIX portal using the official email address
register one or more systems as a provider, consumer or both
provide the name of the system
a unique ID is generated for each system (example: mgramseva@punjab.ifix.org)
a secret API key is also generated for each system - use this key to post or query fiscal data
The API key can be regenerated if required - only one API key is active at a given point in time
The portal provides the ability to generate new keys for each system.
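As an illustration of how a registered system could use its unique ID and secret API key to post fiscal data, here is a minimal Python sketch. The base URL, endpoint path, header name and payload wrapper are hypothetical placeholders, not the published iFIX API; consult the API specification for the actual contract.

```python
import requests  # third-party HTTP client

# Hypothetical values: the real iFIX base URL, endpoint path and
# auth header name come from the platform's API specification.
IFIX_BASE_URL = "https://ifix.example.org"
SYSTEM_ID = "mgramseva@punjab.ifix.org"   # unique ID generated for the system
API_KEY = "<secret-api-key>"              # secret key generated on the portal

def post_fiscal_event(event: dict) -> dict:
    """Post a fiscal event on behalf of a registered provider system."""
    response = requests.post(
        f"{IFIX_BASE_URL}/fiscal-events",          # hypothetical endpoint
        json={"system": SYSTEM_ID, "event": event},
        headers={"x-api-key": API_KEY},            # hypothetical header name
        timeout=30,
    )
    response.raise_for_status()
    return response.json()
```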
Fiscal data providers post the fiscal data in two ways.
A fiscal message - is directed to a specific consumer and is delivered to intended consumers. These messages are available for query by intended consumers only.
A fiscal event - iFIX stores the events for consumers to query
Fiscal Event consists of
Header
From
To
Date of Posting
Body
Fiscal Event Type e.g. Revenue, Expenditure, Debt
Fiscal Event Subtype
Revenue - Estimate, Plan, Demand, Receipt, Credit
Expenditure - Estimate, Plan, Bill, Payment, Debit
Debt - in progress - will be provided later.
Array of fiscal line items
Amount
CoA
Location Code - from Location Registry
Program Code - from Program Registry
Project Code - from Project Registry
Administrative Hierarchy Code from Administrative Hierarchy Registry
Start Date of Period
End Date of Period
….
….
Attachment - Attachments consist of additional attributes like key-value pairs e.g. Account Number, Correlation ID or Documents
Signature - Fiscal messages can be signed by multiple agencies; each agency adds its signature to the Signature array, which contains the below-mentioned values -
Array of Signature
System
eSign - Signed Value of the Fiscal Event/Message Body using the System Key
Purpose - Acknowledgement or Approval or Rejection
Comments
Date of Signing
Data providers can reverse a previous fiscal message or event. The data provider reverses the data by posting the same event with a negative amount in the line item(s). The data consumers should handle reversals appropriately.
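As a rough, non-normative illustration of the structure described above, the sketch below models a fiscal event as plain Python data and shows a reversal posted with negated line-item amounts. The field names follow the attribute list above but are illustrative only and do not constitute the iFIX schema.

```python
from dataclasses import dataclass, field, replace
from typing import List

@dataclass
class FiscalLineItem:
    amount: float
    coa: str                  # Chart of Accounts code
    location_code: str        # from the Location Registry
    program_code: str         # from the Program Registry
    period_start: str         # ISO date, start of period
    period_end: str           # ISO date, end of period

@dataclass
class FiscalEvent:
    # Header
    sender: str               # "From"
    receiver: str             # "To"
    date_of_posting: str
    # Body
    event_type: str           # e.g. Revenue, Expenditure, Debt
    event_subtype: str        # e.g. Demand, Receipt, Bill, Payment
    line_items: List[FiscalLineItem] = field(default_factory=list)
    attachments: dict = field(default_factory=dict)   # key-value pairs, documents
    signatures: list = field(default_factory=list)    # signatures added by agencies

def reverse(event: FiscalEvent) -> FiscalEvent:
    """Reverse a previously posted event by negating every line-item amount."""
    reversed_items = [replace(li, amount=-li.amount) for li in event.line_items]
    return replace(event, line_items=reversed_items)

# Example: a water-charge Demand of 500 posted by a provider system (illustrative values)
demand = FiscalEvent(
    sender="mgramseva@punjab.ifix.org",
    receiver="ifix",
    date_of_posting="2022-05-01",
    event_type="Revenue",
    event_subtype="Demand",
    line_items=[FiscalLineItem(500.0, "0215-01-102", "PB-LOC-001",
                               "PRG-WS-01", "2022-04-01", "2022-04-30")],
)
reversal = reverse(demand)   # consumers must handle such reversals appropriately
```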
Data consumers can query fiscal data. They can query Messages - the unread messages delivered to them. When consumers read the unread messages, these messages are marked as read. Events - Consumers can also query fiscal events posted by other data providers.
Location
Administrative
Chart of Accounts …
The Program Service is built on the iFIX specifications and serves as a comprehensive platform for simplifying program creation, sanction management, fund allocation, and disbursement execution. It equips organizations with the tools needed to effectively oversee available funds and guarantee transparent and accountable distribution to designated beneficiaries.
The IFMS adapter manages a funds summary based on heads of account and SSU codes. It creates sanctions for each head of account and SSU combination based on the ULB tenant ID.
Three types of transactions can be received from the JIT VA API -
Initial Allotment - A new sanction will be created only if AllotmentTxnType is Initial Allotment.
Additional Allotment - Updates the amount of the existing sanction.
Allotment Withdrawal - Deducts the transaction amount from the existing sanction.
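A simplified sketch of this sanction-management logic is shown below, assuming an in-memory funds summary keyed by head of account and SSU code and that an additional allotment is added to the existing sanction amount; the real adapter persists sanctions in its database.

```python
# Sanctions keyed by (head_of_account, ssu_code) for a ULB tenant.
sanctions: dict[tuple[str, str], float] = {}

def apply_allotment(txn_type: str, head_of_account: str, ssu_code: str, amount: float) -> None:
    """Update the funds summary for one JIT VA allotment transaction."""
    key = (head_of_account, ssu_code)
    if txn_type == "Initial Allotment":
        # A new sanction is created only for Initial Allotment transactions.
        sanctions[key] = amount
    elif txn_type == "Additional Allotment":
        # Adds the allotment amount to the existing sanction.
        sanctions[key] = sanctions.get(key, 0.0) + amount
    elif txn_type == "Allotment Withdrawal":
        # Deducts the transaction amount from the existing sanction.
        sanctions[key] = sanctions.get(key, 0.0) - amount
    else:
        raise ValueError(f"Unknown AllotmentTxnType: {txn_type}")
```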
When a bill is approved, this service creates a payment using the expense service.
Kafka consumers listen to the payment-create topic, generate a payment instruction (PI) using the payment and bill details, and post the PI to the IFMS system using the JIT API.
A new PI is generated only when sufficient funds are available for the relevant heads of account for that tenant ID.
Before posting the PI, it is enriched with details such as bank account, organisation and individual details.
After creating the PI, the adapter deducts the PI amount from the available balance in the funds summary.
Once a PI has been created for a payment, the user cannot generate another PI for it unless the existing PI fails.
Each PI status call is logged and saved in the database.
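The PI-generation flow above can be pictured with the simplified sketch below. The function and field names are illustrative only; the real adapter performs richer enrichment, persists the funds summary, and posts the PI to the IFMS system through the JIT API.

```python
from typing import Optional

def handle_payment_created(payment: dict, bill: dict, funds_summary: dict) -> Optional[dict]:
    """Consume a payment-create message and try to generate a payment instruction (PI)."""
    key = (payment["tenantId"], bill["headOfAccount"])
    available = funds_summary.get(key, 0.0)

    # A PI is generated only when sufficient funds are available for that head of account.
    if available < payment["amount"]:
        return None

    pi = {
        "tenantId": payment["tenantId"],
        "amount": payment["amount"],
        # Enrichment before posting: bank account, organisation and individual details, etc.
        "bankAccount": bill.get("bankAccount"),
        "status": "INITIATED",
    }
    # post_to_ifms(pi)  # in the real adapter this call goes to the JIT API

    # Deduct the PI amount from the available balance in the funds summary.
    funds_summary[key] = available - payment["amount"]
    return pi
```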
Program Service takes care of Program, Sanction, Allocation and Disbursement using the standardized exchange interface.
Base Path: /program-service/
TBD
Platform architecture
The iFIX platform architecture and the interacting systems are divided into 3 parts:
DIGIT Exchange functions as a connector bridging services deployed across diverse domains. Its primary role involves signing and verifying exchange messages and generating events for ingestion into Elasticsearch for dashboard visualisation.
Program service handles all the financial transactions like sanction, allocation and disbursement.
Users can establish programs within which these transactions take place. It receives messages from the adapter service, validates them, and forwards them to digit-exchange, which sends them to the IFMS system. In case of a validation failure, it responds with an error status and message. Additionally, the service maintains records of sanctioned, allocated, and available amounts for disbursement.
Two adapters will be deployed. One on the program side and the other on the IFMS side, both of which connect to the program service. As the program service is generic, there must be some data transformation for the connection between the program and the IFMS side, which is handled by the adapters.
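The exact signing scheme used by DIGIT Exchange is defined by the service itself and is not described here. Purely as a generic illustration of how an exchange message can be signed and verified before it is forwarded or indexed, the sketch below computes an HMAC over the canonicalised message body.

```python
import hashlib
import hmac
import json

SHARED_SECRET = b"replace-with-exchange-signing-key"   # placeholder key

def sign_message(body: dict) -> str:
    """Produce a signature over the canonicalised message body."""
    payload = json.dumps(body, sort_keys=True, separators=(",", ":")).encode()
    return hmac.new(SHARED_SECRET, payload, hashlib.sha256).hexdigest()

def verify_message(body: dict, signature: str) -> bool:
    """Verify a received exchange message before forwarding or indexing it."""
    return hmac.compare_digest(sign_message(body), signature)

# Example: sign a disbursement message before handing it to the receiving system
message = {"type": "disburse", "programCode": "PRG-WS-01", "amount": 500.0}
sig = sign_message(message)
assert verify_message(message, sig)
```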
Business requirements for iFIX
Make sure to read this in conjunction with the following documents:
This page provides the details of the fiscal event-based approach for building out iFIX as an information exchange platform. These details are used to define the technical specifications for iFIX and the functional specifications that are open for validation and inputs internally and from the ecosystem.
The specifications have been defined from the lens of a sub-national government (state in the current context) and will be limited to interactions from a financial information perspective. The interactions in the scope of the current draft are:
State Finance Department (FD) and Central (National) Government
State Line Department’s interactions with other line departments
State Line Department’s interactions with government autonomous bodies including local government and/or non-government agencies
The current draft of specifications was built using publicly available information and inputs from the Department of Water Supply and Sanitation (DWSS), Punjab, and the Finance Department, Punjab.
Financial information-related interactions of the Central Government with other Central Line Departments are out of the scope of the current draft.
The broad objective of iFIX is to enable the flow of reliable and verifiable fiscal information on time. Recognizing the multiplicity of unique types of information flows in the current PFM system, iFIX aims to simplify the information flow network. The key driver of this simplification is the application of standardised formats for fiscal information exchange. To arrive at a crystallised set of formats and protocols covering all fiscal information exchanges follow the steps below:
Events that trigger the generation of relevant fiscal information, at any stage in the budget cycle, are termed fiscal events. To be classified as a fiscal event, the event will need to meet one of the following criteria:
Transaction-Based Fiscal Events: Such fiscal events are triggered when there is fiscal information being generated due to an actual change of hands of a financial asset or in simple words due to a financial transaction.
Non-Transactional Fiscal Events: While there are numerous non-transactional events that occur throughout the budget cycle, these are termed Fiscal events only if they meet at least one of the following criteria:
Minimum Degree of Finality: A non-transactional event will be considered a fiscal event only when the action resulting from or the document produced from the event has definitive implications for the budgetary cycle. Examples:
A draft of the budget that has been prepared at the state line department level (DDO) and sent to the next competent authority (BCO) for approval will trigger a fiscal event. The reason for this is that it is assumed that the line department at its level has done the calculations for arriving at a final figure which then needs to be approved by the next competent authority.
With respect to any payments to be made out of state treasury, any verbal/ email-based intra-Finance department go-ahead (between State Treasury and Cash Planning Department) to make the payment will not be considered a Fiscal Event. Only the generation of Payment Advice to the concerned bank will trigger a fiscal event.
Change in ability to claim/ use/ dispose of a financial asset: A non-transactional event will be considered a fiscal event if it results in (de-)authorizing a certain individual or entity to claim/ utilize/ dispose of a financial asset. Examples:
Allocation of the budget by the department to respective DDOs authorises the DDOs to utilize the funds as planned in the budget and will trigger a fiscal event.
The fiscal information generated during each fiscal event needs to be recorded in a specific format for the purpose of exchange. Based on the current budgetary practices and guidelines followed at the sub-national and national levels in India, the fiscal events triggered during the course of the budget cycle can be classified into four major types:
Revenue Receipts: Revenue receipts comprise receipts that do not result in the creation of a liability on the government. The total revenue receipts include the State’s Own Tax and Non-Tax revenues and Grants-in-Aid and Share in Central Taxes from the Government of India. The non-tax revenues consist mainly of interest and dividends on investments made by the Government, fees and other receipts for services rendered by the Government.
Capital Receipts: The capital receipts are loans raised by the Government from the public (these are termed as market loans), borrowings by the Government through the sale of Treasury Bills, the loans received from Central Government and bodies, disinvestment receipts and recoveries of any loans and advances given.
Revenue Expenditure: Revenue expenditure is for the normal running of different Government Departments and for the rendering of various services, making interest payments on debt, meeting subsidies, grants in aid, etc. Broadly, the expenditure which does not result in the creation of assets for the Government of India is treated as revenue expenditure. All grants given by the State are also treated as revenue expenditure even though some of the grants may be used for the creation of capital assets.
Capital Expenditure: Capital payments consist of capital expenditure on the acquisition of assets like land, buildings, machinery, and equipment, as also investments in shares, etc., and loans and advances made by the State Government to boards, corporations and other institutions.
Further within each major type of fiscal event, there are varied types of events. To enable information exchange using an easily understandable and standardised format, subtypes of fiscal events are identified based on the similarity in nature of fiscal information generated due to these events.
Budget Cycle comprises the following stages:
Budget Planning
Budget Preparation
Budget Approval
Budget Allocation
Budget Execution
Budget Accounting
Budget Auditing
Additionally, Budget Planning is an activity that sits outside of the Budget Cycle and takes place throughout the year based on need. Examples:
A scheme/project announced by a state government official can happen at any point of the year, the planning for which begins right after the announcement. Thereafter, all the required approvals happen and estimates are prepared accordingly which then feed into the budget preparation phase of the budget cycle
Planning for already approved projects and planning to get approval
Process for Planning for the New Projects/ Schemes (for Revenue and Capital Expenditure)
Release of funds to DDO
Work awarded to vendor
Work bill payment to vendor
Each fiscal event extracted above will be defined in terms of
Header
Body
Array of Fiscal Line Items
Attachment
Signature
Fiscal Events and Attributes are defined here
One of the key objectives of capturing the current state processes is to ensure all the variations and varieties of processes within the defined boundary are covered. In this context, certain dimensions to be kept in mind while capturing current state processes are:
Central sector (CS) schemes are schemes with 100% funding by the Union government and implemented by the Union Government machinery. Besides, there are some other programmes that various Ministries at the Union implement directly in States and UTs which also come under CS schemes. However, in the latter, the financial resources are not shifted to States. The CS schemes are mostly formulated on subjects mainly from the Union List.
Centrally Sponsored Schemes (CSS) are the schemes by the Union government where there is financial participation by both the Union and State governments. A stipulated percentage of the funding is provided by the States in terms of percentage contribution - it may vary in 50:50, 60:40, 70:30, 75:25, or 90:10. Implementation of the CSS is the responsibility of the State/UT Governments. CSS are formulated on subjects under the State List.
State Plan Schemes (SPS) are schemes planned, funded and implemented by State governments. SPS are formulated on subjects under the State List. At times, some existing SPS become part of the umbrella schemes (CS or CSS) if they align with similar sectors/areas.
Government debt comprises Public Debt and Public Account Liabilities. Public Debt is a component raised against the Consolidated Fund of India and included in the State Budget Document. Public Account Liabilities are raised against the Government’s Public Account comprising Small Savings, PFs, Reserve Funds etc. Public Debt comprises the following two types of loans.
The state government raises debt from various sources within the country as allowed by the constitution. These debts, apart from the loans and advances received from the central government are known as internal debt of the state. The components of internal debt as per the Finance Accounts are as follows:
Market Borrowings
Special securities issued to National Small Savings Fund (NSSF)
Compensation and other bonds
Loans from financial institutions
Ways and means advance from RBI
Other loans
This includes loans and advances that the states receive from the central government and the loans from International Development Partners that are on-lent by the central government to the state government.
At present, a six-tier classification of accounts is adopted in State Budget Documents: the budget literature prepared and presented by the Government of a State to the Legislative Assembly follows a six-tier accounting classification comprising Major Heads, Sub-Major Heads, Minor Heads, Sub Heads, Group Heads and Object Heads. These are shown in the budget to ensure that financial transactions are recorded to the minutest detail.
Major Heads: are the main units of accounts classification under various sectors/ sub sectors. They normally reflect the distribution of expenditure among broad functions of the government.
Sub Major Heads: are opened under a Major Head to record those transactions which are of a distinct nature and of sufficient importance to be recorded exclusively, but at the same time allied to the function of the Major Head.
Minor Heads: are subordinate to Major or Sub Major Heads and correspond to programmes/ broad groups of programmes undertaken to achieve objectives of the functions represented by Major Heads.
Sub Heads represent schemes under programmes subordinate to Minor Heads.
Group Heads represent sub schemes under schemes and are subordinate to Sub Heads.
Object Heads represent the actual nature and form of expenditure.
Under the current budget classification of the State, a 15-digit coding pattern is adopted for expenditure (with some exceptions) and a 13-digit pattern for receipts.
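As an illustration of how such a coded budget head can be decomposed into the six tiers, the sketch below assumes a 4-2-3-2-2-2 digit split for a 15-digit expenditure code (Major, Sub-Major, Minor, Sub, Group and Object Heads); the actual digit widths used by a State's budget classification may differ.

```python
from typing import NamedTuple

class BudgetHead(NamedTuple):
    major: str
    sub_major: str
    minor: str
    sub_head: str
    group_head: str
    object_head: str

# Assumed digit widths for the six tiers of a 15-digit expenditure code:
# Major (4), Sub-Major (2), Minor (3), Sub Head (2), Group Head (2), Object Head (2).
TIER_WIDTHS = [4, 2, 3, 2, 2, 2]

def parse_expenditure_head(code: str) -> BudgetHead:
    """Split a 15-digit expenditure budget code into its six tiers (assumed split)."""
    if len(code) != sum(TIER_WIDTHS):
        raise ValueError(f"Expected a {sum(TIER_WIDTHS)}-digit code, got {code!r}")
    parts, pos = [], 0
    for width in TIER_WIDTHS:
        parts.append(code[pos:pos + width])
        pos += width
    return BudgetHead(*parts)

# e.g. parse_expenditure_head("221502101980151") -> BudgetHead(major='2215', ...)
```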
There is opaqueness in data on transfers to states. The State-wise details of transfers, information on releases to states under the various functional heads are not captured.
There is a lack of standardization of scheme classification. Plan schemes are not captured uniformly at one level. Some schemes are classified at the Sub Head level, some at the Detailed Head level, and others at the Minor Head level.
Major Heads, which are supposed to represent government functions, do not reflect the true functional character of expenditures and do not correspond to Heads of development used in the planning and resource allocation process.
The breakup of central transfers into constituent flows such as Finance Commission grants, Normal Central Assistance, Additional Central Assistance, Special Central Assistance, etc. is not captured.
There are emerging special requirements such as gender budgeting, budgeting for SC/ST, North Eastern Region (NER), that are not very well catered to by the existing system.
Administrative segment - Would need a new mapper to be created based on which it can be determined which administrative departments handle which budget codes, and would vary according to geography and time
Programme/Scheme segment - Can be derived from Minor Head, Sub Head and Group Head of existing COA
Recipient segment - Would need additional effort to be published at the budget estimate stage itself by way of registry
Target segment - Would need additional effort to be published at the budget estimate stage itself by way of registry
Object/Economic classification - Can be derived from Object head of existing COA but would also need
Location - Would need additional effort to put in place location codes by way of registry
New fields might need to be added for accrual accounting - for instance, accounts receivable and payable
One bank account from which all scheme expenditure is incurred - TSA for a particular scheme
All Implementing Agencies spend scheme funds from out of subsidiary accounts to SNA or through zero balance accounts linked to SNA
Systems involved for implementation of SNA - PFMS, State IFMS, Bank Systems
SNA - mandatory for all Central Schemes, but States too are gradually porting their State plan schemes to PFMS and would want SNA operational for those schemes
Better cash and debt management
Better reconciliation of government and bank data
Reduces the government debt servicing cost
Helps maximise the return on investments of surplus cash
Enhances the overall effectiveness of the PFM system
Structure has been created to ensure coordination between all stakeholders
EAT and DBT modules are the two PFMS modules relevant for SNA
Preliminary arrangements needed for SNA implementation
Configuring scheme code and scheme hierarchy in PFMS
Mapping scheme and scheme components to IFMS CoA
Arrangement with scheme banks
Opening SNA accounts in banks
Registering beneficiaries/payees in PFMS
Release and expenditure under SNA
Varying levels of maturity of different State IFMS
Exchange of data between IFMS and PFMS, especially with respect to mapping and timing issues, leading to a heavy reconciliation burden
Integrating beneficiary database with IFMS and PFMS
Reconciliation and closing of existing bank accounts of implementing agencies
Disruptions to existing arrangements with banks
Disaggregated Previous year Budget Utilization data as input for Budget Review & Rationalization: Current process of review and rationalization of budget shared by Line Departments is driven by broad Fiscal Deficit targets and ball-park estimates of how expenditure under a particular budget head will increase or decrease. Recording each fiscal event will allow easy extraction of data
Outcomes/ Outputs of Budget Spent in the Previous Year by the Department as input for Budget Review & Rationalization
Near real-time view of performance against budget
Use of ‘Plan’ fiscal event data to inform about admitted liabilities and expected bills with respect to long term projects
Disaggregated Previous year Budget Utilization data as input for Budget Review & Rationalization: Current process of review and rationalization of budget shared by Line Departments is driven by broad Fiscal Deficit targets and ball-park estimates of how expenditure under a particular budget head will increase or decrease. Recording each fiscal event will allow easy extraction of data
Outcomes/ Outputs of Budget Spent in the Previous Year by the Department as input for Budget Review & Rationalization
Shortened time for allocation against approved budget
Visibility into receipts deposited into government accounts other than the State Treasury
Monitor Bill-Ageing
Near real-time view of performance against budget
A gender budget, or a gender-responsive budget, is not a separate budget for women but an understanding of any form of public expenditure or revenue, or method of raising public money, from a gender perspective. Given the understanding that budgets are the constructs of specific economic, sociocultural and political context of any country (though prima facie it appears as gender neutral) and that there is often a lack of financial backing for gender equality policies, gender budget was proposed as a fiscal tool to translate the government’s gender commitments to fiscal commitments.
Gender responsive budgeting provides the framework for undertaking this exercise, questioning the gender neutrality of budgets, and scoping the need for targeted intervention to make budgets more gender responsive and help reduce gender inequality. Needless to say, there is a need for sex-disaggregated data and an understanding of the gender relations that would feed into each of the five stages of the budget cycle (budget planning, approval, execution, accounting, and auditing) to make gender responsive budgeting a meaningful and fruitful exercise. A useful lens could be to question what kind of impact does a fiscal tool like Gender Budget have on gender (in)equality.
The Union Government and the State Governments currently publish a Gender Budget Statement (GBS) as part of the Budget Document. The GBS is an accounting statement that presents the government’s allocation of funds and expenditure thereof on schemes substantially meant for welfare of women and children[8]. Like most budget documents, the GBS presents the Budgeted Estimates of the current Fiscal Year (FY), Revised Estimates of the previous FY and the Actual figures of the FY previous to the last one.
The GBS is usually prepared and presented under two parts – (i) Part A: reflecting schemes that have 100% allocation for women and girl children, and (ii) Part B: reflecting schemes with 30% or more (and less than 100%) allocation for women and girl children[9]. The former are also called ‘Gender Specific Schemes’ and the latter ‘Gender Sensitive Schemes’. The sum total of the two parts constitutes the size of the total gender budget of the government. It may be noted that the schemes under Part B are often called composite schemes as their beneficiaries are expected to be across genders, and the challenge is then to identify the share of scheme expenditure that may specifically help the cause of gender equality (which is then to be reported in Part B of the GBS).
Various Ministries/Departments are expected to identify ongoing programmes/schemes/sub-schemes that are meant to primarily benefit women and girl children, and depending on the share of expenditure of those schemes, they are reported under Part A or Part B of the GBS. For instance, the allocation for the Janani Suraksha Yojna scheme, a safe motherhood intervention under the National Health Mission, is reported under Part A of the GBS by the Department of Health and Family Welfare. However, a share of those schemes under the National Rural/Urban Health Mission that would benefit the entire population, including women and girl children, would be reported under Part B of the GBS depending on the share of scheme expenditure meant for women and girl children beneficiaries.
Guidelines exist on how a Gender Budget needs to be prepared but there are no standard processes followed across governments. To begin with, for instance, a GBS is typically structured under Part A and Part B as mentioned in Section 2.2. Neither is this structure uniform across governments (Kerala, for instance, reports all those allocations under Part B of GBS where allocations for gender concerns in a department are less than 100 %), nor is there a standard methodology to assess which departments/ministries should report scheme related expenditure in GBS and what allocations should be made by them to address existing gender concerns.
Gender-disaggregated data are not readily available in most departments/ministries for the various ongoing schemes. For instance, beneficiary registries for scheme/s are often not available which can help identify if the scheme is targeted towards a particular gender or would impact one gender differently than another. As a result, gender assessment frameworks are rarely used, and reported allocations in the GBS often end up being a by-product of subjective assessments of governments and concerned stakeholders involved in the process[10].
The GBS rarely follows the Chart of Account (COA) structure to report allocations and expenditure of respective departments/ministries. That gender is not an attribute in the current COA is a cause of concern, but more importantly, the lack of clarity on what the GBS captures - programme or scheme or sub-scheme or detailed scheme level allocations and expenditure - is what obscures information on the real intent of the GBS. Barring a few exceptions, like the Odisha GBS which reports allocations by Major Head, most GBS, including that of the Government of India, report allocations under Part A and Part B while being silent on the COA. This is a challenge since compiling information for the GBS then seems like an ad-hoc process with no clarity on what constitutes a Gender Budget. To the reader of the GBS, it is not clear what proportion of the total Demand for Grants of a department/ministry goes on to be reported in the GBS, and whether the gendered allocations are for the entire programme (Minor Head level) or scheme (Sub Head level) or sub-scheme (Detailed Head level). This absence of crucial information makes it difficult to slice and dice information at the aggregate and disaggregate levels (and hence to derive meaningful insights), and is a major blow to overall fiscal accountability and transparency.
A preliminary enquiry into the process of preparation of GBS suggests that it is an ad-hoc process with less objective and more subjective criteria involved in the same. As a result, not surprisingly, a fair amount of human effort is required to collate fiscal information for a GBS at present.
Before we begin to explore how iFIX can enable the stated objective, it is important to underscore that iFIX is a technology platform and is hence capable of handling objective, not subjective, information (at least with the current architecture). This is a critical aspect to understand and appreciate since it helps define the scope of iFIX with respect to gender budgeting. Gender budgeting, both in the nascent stage it is at in India and in the more mature forms it takes in a few countries, involves a mix of objective and subjective criteria that go into designing it. Hence, to begin with, iFIX can enable only those elements of the GBS for which objective logic forms the basis of decision making. The hope is that iFIX, along with other reforms in public financial management, will gradually enable the generation of more (and better quality) fiscal information that could feed into incorporating other subjective concerns in the gender budgeting exercise as well.
A related aspect to consider for this exercise is that gender budgeting encompasses a variety of schemes - direct benefit schemes that are targeted for women (for instance, scholarship scheme for women); schemes targeted towards creation/maintenance of assets that would primarily benefit women (for instance, creation of creche facility in workplaces); schemes with the intention of creating gender awareness (for instance, gender sensitization in workplaces). We believe that iFIX is more suited to handle schemes of the first two kinds where beneficiary data would be available (or could be made available with the intervention by competent authorities)[11]. The gender-disaggregated data could thus feed into designing gender-specific allocations of the involved departments/ministries. Therefore, we limit the scope of this exercise to explore how iFIX can enable generation of gender-disaggregated data for schemes of a specific nature, as elucidated above.
Lastly, a crucial consideration is that the nuance of gender budgeting lies in ex-ante planning and ex-post outcome analysis. Since iFIX platform enables exchange of fiscal information, the ex-post outcome analysis of the gender budget is out of the current scope. The utility of iFIX thus lies in enabling proper (informed) planning of gender allocations at the department/ministry level that could then feed into accounting and auditing[12].
To explore how iFIX can help in generation of gender-disaggregated data for schemes reported under Part A and Part B of the GBS, we explore what fiscal attributes and data registries would be needed.
The next crucial aspect is to identify whether the entire scheme is targeted towards the intended beneficiaries or a sub-scheme (or alternatively, proportion of scheme expenditure targeted towards the intended beneficiaries). This is to not just assist in accurate planning and estimation of funds (scheme being reported at the Sub Head level, and sub-scheme at the Detailed Head level in the current COA), but also to ensure that the subsequent reporting and accounting can then follow making it easier to extract the gender budget component from the overall demand for grant of the involved department/ministry, which would then be reported in the GBS. It will be a crucial factor to determine accurate identification and then reporting of schemes/sub-schemes in Part A and Part B of the GBS.
At this stage, we believe that it suffices to have the attributes of scheme name and objective in the plan and estimate fiscal events, mapped to the beneficiary registry (Target Segment) to derive relevant fiscal information needed for preparation of GBS. As stated earlier, since execution of the budget doesn’t distinguish between gender specific expenditure and other expenditure, no additional attributes other than those already identified for the remaining fiscal events (demand, bill, credit/debit) would be needed.
The fundamental role of iFIX would be to enable planning and estimation of funds for gender specific or gender sensitive schemes using data available from the relevant registries. Linking the scheme to the correct beneficiary registry is the objective criterion needed to ensure that accurate gender-disaggregated data can be used to design better gender budgets. The expenditure incurred on those schemes and then the subsequent accounting would then help establish the chain of custody and trace the flow of funds from planning to the accounting stage. Availability of this fiscal information across the entire budget cycle could then also facilitate gender audits.
To illustrate the approach stated above, let us consider the Sarva Shiksha Abhiyan (SSA), the Government of India's flagship programme for the achievement of Universalization of Elementary Education (UEE) in a time-bound manner. The programme is targeted towards making free and compulsory education for children in the 6-14 years age group a Fundamental Right. Hence, if one were to map the target segment, the beneficiary registry for this programme would be all eligible children (irrespective of gender) in the stated age group. However, the programme covers many schemes (and sub-schemes), some targeted primarily towards girl children. These components of the programme would be reported in the GBS, some in Part A and some in Part B. A scheme under the SSA programme, the objective of which is to construct a primary school only for girls, would ideally be reported under Part A of the GBS by the Department of Education. On the other hand, if a sub-scheme under the programme entails construction of toilets for girl children in already existing primary schools, then such allocation should ideally be reported at the sub-scheme level and in Part B of the GBS by the Department of Education.
This distinction is crucial at the planning stage itself so that subsequent accounting can help report accurate expenditure in the GBS. For instance, the entire allocation and the expenditure thereof on the scheme - construction of primary schools for girls - would be reported under Part A of the GBS; while the allocation and the expenditure thereof on the sub-scheme - construction of toilets for girls in primary schools - would constitute only a percentage of the allocation (expenditure) on the scheme (construction of primary schools). This share of expenditure on the sub-scheme that is intended to have the gendered impact (encourage retention of girls in schools, who otherwise would have dropped out due to lack of toilet facilities), if it constitutes 30% or more of the total scheme expenditure, is what would be reported in Part B of the GBS.
How iFIX can use the objective criteria explored above to help generate gender-disaggregated data (which in turn could feed into the design of better gender budgets) can be understood by considering the scheme examples stated above. Let us first consider the scheme reported under Part A of the GBS - construction of primary schools for girls. From the beneficiary registry of all eligible children in the age group of 6-14 years, iFIX would need to fetch the subset of intended beneficiaries for this scheme, i.e. the number of girls from this registry who are eligible for primary school education. This gender-disaggregated data made available from the beneficiary registry (maintained and updated regularly by the government and other relevant authorities) is what will form the basis for a needs-assessment study by the Department of Education in the planning stage. The beneficiary data (number of eligible girl children), along with norms like the ideal student-teacher ratio, would help the department determine the number of schools to be constructed, the staffing requirements attached to it, along with the other infrastructural requirements needed to construct and run a well-equipped school. Thus, instead of relying on subjective criteria, the availability of a correct beneficiary registry can help determine the estimated budget for the scheme in a somewhat objective manner. Since this scheme is targeted entirely towards girl children, the number derived from this exercise would go on to be reported as the Budget Estimate figure for the current FY under Part A of the GBS.
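To make the needs-assessment step above concrete, the sketch below filters a hypothetical beneficiary registry for eligible girl children and derives a school count and budget estimate from assumed planning norms. All field names, norms and figures are illustrative and not drawn from any actual registry or government norm.

```python
import math

# Hypothetical beneficiary registry records (a real registry would be much richer).
beneficiary_registry = [
    {"id": 1, "age": 8,  "gender": "F", "location": "Block-A"},
    {"id": 2, "age": 12, "gender": "M", "location": "Block-A"},
    {"id": 3, "age": 10, "gender": "F", "location": "Block-B"},
    # ... many more records
]

# Assumed planning norms (illustrative only).
STUDENTS_PER_SCHOOL = 300
COST_PER_SCHOOL = 15_000_000  # in rupees

def estimate_girls_school_budget(registry: list[dict]) -> dict:
    """Estimate schools and budget needed for eligible girl children (age 6-14)."""
    eligible_girls = [
        b for b in registry
        if b["gender"] == "F" and 6 <= b["age"] <= 14
    ]
    schools_needed = math.ceil(len(eligible_girls) / STUDENTS_PER_SCHOOL)
    return {
        "eligible_girls": len(eligible_girls),
        "schools_needed": schools_needed,
        "estimated_budget": schools_needed * COST_PER_SCHOOL,
    }
```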
On the other hand, for the Part B scheme - construction of toilets for girls in primary schools - the starting point again has to be the beneficiary registry. The exercise stated above has to be repeated to do a needs-assessment - how many toilets need to be constructed in the schools depends on the number of girl students enrolled in those schools. The subset of the beneficiary registry would again be the entry point to arrive at the allocation needed for this sub-scheme. Once this is done, needs-assessment would further entail determining cost of constructing toilets and equipping them with needed facilities. This exercise would yield an estimate that would then form the basis for arriving at the total cost of this project (sub-scheme) and obtaining necessary sanctions from competent authorities. This estimate would however be a proportion of the total scheme cost (the total cost for constructing/maintaining primary schools), and would be reported under Part B of the GBS (assuming it is at least 30 percent of the total scheme cost).
The design of scheme plans and derivation of scheme estimates using the gender-disaggregated data (from the relevant registries) would then feed into the fiscal information available on iFIX. This would enable better and more accurate gender budgeting by design as the fiscal events at each stage of the budget cycle would then capture the relevant fiscal attributes which would give visibility to gender allocations in the budget. This visibility would ensure that accurate identification of schemes (or sub-schemes) can happen at the department/ministry level, accurate estimates can be prepared, and then meaningful analysis be undertaken (for instance, what proportion of the department's budget is reported in the GBS can be derived by identifying programmes, schemes and sub-schemes (the relevant COA level) from the attributes of scheme objective and target segment, or beneficiaries).
It is to be emphasized that the ability of iFIX to enable generation of gender-disaggregated data and gender budgeting rests on the ability of the government department to proactively engage in objective needs-assessment and maintain (and thereafter judiciously utilize) the beneficiary registry to determine the scheme estimates (or allocations) right in the planning stage and estimation stages. The organization, and the Mission team in particular, thus have to actively engage with the government departments/ministries to channelize the latter’s efforts in the needed direction.
Public health is generally a shared fiscal responsibility between the centre and the states in most LMICs. This also adds complexity to the way it is financed and how it affects out-of-pocket (OOP) spending for beneficiaries.
Government expenditure on health affects Universal Health Coverage (UHC) at the end of the day, hence it is important to
Ideal budget planning should happen bottom up right from village to centre. Unfortunately, due to paucity of time, processes, systems, standards, it ends up happening on an incremental basis discounting the current needs of the beneficiaries.
Refer to the Miro board here
*This is currently based on a few preliminary interactions we have had with ecosystem partners, our experience and existing material available on this topic. As we engage with them and deep-dive into the topic, this section will be refined to capture the problems/gaps in details.
Several siloed systems and datasets have information which can be synthesized and made available to the administration for decision making.
For e.g.
Service delivery
E.g. Number of C-sec deliveries at the facility level,
Generally sits in registers or point solutions or HMIS at the facilities
Programmatic data
E.g. epidemiological and programmatic data on malaria incidence in a particular area, number of trainings conducted
Generally sits in registers, or Excels
Financial data
E.g. amount to be spent on trainings, transport, HR, procurement etc
Generally sits in MS Excel, point solutions, registers or paper formats, IFMIS, PFMS and other systems
Key attributes - (not only NHM)
Health data
E.g. health data of individuals stored in electronic/non electronic formats, MCP card, household registers etc.
Generally sits in HMIS, point solutions, registers
Need to have an interface between programmatic, service delivery and financial data
Identify areas of potential duplication or to improve efficiency
Review planned activities more accurately at the central level
Sub-Types could instead be considered as Primary Fiscal Events. - Addressed.
Plan as sub-type: Definition does not include upstream view. - Proposed solution to break the process into Estimate (Across years and precedes budgeting), Budget and Plan. Yet to be reflected in the definitions. [Plan for getting the budget (Budget proposals) b. Once approved, there is planning for using that budget (Execution) c. New scheme ]
Classification of
Utilization Certificate is a data attribute related to a bill ( will only be an attachment -does not trigger a fiscal event )
Financial Sanction (Check with KK once) does it trigger a fiscal event.
Classification of Advance Bill
Advance against a project (EAT) (Adjusted against Expense)
Advance given to contractors (Adjusted against Contractor bills)
Within Bill - create a sub-type (Advance Bill - Internal and External Bill)
Reconsider if Auditing will trigger fiscal events.
Option 1- Need to introduce new fiscal event types to incorporate this. Debit and Credit may be re-defined to incorporate the accounting aspect.
Option 2 - Need to incorporate the correct data attributes in current fiscal events to enable auditing.
Reconsider if Accounting will trigger fiscal events. Posting of entries is a separate process with the AG Office. May need to be considered a separate fiscal event as we cannot change the existing systems. (Add to PFM Discussion)
Failure of payment leads to a credit to the Suspense Account; how will this be dealt with as a fiscal event? Inputs from the ground are needed to elaborate this further.
Chart of Accounts
There are already defined codes for RLBs - can these be directly leveraged?
Few of the registries mentioned in the iFIX diagram seem to be components of the COA, is there alignment between the two? If not, we need to define what those registries mean, what information is available on those, and what policy reforms would be needed to put those in place in order for data to be generated by governments in the said format or capturing the details needed.
For Gender Budgeting, the COA needs to reflect the same. How can this be reflected in the design?
COA mapping with source systems? For non-treasury transactions.
Is tenant mandatory? Will tenant details be available in all use cases when we onboard various systems on iFIX? (To be discussed with Ghanshyam and Manish.) This will not have direct implications for the current design stage; if there are exceptions at a later stage, we can account for those.
Do we need to establish chain of transactions or fiscal events-
How are the various Fiscal events linked to corresponding events? E.g. demand and Receipt, Bill and Payment.
How is modification or cancellation handled with each of these ? Reversals*
What are the controls that need to be addressed?
We should ideally allow fiscal events to be exchanged independently of preceding or succeeding events. To be discussed further.
Do we handle both use cases - cash basis and accrual basis? Needs to be explored.
A separate fiscal event may need to be introduced for receivables and liabilities in the case of cash basis. Both the generation of the receivable and the actual receipt are to be recorded against the same budget head.
Should budget checks happen on iFIX? If so, usage needs to be tracked.
Further refine the data attributes as per the Fiscal Event format - Header, Body, Signature, etc. - and abstract sub-types by grouping attributes as mandatory and variable. The next step would also include developing standardized definitions for identified attributes and establishing hierarchies within them, e.g. does ‘Department’ include the state FD or only line departments, and how are Scheme and Project related? (Product Team and Program to work together on this.)
Budget Preparation for Capital Receipt (Raising Debt and Recovery of Loans and Advances)
Budget Preparation for Capital Expenditure - (Debt Repayment and Loans and Advances by the State)
Request for Release of Tranche of a Sanctioned Loan (Debt) into State Treasury - Capital Receipt
Recovery for Loans and Advances (Debt) into State Treasury - Capital Receipt
Gather on-ground inputs for validating and enhancing audit and accounting related processes
Identify any variations in budgetary processes for various kinds of schemes - CS, CSS, SPS. Current processes primarily address the processes pertaining to CSS.
Complete the understanding of Surrender of Excess/ Savings and Supplementary Grants process, Re-appropriation of grants, and opening new heads of accounts. (Meeting scheduled with Budget Officer, Punjab FD for 23rd May 2022).
Revenue Receipt Collection - Explore if there is an offline version of the process and document the variation resulting from it.
SNA Related aspects to be explored
How many CSS schemes are currently operational in DWSS for which SNA will become the default route for fund transfer?
What is the status of implementation of SNA for these schemes? Have Single Nodal Accounts been opened for the CSS schemes in the department?
What challenges are being faced by the DWSS to implement SNA?
Are registries for all implementation agencies, beneficiaries, vendors, etc. in place for SNA implementation?
To put in place SNA, what integrations with IFMS and PFMS would be needed?
Would SNA help track the flow of funds in the execution stage completely? Would information about fund flow and utilization be available in the form of dashboards to concerned stakeholders for decision making?
Would scheme codes on SNA be similar to scheme codes in the State budget?
For a CSS, how would iFIX provide fund flow information from budget preparation to audit stage? SNA would have information on budget execution in most likelihood.
Continue work on Gender and Health use cases
Document potential impact of iFIX across processes and validate the same.(Annexure II). High-priority use cases to be used as inputs for identifying the prototypes.
Department of Rural Development & Panchayats (FFC grants are routed through RD) | PSPCL | GPWSC | DLGP (for instance, DLGP works in urban areas in providing water supply and sanitation services, so there is merit in exploring if DWSS interacts with DLGP and for what purposes.) ↑
Once financial sanction is obtained and a COA identified (if provision exists), expenditure can be incurred in the same financial year after the budget provision for it is included in the Revised Estimates for the current financial year. If a COA cannot be identified for the scheme/project, then when it becomes part of the budget provision, FD has to create a new subhead/detailed head and then it is put up for BFC scrutiny as part of the preparation process. ↑
DDO is the first level of authority, everything below that level is assessment. Therefore, only the estimates prepared by the DDO can be considered an estimate with a certain degree of finality. ↑
Details not included here, since Budget Document is assumed to be the primary document for tracking the complete budget cycle from iFIX perspective. ↑
In most states the process of allocation to DDOs does not involve the FD, while in Punjab a different point of view was presented by some stakeholders. This needs to be confirmed again. ↑
The process needs to be extended for offline collection of receipts as well, and how they are recorded in the treasury. We will be exploring this with GoP officials. ↑
At times, BCO may surrender the savings at his/her level itself on IFMS. The process needs to be validated with Finance Department officials in Punjab. ↑
The term gender here is used in the very limited sense of the binary classification of gender, but ideally the gender budget is supposed to incorporate the gender concerns of transgender persons as well (LGBTQIA+). ↑
This is based on inputs received from various subject matter experts, a few of whom are directly involved in the preparation of GBS for governments (Union or State), as well as on our in-house research. We are aware that exceptions do exist across governments as well as departments/ministries. ↑
For instance, gender sensitisation programmes in schools could play a great role in addressing gender concerns in society, but an approach that considers gender-disaggregated data may not be the best one to determine the allocation for such a programme. Similarly, it is difficult to use iFIX to determine allocations for schemes like capacity building in government departments/ministries to undertake GBS; hence these are best left out of the scope of iFIX. ↑
It may be noted here that the execution of gender budget is no different from execution of funds meant for any other purpose. ↑
This migration is MUKTA-specific; it will not be part of our master code.
After installation of all required services, port-forward the program service and create programs for each ULB. A sample curl is added below.
Configure all the program codes that you created for each ULB in MDMS.
IFMS adapter data migration for mukta-adapter and program service
Update environment variables according to the environment.
Build the Python migration script and deploy it in the environment.
Port forward the service and call the API to start the migration.
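A minimal sketch of the port-forward and program-creation call mentioned in the steps above. The namespace, port, endpoint path and payload fields are illustrative assumptions, not the exact contract; substitute the actual sample curl shared for your environment.

```bash
# Port-forward the program service to localhost (namespace and port are assumptions).
kubectl port-forward svc/program-service 8080:8080 -n egov

# Hypothetical program-creation call for one ULB; the path and body fields are illustrative only.
curl -X POST "http://localhost:8080/program-service/v1/program/_create" \
  -H "Content-Type: application/json" \
  -d '{
        "header": { "message_id": "prog-create-001", "action": "create" },
        "message": {
          "tenantId": "pg.citya",
          "name": "MUKTA - City A",
          "programCode": "MUKTA-CITYA"
        }
      }'
```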
Technology used for the platform
Java 8
Apache Kafka
Postgres
Elasticsearch
API spec - Click below to view it in Swagger Editor.
State Finance Department (FD) and [1] at state
For further details on the context of this document and our approach to reimagining Public Finance Management please refer to the .
Step 1 - Define what a fiscal event is (and its types), i.e. what the scope of relevant fiscal information is from an iFIX perspective (refer to the section)
Step 2 - Document the current public finance management processes at a generic level, i.e. defined using actors, verbs, inputs and outputs to ensure they are representative of all minor variants of the process at the level of various sub-national governments in India. (refer to tables in , columns 1-6)
Step 3 - Apply the definition of the fiscal event to the current processes to collapse or abstract the fiscal event essence of the whole process to a set of fiscal events. (refer to tables in , columns 7-9).
Step 4 - Extract the current data attributes used for fiscal information exchange to define the format for fiscal information exchange. (refer to tables in , column 10)
# (1) | Actors (2) | Input (3) | Verb & Noun (4) | Output (5) | Attributes (6) | Does it trigger a fiscal event (7) | Fiscal Event Type (8) | Fiscal Event Sub Type (9) | Data Attributes for iFIX (10) |
---|---|---|---|---|---|---|---|---|---|
iFIX is an information exchange platform that facilitates the exchange of fiscal information across central, state and local governments, and also between various departments, in a standardized manner. The objective of this exercise is to explore whether iFIX can help collate relevant fiscal information for gender budgeting in a more standardized, efficient, timely and transparent manner, and whether iFIX can play a role in incorporating gender concerns in the budgeting exercise.
As stated above, the first point of intervention would be in the planning stage; hence we look at the . The attributes important for GBS in this stage are the scheme/project name and the name of the department. These are important to identify the department which is making the gendered allocation as well as to identify the scheme that is being run to address the gender concerns. The scheme objective is crucial to understand the nature of the scheme - who it is targeted at, for what purpose, what the nature of the government's intervention would be (direct benefit transfer, construction/maintenance of an asset, gender awareness, etc.), what the components of the scheme are (sub-schemes therewith), who is responsible for designing and executing the scheme[13], etc.
This has to be followed by having the beneficiary registry in place. At present, the COA does not capture any information on the intended beneficiary of any government expenditure. The COA proposed by the Sundaramurti Committee Report, however, suggests the introduction of a 'Target Segment' that would be used to identify expenditures targeted at special policy objectives, including women-centric expenditures, which would enable the capture of budget and accounting data pertaining to gender budgeting. In iFIX, this would mean introducing a Beneficiary Registry that would be a constituent of the . The fiscal events of plan and estimate can then be linked to this registry/common reference data to derive more value from the fiscal events data while also allowing data users to run better analytical queries.
Debt-related processes currently available in process map format (Link: ) are to be translated into specifications. These processes are listed below:
All content on this page by eGov Foundation is licensed under a Creative Commons Attribution 4.0 International License.
This information could be critical to report data under the administrative segment, as proposed in the new COA by the Sundaramurti Committee Report. ↑
Use specific branch for iFix migration.
1. | Department/s | Announcement of the new Project/ Scheme by Chief Minister/ Department Minister | Create project/scheme | New Project/ Scheme created | Announcement Date, Place, Project/Scheme Name, Department Name, Financial Year, Announcement By | N | - | DPR Preparation and Approval CORE: Name of project, Type of Project, Goal & Objectives (Outcome) of the Project, Work Plan, Source of funding, Major milestones, Outputs of each activity, Cost of each Activity, Total Cost of the Project ANCILLARY: Administrative Approval Date, Approved By, Approval Remarks, Rejection Remarks, |
2. | HoD | New Project/ Scheme created | Draft Detailed Project Report | Drafted DPR | Name of project, Type of Project, Goal & Objectives (Outcome) of the Project, Work Plan, Source of funding, Major milestones, Outputs of each activity, Cost of each Activity, Total Cost of the Project | Y | Plan | DPR |
3. | Administrative Department (AD) | Drafted DPR | Review DPR | Approved DPR | Administrative Approval Date, Approved By, Remark | Y | Plan | DPR |
2. | Administrative Department (AD) | Drafted DPR | Review the DPR | Rejected DPR Sent to HoD for review | Remarks for rejecting the DPR | Y | Plan | DPR |
4. | Chief Minister/ Department Minister | Approved DPR | Review the DPR | Approved DPR with administrative sanction | Administrative sanction date, Approved By, Remark | Y | Plan | DPR |
3. | Chief Minister/ Department Minister | Approved DPR | Review the DPR | Rejected DPR Sent to AD for review | Remarks for rejecting the DPR | Y | Plan |
5. | AD/ HoD | Approved DPR with administrative sanction | Prepare Financial sanction proposal Prepare case for financial sanction | Financial sanction proposal | Project Name, total Budget, Multi Year Plan, Financial Year, Project Start Year, Project Duration, Remark, Proposed COA, Amount | Y | Plan | Financial Sanction Preparation and Approval CORE: Name of project, Type of Project, Goal & Objectives (Outcome) of the Project, Work Plan, Mode of funding, Funding Agency, Major milestones, Outputs of each activity, Cost of each Activity, Total Cost of the Project ANCILLARY: Administrative Approval Date, Approved By, Approval Remarks, Rejection Remark |
6. | FD/Planning Department | Financial sanction proposal | Review financial sanction proposal | Approved, Generate Financial Sanction | Financial Year, Sanction No., Sanction Date, Sanction Amount, COA | Y | Plan |
5. | FD/Planning Department | Financial sanction proposal | Review Financial Sanction | Rejected Objection sent to HoD for consideration and review | Remarks for rejecting financial proposal | Y | Plan |
6. | AD/ HoD | Approved DPR with financial sanction | Prepare plan for budget provision and send to the FD via BFC | Budget provision[2] (as RE for the current fiscal year) | Department Name, Project Name, Project Amount, Financial Year, COA | Y | Plan |
# (1) | Actors (2) | Input (3) | Verb and Noun (4) | Output (5) | Attributes (6) | Does it trigger a fiscal event (7) | Fiscal Event Type (8) | Fiscal Event Sub-Type (9) | Data Attributes for iFIX (10) |
1. | Finance Department | Initiate budget process | Issue Budget Circular and Budget Calendar Budget Circular is issued to all departments inviting estimates of receipts and expenditure of the respective department | Budget Circular and budget calendar issued | Department name, annual receipts and expenditure estimates (BE and RE both), Demand for Grant details, Details of competent authority (BCO, DDO), dates | N | - | - |
2. | [3]Estimating Officer / DDO/HoD | Budget Circular | Prepare revenue receipt estimates Estimating officers prepare Budget Estimate for current FY and Revised Estimate for previous FY based on historic trends, projection of future demand and effect of any policy change to be formulated by the department/State | Budget Estimate for current FY and Revised Estimate for previous FY created | Department Name, COA details, projection of demand (for service/goods) and associated revenue receipt - gross amount, arrears, refunds, existing rate of tariff/fee/tax, proposed change in tariff/fee/tax as sanctioned by government | Y | Estimate | Proposed Estimate | Inputs for Department Level Proposed Budget CORE: Department Name, COA details, projection of demand and associated revenue - gross amount, arrears, refunds, ANCILLARY: Existing rate of tariff/fee/tax, proposed change in tariff/fee/tax as sanctioned by government. |
3. | Estimating Officer / DDO | Budget Estimates - current FY and Revised Estimates - previous FY | Upload on IFMS | Budget Estimates and Revised Estimates uploaded on IFMS | Department Name, COA details, estimated revenue receipt - gross amount, arrears, refunds | Y | Estimate |
4. | HoD | Budget Estimates and Revised Estimates | Scrutinize the estimates | Approved/rejected/modified estimates | Department Name, COA details, estimated revenue receipt - gross amount, arrears, refunds | Y | Estimate | Department Level Proposed Budget CORE: Department Name, COA details, estimated revenue receipt - gross amount, arrears, refunds ANCILLARY: Remarks for cuts and modifications introduced by HOD/ BCO etc. |
5. | AD | Approved estimates by BCO | Scrutinize the estimates | Approved/rejected/modified estimates | Department Name, COA details, estimated revenue receipt - gross amount, arrears, refunds | Y | Estimate |
6. | FD | Approved estimates by AD | Constitute BFC | BFC constituted | Not Applicable | N | - | - |
7. | BFC | Approved estimates by AD | Review estimates | Approved/rejected/modified estimates by BFC | Department Name, COA details, estimated revenue receipt- gross amount, arrears, refunds | Y | Estimate | State Level Proposed Budget CORE: Department Name, COA details, estimated revenue receipts - gross amount, arrears, refunds ANCILLARY: Remarks for cuts and modifications introduced by BFC, Cabinet etc. - |
8a. | Finance Department | Approved estimates by BFC | Consolidate budget estimates for cabinet approval FD makes any necessary changes to the received estimates and consolidates department-wise detailed estimates | Budget documents | Department Name, COA, estimated revenue receipts | Y | Estimate |
8b. | Finance Department | Approved estimates by BFC | Communicate details of finalized estimates to concerned ADs of departments for information | Consolidated department-wise estimates | Department Name, COA details, estimated revenue receipts - gross amount, arrears, refunds | N | Estimate |
9. | Finance Department | Budget documents | Prepare memorandum based on the budget documents and submit to the Cabinet for approval | Memorandum containing all budget documents | [4]Multiple Documents - Annual Financial Statement, Receipt Budget, Budget At A Glance | N | - | - |
# (1) | Actors (2) | Input (3) | Verb and Noun (4) | Output (5) | Attributes (6) | Does it trigger a fiscal event (7) | Fiscal Event Type (8) | Fiscal Event Sub-Type (9) | Data Attributes for iFIX (10) |
1. | Finance Department | Issue Budget Circular along with Budget Calendar Budget Circular is issued to all departments inviting estimates of expenditure (Revenue) of the respective department | Budget Circular along with budget calendar issued | Department name, annual revenue expenditure estimates (BE and RE both), Demand for Grant details, Details of competent authority (BCO, DDO), dates | N | - | - |
2. | Estimating Officer / DDO/HoD | Budget Circular | Prepare expenditure (Revenue) estimates Estimating officers prepare Budget Estimate for current FY and Revised Estimate for previous FY based on historic trends, projection of future demand and effect of any policy change to be formulated by the department/State | Budget Estimate for current FY and Revised Estimate for previous FY created | Department Name, COA details, estimated revenue expenditure, changes in pay scale, no. of employees, electricity tariffs, interest rates etc. | Y | Estimate | Inputs for Department Level Proposed Budget CORE: Department Name, COA details, estimated revenue expenditure ANCILLARY: changes in pay scale, no. of employees, electricity tariffs, interest rates etc. |
3. | Estimating Officer / DDO | Budget and Revised Estimates | Upload on IFMS | Budget and Revised Estimates uploaded on IFMS | Department Name, COA details, estimated revenue expenditure | Y | Estimate |
4. | HoD | Budget and Revised Estimates prepared by DDO | Scrutinize the estimates | Approved/rejected/modified estimates | Department Name, COA details, estimated revenue expenditure | Y | Estimate | Department Level Proposed Budget CORE: Department Name,COA, estimated revenue expenditure ANCILLARY: Remarks for cuts and modifications introduced by HoD, AD etc. |
5. | AD | Approved estimates by HoD | Scrutinize the estimates | Approved/rejected/modified estimates | Department Name, COA details, estimated revenue expenditure | Y | Estimate |
6. | Finance Department | Approved estimates by AD | Form Budget Finalisation Committee to scrutinize the submitted estimates | Approved/rejected/modified estimates by BFC | Department Name, COA details, estimated revenue expenditure | Y | Estimate | State Level Proposed Budget CORE: Department Name, COA, estimated revenue expenditure ANCILLARY: Remarks for cuts and modifications introduced by BFC, Cabinet etc. |
7a. | Finance Department | Approved estimates by BFC | Prepare budget documents for submission to cabinet for approval FD makes any necessary changes to the received estimates and consolidates department-wise detailed estimates | Budget documents | Department Name, COA, estimated revenue expenditure | Y | Estimate |
7b. | Finance Department | Approved estimates by BFC | Communicate details of finalized estimates to concerned ADs of departments for information | Consolidated department-wise estimates | Department Name, COA details, estimated revenue expenditure | Y | Estimate |
8. | Finance Department | Budget documents | Prepare memorandum based on the budget documents and submit to the Cabinet for approval | Memorandum containing all budget documents | Multiple Documents - Annual Financial Statement, Expenditure Budget, Budget At A Glance | N | - | - |
# (1) | Actors (2) | Input (3) | Verb and Noun (4) | Output (5) | Attributes (6) | Does it trigger a fiscal event (7) | Fiscal Event Type (8) | Fiscal Event Sub-Type (9) | Data Attributes for iFIX (10) |
1. | Planning Department | Issue planning Circular along with meeting Calendar Planning circular is issued to all departments inviting ceiling of expenditure (Project/Scheme) of the respective department | Planning Circular along with meeting calendar issued | Department Name, Meeting Date | N | - | - |
2. | Department HoD | Discussion on ceiling for new project/scheme | Planning Dept gives the budget ceiling In the discussion, the planning department defines the ceiling for the budget preparation for the respective department | Budget ceiling defined for new project/scheme | Department name, annual new capital/project expenditure ceiling, Details of competent authority (HoD, DDO), dates | N | - | - |
3. | Finance Department | Issue Budget Circular along with Budget Calendar Budget Circular is issued to all departments inviting estimates of expenditure (capital) of the respective department | Budget Circular along with budget calendar issued | Department name, annual capital outlay estimates (BE and RE both), Demand for Grant details, Details of competent authority (BCO, DDO), dates | N | - | - |
4. | Estimating Officer / DDO/HoD | Budget Circular | Prepare expenditure (Capital) estimates under the defined ceiling by the planning dept. Estimating officers prepare Budget Estimate for current FY and Revised Estimate for previous FY based on historic trends, projection of future demand and effect of any policy change to be formulated by the department/State | Budget Estimate for current FY and Revised Estimate for previous FY created | Department Name, COA details, estimated capital outlay | Y | Estimate | Inputs for Department Level Proposed Budget CORE: Department Name, COA details, estimated capital outlay ANCILLARY: None |
5. | Estimating Officer / DDO | Budget and Revised Estimates | Upload on IFMS | Budget and Revised Estimates uploaded on IFMS | Department Name, COA details, estimated capital outlay | Y | Estimate |
6. | HoD | Budget and Revised Estimates prepared by DDO | Scrutinize the estimates | Approved/rejected/modified estimates | Department Name, COA details, estimated capital outlay | Y | Estimate | Department Level Proposed Budget CORE: Department Name, COA details, estimated capital outlay ANCILLARY: Remarks for cuts and modifications introduced by HOD, BCO, Planning Department etc. |
7. | AD | Approved estimates by BCO | Scrutinize the estimates | Approved/rejected/modified estimates | Department Name, COA details, estimated capital outlay | Y | Estimate |
8. | Planning Department | Approved estimates by AD | Scrutinize the estimates | Approved/rejected/modified estimates | Department Name, COA details, estimated capital outlay | Y | Estimate |
9. | Finance Department | Approved estimates by PD | Form Budget Finalisation Committee to scrutinize the submitted estimates | Approved/rejected/modified estimates by BFC | Department Name, COA details, estimated capital outlay | Y | Estimate | State Level Proposed Budget CORE: Department Name, COA details, estimated capital outlay ANCILLARY: Remarks for cuts and modifications introduced by BFC, Cabinet etc. |
10a. | Finance Department | Approved estimates by BFC | Prepare budget documents for submission to cabinet for approval FD makes any necessary changes to the received estimates and consolidate department-wise detailed estimates | Budget documents | Department Name,COA, estimated capital outlay | Y | Estimate |
10b. | Finance Department | Approved estimates by BFC | Communicate details of finalized estimates to concerned ADs of departments for information | Consolidated department-wise estimates | Department Name, COA details, estimated capital outlay | Y | Estimate |
11. | Finance Department | Budget documents | Prepare memorandum based on the budget documents and submit to the Cabinet for approval | Memorandum containing all budget documents | Multiple Documents - Annual Financial Statement, Expenditure Budget, Budget At A Glance | N | - | - |
# (1) | Actors (2) | Input (3) | Verb and Noun (4) | Output (5) | Attributes (6) | Does it trigger a fiscal event (7) | Fiscal Event Type (8) | Fiscal Event Sub-Type (9) | Data Attributes for iFIX (10) |
1. | Finance Department | Memorandum containing all budget documents | Present to the Legislative Assembly | Budget speech of the FM | Priorities of the Government, Current status of some important existing schemes, New schemes and programmes to be launched during the ensuing year, Tax/Tariff proposals and reliefs to be granted, if any, Summary of Revised Estimates and Budget Estimates | N | - | - | - |
2. | Finance Department | Memorandum containing all budget documents | General discussion on the budget as a whole and/or on any question of principle or policy involved therein | Tabling of the Budget Documents | Not Applicable | N | - | - | - |
3. | Finance Department | Memorandum containing all budget documents | Vote on Demand for Grants | Voted Demand for Grants | Demand No., Department Name, COA, Amount for BE, RE and Actuals | N | - | - | - |
4. | Finance Department | Demand for Grants | Introduce Appropriation Bill | Appropriation Bill | N | - | - | - |
5. | Finance Department | Appropriation Bill | Obtain Governor’s assent | Appropriation Act | N | - | - | - |
6a. | Finance Department | Appropriation Act | Issue a circular authorising incurring of expenditure as per guidelines contained in the Appropriation Act | Circular containing details as per the Appropriation Act | N | - | - | - |
6b. | Finance Department | Appropriation Act | Issue notification in the Official Gazette | Gazette notification | N | - | - | - |
# (1) | Actors (2) | Input (3) | Verb and Noun (4) | Output (5) | Attributes (6) | Does it trigger a fiscal event (7) | Fiscal Event Type (8) | Fiscal Event Sub-Type (9) | Data Attributes for iFIX (10) |
1a. | Finance Department | Circular containing details as per the Appropriation Act | Upload budget for department on IFMS | Budget by department There is detailed information of allotments placed at the disposal of each department during the budget year | Department name, Department code, COA heads, amount, financial year, authorization authority with details | Y | Estimate | State Level Approved Budget CORE: Department name, Department code, COA heads, amount, financial year ANCILLARY: Authorization authority with details |
1b. | Finance Department | Circular containing details as per the Appropriation Act | Send the minutes of the meeting of BFC to the concerned AD | Minutes of the BFC | Department name, Department code, COA heads, amount, financial year, authorization authority with details | N | - | - |
2. | AD | Minutes of the BFC | Issue orders for DDO-wise estimates | Order to prepare DDO wise estimates | Department name, Department code, COA heads, amount, financial year, authorization authority with details | N | - | - |
3. | HoD | Order to prepare DDO wise estimates | Prepare DDO wise estimates | DDO wise estimates available on IFMS | Department name, Department code, COA heads, amount, financial year, authorization authority with details, DDO Name and Code | Y | Estimate | Approved Estimate | DDO Level Approved Budget CORE: Department name, Department code, COA heads, amount, financial year, DDO Name and Code ANCILLARY: Authorization Authority with Details |
4. | DDO | DDO wise estimates available on IFMS | View allocation on IFMS | DDO receives details of allotments provided for expenditure | Department name, Department code, COA heads, amount, financial year, DDO Name and Code | Y | Estimate | Approved Estimate |
# (1) | Actors (2) | Input (3) | Verb and Noun (4) | Output (5) | Attributes (6) | Does it trigger a fiscal event (7) | Fiscal Event Type (8) | Fiscal Event Sub-Type (9) | Data Attributes for iFIX (10) |
1. | DDO | 60% of first tranche of project/scheme funds utilized | Prepare Utilization Certificate | Utilization Certificate | Department name, department code, Vendor name, project details, work done against total deliverable, amount sanctioned, amount utilized, balance amount, DDO name and DDO code | N | - | - |
2. | HoD/AD | Utilization Certificate | Review utilization certificate | Approved, forward to FD with demand request | Department name, department code, Vendor name, project details, work done against total deliverable, amount sanctioned, amount utilized, balance amount, amount requested (next tranche), DDO name and DDO code | N | - | - |
3. | FD | Utilization Certificate, and demand request | Review utilization certificate and demand request | Approve next tranche Amount reflected on IFMS, visible to HoD | Approved UC + Department name, department code, amount, COA | Y | Demand | Demand Request by DDO CORE: Department name, department code, amount, COA, DDO name and DDO code ANCILLARY: (Approved UC: Department name, department code, Vendor name, project details, work done against total deliverable, amount sanctioned, amount utilized, balance amount, DDO name and DDO code) |
4. | HoD | Approved next tranche information | Inform to respective DDO | Amount reflected on IFMS, visible to DDO | Department name, department code, amount, COA, DDO name and DDO code | Y | Demand |
# (1) | Actors (2) | Input (3) | Verb and Noun (4) | Output (5) | Attributes (6) | Does it trigger a fiscal event (7) | Fiscal Event Type (8) | Fiscal Event Sub-Type (9) | Data Attributes for iFIX (10) |
1. | HoD/DDO | Approved DPR with financial sanction | Invite bids from vendor/contractor for work Review process of tender | Bid invitation | Bid Reference no., Department name, Project details, date of bid invitation, selection/eligibility criteria | N | - | - |
2. | HoD/DDO | Bid invitation | Award work to a vendor/contractor | Work awarded to vendor/contractor | Bid Reference no. , Vendor name, date of award of work, Project details - name, terms and conditions, deliverables expected, vendor account details, Tender value, Contract rate | Y | Plan | Work Allocation to Vendor CORE: Bid Reference No., Contract Reference No., Vendor Name, Vendor account details, Tender value, Contract rate, Date of award, Project Name ANCILLARY: Project Details- Terms and Conditions, Deliverables expected |
3. | HoD/DDO | Contract sign | Create Work order/purchase order | Work Execution start | Bid Reference no., Contract Reference No., Vendor name, date of award of work, project details - name, terms and conditions, deliverables expected, vendor account details, Tender value, Contract rate | Y | Plan |
# (1) | Actors (2) | Input (3) | Verb and Noun (4) | Output (5) | Attributes (6) | Does it trigger a fiscal event (7) | Fiscal Event Type (8) | Fiscal Event Sub-Type (9) | Data Attributes for iFIX (10) |
1. | Vendor | Work execution | Execute work and submit bill with Physical report of assign work | Physical report with invoice bill | Vendor name, date of award of work, project details - terms and conditions, deliverables expected, vendor account details, physical report, tender value, contract rate, invoice amount, date | Y | Bill | Bill Generation by Vendor CORE: Work order reference, Bill date, Name of vendor, Bill amount (Gross, Deduction, Net), Bill type, Bank account details ANCILLARY: (Physical Report: Vendor name, date of award of work, project details - terms and conditions, deliverables expected, vendor account details, physical report, tender value, contract rate, invoice amount, date) |
2. | Vendor | Eligibility criteria (Milestones, MRN, PO) | Submit bill Bill submitted with an invoice covering letter as per the work done on the deliverable designed | Acknowledge bill receipt When the bill is submitted, an acknowledgement is received from the DDO on the invoice covering letter | Work order reference, Bill date, Name of vendor, Bill amount (Gross, Deduction, Net), Bill type, Vendor Bank account details | Y | Bill |
3. | DDO | Vendor bill | Review bill DDO reviews the invoice against set rules/norms (verification process) | Verified vendor bill | Project name, Sanction, Cost, Deliverable, Timeline, Bill amount against work done, Vendor Bank account details | Y | Bill | Bill Generation by DDO CORE: Bill date, Name of vendor, Bill amount (Gross, Deduction, Net), Bill type, COA head, DDO code, Department code, Vendor Bank account details, Reference No. ANCILLARY: Token No., Treasury rules, Bill approval status, Approval or Rejection Remarks, Relevant Fields from Physical Report (Project name, Sanction, Cost, Deliverable, Timeline, Bill amount against work done, Vendor Bank account details) |
4. | DDO | Verified vendor bill | Generate bill (government format) Bill generated by DDO based on verified bill | Bill available in department system Bill available for sharing/uploading on treasury system | Bill date, Name of vendor, Bill amount (Gross, Deduction, Net), Bill type, COA head, DDO code, Department code, Vendor Bank account details . Reference No. | Y | Bill |
5. | Treasury Token Section | Bill received from DDO | Assign token number and audit officer | Bill tagged with token number reflected in Treasury system | Token number (Bill No.), date, gross amount, deduction, net amount, COA head, DDO code, Department code, Vendor Bank account details | Y | Bill |
6. | Treasury Audit Section - auditor, accountant and treasury officer | Bill with token number | Audit bill (as per treasury rules) | Bill status - approved or rejected, with remark | Treasury rules, bill approval status (approved or rejected, with remark) | Y | Bill |
7. | Treasury Audit Section | Rejection criteria | Create intimation (with details for reason for rejection) | Status and rejection criteria is reflected in Department Bill system | Remarks DDO reviews the remarks, rectifies the bill, and sends again to treasury for payment | Y | Bill |
8. | Treasury Payment Section | Bill - approved | Generate payment advice | Payment advice pushed to banking system | Token number, date, gross amount, deduction, net amount, COA head, DDO code, Department code, Vendor Bank account details | Y | Payment | Payment Advice Generation CORE: Token number, date, gross amount, deduction, net amount, COA head, DDO code, Department code, Vendor Bank account details ANCILLARY: Payment Advice Status, Remarks |
9. | RBI/Bank | Payment advice receipt | Validate payment advice | Payment advice status - accepted or rejected | Payment advice status and remarks, if rejected | Y | Payment |
10. | RBI/Bank | Payment advice - accepted | Credit beneficiary account Debit State Govt account | e-scroll pushed to Treasury System | Token number, date, gross amount, deduction, Vendor Bank account details | Y | Payment |
11. | Treasury Audit Section | e-scroll | Generate voucher number | Consolidate e-scrolls for preparation of AG accounts | Voucher number, Date, Amount, COA head, DDO code, Department code | Y | Debit | Payment Completion CORE: Voucher number, Date, Amount, COA head, DDO code, Department code ANCILLARY: None |
12. | RBI/Bank | Payment advice - rejected | Credit suspense head | Status and rejection criteria is reflected in Treasury system | Remarks | Y | Payment | (Included in Ancillary attributes mentioned against Row 8-10 of this process. ) |
13. | Treasury Audit Section | Rejection criteria | Intimation of rejection DDO rectifies the bill and sends again to treasury for payment | Status and rejection criteria is reflected in Department Bill System | Remarks DDO reviews the remarks, rectifies the bill, and sends it again to treasury for payment | Y | Bill | (Included in Ancillary attributes mentioned against Rows 3-7 of this process.) |
# (1) | Actors (2) | Input (3) | Verb and Noun (4) | Output (5) | Attributes (6) | Does it trigger a fiscal event (7) | Fiscal Event Type (8) | Fiscal Event Sub-Type (9) | Data Attributes for iFIX (10) |
1. | DDO (Maker) | Eligibility criteria (Monthly Salary Bill/ reimbursement of TA/Medical and Other Expenditure) | Submit bill Bill submitted by the maker(DDO) | Acknowledge bill receipt When bill is submitted to DDO officer for approval | Department code, DDO Code, Employee name, Bank details, Month, Year, designation, Gross Amount, Net Amount, Deduction Amount, bill type(Salary/TA/Medical and other expense ), Reference No. | Y | Bill | Bill Generation by DDO CORE: Department code, DDO Code, Employee name, Bank details, Month, Year, designation, Gross Amount, Net Amount, Deduction Amount, bill type (Salary/TA/Medical and other expense ), COA (Budget Head), Reference No. ANCILLARY: Token Number, Treasury Rules, Bill Approval Status, Approval or Rejection Remarks |
2. | DDO (Approval) | Revenue bill | Review bill DDO reviews the bill against set rules/norms (verification process) | Verified Revenue bill | Department code, DDO Code, Employee name, Bank details, Month, Year, designation, Gross Amount, Net Amount, Deduction Amount, bill type(Salary/TA/Medical and other expense ), Reference No. | Y | Bill |
3. | DDO | Verified Revenue bill | Generate bill (government format) Bill generated by DDO based on verified bill | Bill available in department system Bill available for sharing/uploading on treasury system | Department code, DDO Code, Employee name, Bank details, Month, Year, designation, Gross Amount, Net Amount, Deduction Amount, bill type (Salary/TA/Medical and other expense), COA (Budget Head) | Y | Bill |
4. | Treasury Token Section | Bill received from DDO | Assign token number and audit officer | Bill tagged with token number reflected in Treasury system | Token number (Bill No.) , date, gross amount, deduction, net amount, COA head, DDO code, Department code, Bank account details | Y | Bill |
5. | Treasury Audit Section - auditor, accountant and treasury officer | Bill with token number | Audit bill (as per treasury rules) | Bill status - approved or rejected, with remark | Treasury rules, bill approval status (approved or rejected, with remark) | Y | Bill |
6. | Treasury Audit Section | Rejection criteria from Treasury Audit | Create intimation (with details for reason for rejection) | Status and rejection criteria is reflected in Department Bill System | Remarks DDO reviews the remarks, rectifies the bill, and sends again to treasury for payment | Y | Bill |
7. | Treasury Payment Section | Bill - approved | Generate payment advice | Payment advice pushed to banking system | Token number, date, gross amount, deduction, net amount, COA head, DDO code, Department code, Bank account details | Y | Payment | Payment Advice Generation CORE: Token number, date, gross amount, deduction, net amount, COA head, DDO code, Department code, Bank account details ANCILLARY: Payment Advice Status, Remarks |
8. | RBI/Bank | Payment advice receipt | Validate payment advice | Payment advice status - accepted or rejected | Payment advice status and remarks, if rejected | Y | Payment |
9. | RBI/Bank | Payment advice - accepted | Debit State Govt account Credit beneficiary account | e-scroll pushed to Treasury System | Token number, date, gross amount, deduction, Bank Account details | Y | Payment |
10. | Treasury Audit Section | e-scroll | Generate voucher number | Consolidate e-scrolls for preparation of AG accounts | Voucher number, Date, Amount, COA head, DDO code, Department code | Y | Debit | Payment Completion CORE: Voucher number, Date, Amount, COA head, DDO code, Department code ANCILLARY: None |
11. | RBI/Bank | Payment advice - rejected | Credit suspense head | Status and rejection criteria is reflected in Treasury system | Remarks | Y | Payment | (Included in Ancillary attributes mentioned against Row 7-9 of this process. ) |
12. | Treasury Audit Section | Rejection criteria from RBI | Create intimation (with details for reason for rejection) DDO rectifies the bill and sends again to treasury for payment | Status and rejection criteria is reflected in Department Bill System | Remarks DDO reviews the remarks, rectifies the bill, and sends it again to treasury for payment | Y | Bill | (Included in Ancillary attributes mentioned against Rows 1-6 of this process.) |
# (1) | Actors (2) | Input (3) | Verb and Noun (4) | Output (5) | Attributes (6) | Does it trigger a fiscal event (7) | Fiscal Event Type (8) | Fiscal Event Sub-Type (9) | Data Attributes for iFIX (10) |
1. | DDO | Receipt (water charges collected) | Create profile DDO logs in to the eReceipt Module of IFMS and creates a profile to start the receipt deposit process | Profile created Profile with all relevant details created which can be edited, enabled, disabled | User name (DDO), Department Name and Code, Treasury and Sub-Treasury Name and Code, COA Head | N | - | - |
2. | DDO | DDO profile | Fill Challan details DDO fills out all necessary details including the type of payment, bank and mode of payment | e-Challan created | DDO details - name, division, etc., Nature of payment, Financial Year, Period (annual, monthly, etc.), COA head, gross amount, net amount, mode of payment | Y | Demand | Challan Created CORE: DDO details - name, division, etc., Nature of payment, Financial Year, Period (annual, monthly, etc.), COA head, gross amount, net amount, mode of payment ANCILLARY: None |
3a. | DDO | e-Challan | Deposit payment - online DDO chooses a payment mode and fills in all required details for making payment | Bill Invoice reflected in IFMS eReceipt | Bank details, date and time of payment, CIN, Reference Number, | Y | Payment | Payment Initiation CORE: DDO details - name, division, etc., Nature of payment, Financial Year, Period (annual, monthly, etc.), COA head, gross amount, net amount, mode of payment ANCILLARY: None |
3b. | DDO | e-Challan | Deposit payment - offline[6] | Challan | Bank details, date and time of payment, CIN, Reference Number, | Y | Payment |
4. | Bank | Deposited Challan | Credit the treasury account | E-scroll generated | UTR No., CIN, date and time of transaction, Department Code, treasury code, COA head, amount | Y | Credit | Collection Completion CORE: DDO details - name, division, etc., Nature of payment, Financial Year, Period (annual, monthly, etc.), COA head, gross amount, net amount, mode of payment ANCILLARY: None |
5. | Treasury | e-scroll | Generate Treasury Challan No. and consolidate all challans | Consolidated report for AG | Treasury challan No. Treasury Challan Date, Amount, COA, Department Code | N | - | - |
# (1) | Actors (2) | Input (3) | Verb and Noun (4) | Output (5) | Attributes (6) | Does it trigger a fiscal event (7) | Fiscal Event Type (8) | Fiscal Event Sub-Type (9) | Data Attributes for iFIX (10) |
1. | DDO | Excess expenditure /Need for supplementary grants | Prepare Statement of need of Supplementary Demand for Grants | Submit Statement of Supplementary Demand for Grants | Department Name and Code, Demand for Grant No., DDO code, COA, Original Appropriation, Actual Expenditure, Supplementary grant (amount) required and reasons for same | Y | Estimate | Estimation of Supplementary Grants and Approval CORE: Department Name and Code, Demand for Grant No., DDO code, COA, Original Appropriation, Actual Expenditure, Supplementary grant (amount) ANCILLARY: Reasons for supplementary grant, comments from BCO, comments from AD, Approval / Rejection Remarks |
2. | BCO | Statement of Supplementary Demand for Grants | Review against budget provision and prepare proposal for Supplementary Grants | Proposal for Supplementary Grants | Department Name and Code, Demand for Grant No., COA, Original Appropriation, Actual Expenditure, Supplementary grant (amount) required and reasons for same, comments of BCO | Y | Estimate |
3. | AD | Proposal for Supplementary Grants | Review the proposal | Approved, submit proposal to FD | Department Name and Code, Demand for Grant No., COA, Original Appropriation, Actual Expenditure, Supplementary grant (amount) required and reasons for same, comments of AD | Y | Estimate |
4. | FD | Proposal for Supplementary Grants | Review the proposal | Approved, send to Legislative Assembly for approval | Department Name and Code, Demand for Grant No., COA, Original Appropriation, Actual Expenditure, Supplementary grant (amount) required | Y | Estimate |
4b. | FD | Proposal for Supplementary Grants | Review the proposal | Rejected, send communication with remarks to AD | Remarks for rejection | Y | Estimate |
5. | Legislative Assembly | Proposal for Supplementary Grants | Review the proposal | Approved, issue communication to FD on Supplementary Demand for Grants | Department Name and Code, Demand for Grant No., COA, Original Appropriation, Actual Expenditure, Approved Supplementary grant (amount) | Y | Estimate | Approved Estimate |
# (1) | Actors (2) | Input (3) | Verb and Noun (4) | Output (5) | Attributes (6) | Does it trigger a fiscal event (7) | Fiscal Event Type (8) | Fiscal Event Sub-Type (9) | Data Attributes for iFIX (10) |
1. | DDO | Excess savings/Need for reappropriation | Prepare Statement of expected savings/Revised Estimates of Expenditure | Submit Statement of expected savings/Revised Estimates of Expenditure | Department Name and Code, Demand for Grant No., DDO code, COA, Original Appropriation, Actual Expenditure, Savings and Amount | Y | Estimate | Excess/Savings Estimation and Approval CORE: Department Name and Code, Demand for Grant No., DDO code, COA, Original Appropriation, Actual Expenditure, Savings/Excess, Amount ANCILLARY: Comments from BCO, Comments from AD, Remarks for rejection |
2. | BCO | Statement of expected savings/Revised Estimates of Expenditure | Review against budget provision and consolidate all Revised Estimates | Approved, submit proposal to AD | Comments from BCO | Y | Estimate |
3. | AD | Consolidate Revised Estimates | Review the Revised Estimates | Approved, submit proposal to FD | Comments from AD | Y | Estimate |
4a. | FD | Consolidated Revised Estimates | Review the proposal | Approved, upload Revised Estimates on IFMS | Department Name and Code, Demand for Grant No., DDO code, COA, Original Appropriation, Actual Expenditure, Approved Savings/Excess, Amount | Y | Estimate |
4b. | FD | Consolidated Revised Estimates | Review the proposal | Rejected, send communication with remarks to AD | Remarks for Rejection | Y | Estimate |
5.[7] | FD | Approved request for savings | Revoke amount under the COA of respective department | Savings surrendered under the COA of respective department | Department Name and Code, Demand for Grant No., COA, Savings, Amount surrendered |
# (1) | Actors (2) | Input (3) | Verb and Noun (4) | Output (5) | Attributes (6) | Does it trigger a fiscal event (7) | Fiscal Event Type (8) | Fiscal Event Sub-Type (9) | Data Attributes for iFIX (10) |
1. | Treasury | e-scroll | Generate Treasury Challan No. and consolidate all challans | Consolidated challans | Treasury challan No. Treasury Challan Date, Amount, COA, Department Code | N | - | - | - |
2. | Treasury | Consolidated challans | Prepare monthly receipt report | Consolidated receipt report sent to AG | Department name and code, total budget provision, receipt in the last month, total receipts till last month of the FY, expected collections remaining | N | - | - | - |
3. | DDO | Prepare daily/bi-weekly/monthly receipt report | Consolidated report sent to BCO | Department name and code, total budget provision, receipt in the last month, total receipts till last month of the FY, expected collections remaining | N | - | - | - |
4. | HoD/AD | Consolidated receipt report | Review consolidated receipt report | Approved, shared with AG | Department name and code, total budget provision, receipt in the last month, total receipts till last month of the FY, expected collections remaining | N | - | - | - |
# (1) | Actors (2) | Input (3) | Verb and Noun (4) | Output (5) | Attributes (6) | Does it trigger a fiscal event (7) | Fiscal Event Type (8) | Fiscal Event Sub-Type (9) | Data Attributes for iFIX (10) |
1. | Treasury Audit Section | e-scroll | Generate voucher number and consolidate all vouchers | Consolidated vouchers | Voucher number, Date, Amount, COA head, DDO code, Department code | N | - | - | - |
2. | Treasury | Consolidated vouchers | Prepare monthly expenditure reports | Consolidated expenditure report sent to AG | Department name and code, COA, total budget provision, expenditure incurred in the last month, total expenditure incurred till last month of the FY, balance remaining | N | - | - | - |
3. | DDO | Prepare monthly expenditure reports | Consolidated expenditure report sent to HoD | Department name and code, total budget provision, expenditure incurred in the last month, total expenditure incurred till last month of the FY, balance remaining | N | - | - | - |
4. | HoD/AD | Consolidated expenditure report | Review consolidated expenditure report | Approved, shared with AG | Department name and code, total budget provision, expenditure incurred in the last month, total expenditure incurred till last month of the FY, | N | - | - | - |
# (1) | Actors (2) | Input (3) | Verb and Noun (4) | Output (5) | Attributes (6) | Does it trigger a fiscal event (7) | Fiscal Event Type (8) | Fiscal Event Sub-Type (9) | Data Attributes for iFIX (10) |
1a. | AG | Consolidated receipt /expenditure report received from Treasury and HoD/AD | Review consolidated receipt/expenditure report | Approved, publish final accounts | <To be Identified> | N | - | - | - |
1b. | AG | Consolidated receipt /expenditure report received from HoD/AD | Review consolidated receipt/expenditure report | Rejected, reconcile accounts with DDO | <To be Identified> | N | - | - | - |
Head | Receipts coding pattern | Expenditure coding pattern |
Major Head | Four digit | Four digit |
Sub major Head | Two digit | Two digit |
Minor Head | Three digit | Three digit |
Sub Head | Two digit | Two digit |
Group Head | Two digit |
Object Head | Two digit | Two digit |
Existing COA | What information can be derived solely from COA | What more needs to be done |
Major Head (e.g. 0202 Education, Sports, Arts and Culture) | Function Segment In some cases: Target Segment | A map of which administrative departments handle which budget codes |
Sub-major head (e.g. 01 General Education) | Function Segment In some cases: Target Segment | A map of which administrative departments handle which budget codes |
Minor Head (e.g. 111 Sarva Siksha Abhiyan) | Programme/Scheme Segment | A map of which administrative departments handle which budget codes |
Sub-head (for schemes under Minor Head) | Programme/Scheme Segment | A map of which administrative departments handle which budget codes |
Detailed Head (sub-scheme) | Programme/Scheme Segment | A map of which administrative departments handle which budget codes |
Object Head | Economic Segment | A second map of current economic classification with new economic classification |
AD | Administrative Department |
ADFA | Assistant Director, Finance & Accounts |
BCO | Budget Controlling Officer |
BFC | Budget Finalisation Committee |
BE | Budget Estimate |
COA | Chart of Accounts |
CS | Central Sector Schemes |
CSS | Centrally Sponsored Schemes |
DDO | Drawing & Disbursement Officer |
DPR | Detailed Project Report |
DWSS | Department of Water Supply and Sanitation |
FD | Finance Department |
FFC | Fifteenth Finance Commission |
HoD | Head of the Department |
IFMS | Integrated Financial Management System |
RE | Revised Estimate |
SNA | Single Nodal Account |
SPS | State Plan Schemes |
Tenant Management
Master Data Management (Chart of Accounts)
Fiscal Project Management
Immutable Fiscal events
Type of events: Bill, Demand, Payment, Receipt (budgetary, cash and accrual events in the budget cycle)
Deduplication, Reversals, Adjustments - Planned
Fiscal Messaging and Subscription (coordination between different entities) - Planned
Fiscal reporting and analytics - Planned
Reference Adaptor
Reference Dashboard
Core service configuration and promotion docs
# | Fiscal Event Sub-Type | Definition | Revenue Receipts | Capital Receipts | Revenue Expenditure | Capital Expenditure |
1 | Estimate | Event resulting in fiscal information containing a high-level view regarding what amount of receipts/ expenditure is expected/ needed. |
2 | Plan | Event resulting in fiscal information containing a detailed view regarding how the estimated receipts/ expenditure will be met/ utilized. |
3 | Demand | Event resulting in fiscal information containing a request for transfer or payment of money into the government account. |
4 | Bill | Event resulting in fiscal information containing a request for transfer or payment of money out of the government account. |
5 | Receipt | Event resulting in fiscal information containing banking transaction initiation details for any fund transferred into the government account. |
6 | Payment | Event resulting in fiscal information containing banking transaction initiation details for any fund transferred out of the government account. |
7 | Debit | Event resulting in fiscal information containing banking transaction completion details for any fund transferred out of the government account. |
8 | Credit | Event resulting in fiscal information containing banking transaction completion details for any fund transferred into the government account. |
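To make these sub-types concrete, here is a sketch of how a single fiscal event (a Demand, in this case) might be pushed to the fiscal event service. The endpoint path and field names are assumptions for illustration only; the published API spec remains the authoritative contract.

```bash
# Illustrative push of a "Demand" fiscal event; the path and field names are assumptions.
curl -X POST "http://localhost:8080/fiscal-event-service/v1/events/_publish" \
  -H "Content-Type: application/json" \
  -d '{
        "requestHeader": { "ts": 1653264000000, "version": "1.0.0", "msgId": "demo-001" },
        "events": [{
          "tenantId": "pb.dwss",
          "eventType": "Demand",
          "eventTime": 1653264000000,
          "referenceId": "DEMAND-2022-0001",
          "amountDetails": [{ "amount": 1000.0, "coaId": "<COA reference from master data>" }]
        }]
      }'
```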
DIGIT Exchange functions as a connector bridging services deployed across diverse domains. Its primary role involves signing and verifying exchange messages and generating events for ingestion into Elasticsearch for dashboard visualisation.
Signs the exchange messages and sends them to respective systems according to receiver ID and current domain.
Verifies data received from other domains, converts it to a JSON object, and forwards it to the program service.
Pushes the data to Kafka for dashboarding and to make the calls asynchronous.
In case of any exception, sends a reply to the service that initiated the call (an illustrative message envelope is sketched below).
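A rough sketch of what an exchange envelope could look like, based on the behaviour described above (sign, route by receiver ID, verify, forward). The endpoint path and field names are assumptions for illustration, not the exact DIGIT Exchange contract.

```bash
# Hypothetical exchange call; the path and envelope fields are illustrative assumptions.
curl -X POST "http://localhost:8080/digit-exchange/v1/exchange/_send" \
  -H "Content-Type: application/json" \
  -d '{
        "signature": "<detached signature computed over the message>",
        "header": {
          "message_id": "msg-001",
          "sender_id": "program@sender-domain",
          "receiver_id": "program@receiver-domain",
          "message_type": "disburse"
        },
        "message": { "note": "domain payload goes here" }
      }'
```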
/digit-exchange/
TBD
The MUKTA iFIX Adapter service is designed to facilitate communication between the Expense Service and the Program Service. It acts as a mediator, listening for payment creation events from the Expense Service, enriching the payment request data, and generating disbursement requests. These disbursement requests are then sent to the Program Service for further processing.
Expense Service
Expense Calculator Service
MDMS Service
Bank Account Service
Individual Service
Organization Service
User Service
Program Service
Encryption Service
The creation of a disbursement request involves listening to payment creation events on a designated topic. When a payment is created, the adapter processes the event, extracts relevant information, and forwards the enriched disbursement request to the Program Service for further processing.
In case of a failure in the payment topic, we also have the option to manually create a disbursement using the adapter by providing the payment number.
We can search for the created disbursements.
After forwarding the disbursement to the program service, it undergoes sanction enrichment. Subsequently, it is forwarded to the Digit Exchange service, establishing a connection between two servers. Once a response is received from the IFMS system, the disbursement undergoes further enrichment and is sent back to the Mukta Adapter. The adapter then updates the payment status based on the statuses received in the disbursement.
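For the manual fallback and the disbursement search mentioned above, the calls could look roughly like the sketch below. The adapter's endpoint paths and request fields are assumptions here; check the deployed adapter's API contract before using them.

```bash
# Hypothetical manual disbursement creation from a payment number (illustrative path/fields).
curl -X POST "http://localhost:8080/mukta-ifix-adapter/v1/disburse/_create" \
  -H "Content-Type: application/json" \
  -d '{ "tenantId": "pg.citya", "paymentNumber": "PAY-2024-000123" }'

# Hypothetical search for disbursements created by the adapter.
curl -X POST "http://localhost:8080/mukta-ifix-adapter/v1/disburse/_search" \
  -H "Content-Type: application/json" \
  -d '{ "tenantId": "pg.citya", "paymentNumber": "PAY-2024-000123" }'
```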
/mukta-ifix-adapter/
TBD
TBD
Amazon Elastic Kubernetes Service (EKS) is one of the AWS services for deploying, managing, and scaling distributed, containerized workloads. Here we provision the EKS cluster on AWS from the ground up in an automated way (infra-as-code) using Terraform, and then deploy the DIGIT-iFIX services config-as-code using Helm.
Know about EKS: https://www.youtube.com/watch?v=SsUnPWp5ilc
Know what is terraform: https://youtu.be/h970ZBgKINg
An AWS account with admin access to provision the EKS service. You can always subscribe to a free AWS account to learn the basics and experiment, but there is a limit to what is offered for free; for this demo you need a commercial subscription to the EKS service.
Install kubectl on your local machine; it helps you interact with the Kubernetes cluster.
Install Helm, which helps you package the services along with the configurations, environment variables, secrets, etc. into Kubernetes manifests.
Install Terraform (version 0.14.10) for infra-as-code (IaC) to provision cloud resources as code with the desired resource graph; it also helps destroy the cluster in one go.
Install the AWS CLI on your local machine so that you can use AWS CLI commands to provision and manage the cloud resources in your account.
Install the AWS IAM Authenticator, which helps authenticate your connection from your local machine so that you can deploy DIGIT services. (A quick way to verify these installations is shown below.)
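A quick sanity check once the prerequisites are installed (assuming the binaries are on your PATH):

```bash
kubectl version --client        # kubectl installed locally
helm version                    # Helm installed
terraform version               # should report 0.14.10 as noted above
aws --version                   # AWS CLI installed
aws-iam-authenticator version   # IAM authenticator installed
```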
Use the AWS IAM User credentials provided for the Terraform (Infra-as-code) to connect with your AWS account and provision the cloud resources.
You'll get a Secret Access Key and Access Key ID. Save them safely.
Open the terminal and run the following command once the AWS CLI is installed; this saves the credentials. Provide the credentials when prompted, and you can leave the region and output format blank.
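The command referred to above is the standard AWS CLI configuration command; it prompts for the credentials interactively:

```bash
aws configure
# AWS Access Key ID [None]: <your Access Key ID>
# AWS Secret Access Key [None]: <your Secret Access Key>
# Default region name [None]:     (can be left blank)
# Default output format [None]:   (can be left blank)
```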
The above creates the following file on your machine: /Users/.aws/credentials
Before we provision the cloud resources, we need to understand and be sure about what resources need to be provisioned by Terraform to deploy DIGIT. The following picture shows the various key components (EKS, worker nodes, Postgres DB, EBS volumes, load balancer).
Considering the above deployment architecture, the following is the resource graph that we are going to provision using terraform in a standard way so that every time and for every environment, it'll have the same infra.
EKS Control Plane (Kubernetes Master)
Worker node group (VMs with the estimated number of vCPUs, memory)
Node pools (ifix)
EBS Volumes (persistent volumes)
RDS (Postgresql)
VPCs (private network)
Users to access, deploy and read-only
Ideally, one would write the Terraform script from scratch using this doc.
Here, we have already written the Terraform script that provisions the production-grade DIGIT infra; it can be customized with the specified configuration.
Let's clone the iFix-DevOps GitHub repo, where the Terraform script to provision the EKS cluster is available; the structure of the files is shown below.
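For instance, the clone could look like this (the repository URL is assumed from the repo name; adjust the organisation and branch to the ones you actually use):

```bash
# Clone the DevOps repo and move into the sample EKS terraform directory
git clone https://github.com/egovernments/iFix-DevOps.git
cd iFix-DevOps/Infra-as-code/terraform/sample-eks
```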
Example:
VPC Resources:
VPC
Subnets
Internet Gateway
Route Table
EKS Cluster Resources:
IAM Role to allow EKS service to manage other AWS services
EC2 Security Group to allow networking traffic with EKS cluster
EKS Cluster
EKS Worker Nodes Resources:
IAM role allowing Kubernetes actions to access other AWS services
EC2 Security Group to allow networking traffic
Data source to fetch the latest EKS worker AMI
AutoScaling Launch Configuration to configure worker instances
AutoScaling Group to launch worker instances
Database
Configuration in this directory creates a set of RDS resources including DB instance, DB subnet group, and DB parameter group.
Storage Module
Configuration in this directory creates EBS volumes and attaches them.
The following main.tf will create an S3 bucket to store the state of every execution and keep track of it.
iFix-DevOps/Infra-as-code/terraform/sample-eks/remote-state
2. The following main.tf contains the detailed resource definitions that need to be provisioned; please have a look at it.
Dir: iFix-DevOps/Infra-as-code/terraform/sample-eks
Define your configurations in variables.tf and provide the environment-specific cloud requirements; the same Terraform template can then be used to customize the configurations.
The values given below must be provided in the following files. The blank ones will prompt for input during execution.
variables.tf
Important: Create your own Keybase key before you run Terraform.
Use https://keybase.io/ to create your own PGP key. This creates both public and private keys on your machine. Upload the public key to the Keybase account that you have just created, give it a name and make sure you reference it in your Terraform configuration. This allows the encryption of all sensitive information.
Example - a keybase user (in eGov's case, "egovterraform") needs to be created and the public key uploaded here - https://keybase.io/egovterraform/pgp_keys.asc
You can use the portal to decrypt your secret key. To decrypt the PGP message, upload the PGP message, the PGP private key and the passphrase.
Now that we know what the Terraform script does, the resource graph it provisions and what custom values should be given for your environment, let's run the Terraform scripts to provision the infrastructure required to deploy DIGIT on AWS.
First, cd into the following directory and run the following commands one by one, watching the output closely.
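A minimal sketch of the commands for this step, assuming the directory layout described above (create the remote state bucket first, then the cluster resources; adjust the paths to your checkout):

```bash
# 1. Create the S3 bucket that stores the Terraform state.
cd iFix-DevOps/Infra-as-code/terraform/sample-eks/remote-state
terraform init
terraform plan
terraform apply

# 2. Provision the EKS cluster and the rest of the resource graph.
cd ..
terraform init
terraform plan
terraform apply

# Inspect what was created.
terraform output
```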
Upon successful execution, the following resources get created and can be verified with the command "terraform output".
s3 bucket: to store terraform state.
Network: VPC, security groups.
IAM users auth: uses Keybase to create the admin, deployer and read-only users. Use https://keybase.io/ to create your own PGP key; this creates both public and private keys on your machine. Upload the public key to the Keybase account that you have just created, give it a name and make sure you reference it in your Terraform configuration. This allows all sensitive information to be encrypted.
Example: a keybase user (in eGov's case, "egovterraform") needs to be created and the public key uploaded here - https://keybase.io/egovterraform/pgp_keys.asc
You can use the portal to decrypt your secret key. To decrypt the PGP message, upload the PGP message, the PGP private key and the passphrase.
EKS cluster: with master(s) & worker node(s).
Storage(s): for es-master, es-data-v1, es-master-infra, es-data-infra-v1, zookeeper, kafka, kafka-infra.
2. Use this link to get the kubeconfig from EKS so that you can connect to the cluster from your local machine and deploy DIGIT services to it.
3. Verify that you are able to connect to the cluster by running the following command.
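A minimal sketch, assuming the cluster name and region configured in your variables.tf (both are placeholders here):

```bash
# Fetch the kubeconfig for the newly created EKS cluster.
aws eks update-kubeconfig --name <your-cluster-name> --region <your-region>

# Verify connectivity: the worker nodes of the cluster should be listed as Ready.
kubectl get nodes
```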
Voila! All set - now you can proceed with Deploy Product.
The Program Service handles all the financial transactions like sanction management, fund allocation and disbursement of funds.
Users can establish programs within which these transactions take place. The service receives messages from the adapter service, validates them and forwards them to digit-exchange, which sends them to the IFMS system. In case of any validation failure, it responds with an error status and message. Additionally, the service maintains records of sanctioned, allocated and available amounts for disbursement.
DIGIT Exchange
MDMS Service
IdGen Service
Creates a Program to enable further financial transactions.
Creates on-sanction when a sanction is received from the IFMS system, maintains the allocated and available amounts for disbursal for a particular sanction, and forwards the sanction to the client server.
Creates on-allocation when allocations are received from the IFMS system, updates the allocated and available amounts for the given sanction, and forwards the allocation to the client server.
Creates a disbursement, deducts the available amount and forwards it to the IFMS for disbursement. On failure, it adds the amount back to the available amount in the sanction.
/program-service/
TBD
iFIX Adapter enables existing or new source systems to integrate seamlessly with iFIX. It has been developed as a reference implementation for developers of source systems who want to integrate their departmental system with iFIX.
The iFix Adapter works as a mediator between iFIX and its clients. It receives requests from the client system and converts the data into the iFIX fiscal event or associated formats.
Before you proceed with the configuration, make sure the following pre-requisites are met -
Java 8
Kafka server is up and running
PSQL server is running
Redis
Following services should be up and running:
Client Service Like mgramseva-ifix-adapter
Target service IFIX- fiscal-event-service
Target Service IFIX-keycloak
iFIX client requests are pushed to iFIX.
The auth token is fetched from Keycloak and cached. The token is re-fetched 5 minutes before expiry.
Every push to iFIX is recorded with its HTTP status:
2xx statuses are considered success
4xx statuses are marked as client errors and reported back to the client
5xx statuses are resubmitted by the scheduler
Update the Keycloak credentials (client-id and secret) in the environment file.
Map the COA in HeadCodeToCoaMapping.yml.
Map the project in ProjectMapping.yml.
Deploy the latest version of ifix-reference-adapter.
Update the Keycloak credentials in dev.yaml, qa.yaml and prod.yaml according to the environment.
The DIGIT Exchange Service is a robust data interchange platform designed to facilitate the seamless and secure exchange of digital information between two endpoints. Built with a fixed schema for headers and dynamic messaging capabilities, this service ensures reliable communication while prioritizing data integrity and confidentiality.
DIGIT-exchange can implement any service if that has the same request structure as the program.
Base Path: /digit-exchange/
API spec YAML is here. Click below to view it in Swagger Editor.
TBD
# | Checklist | Sign-off Status | Reference link | Owner | Reviewer | Remarks |
---|---|---|---|---|---|---|
This object captures the fiscal information of program service to exchange between domains.
Field | Type | Description |
---|---|---|
Header to exchange information between different domains
Field | Type | Description |
---|---|---|
Captures the codes and type and status of the fiscal message
Field | Type | Description |
---|---|---|
The model extends exchangeCode and captures specific attributes used for the program
Field | Type | Description |
---|---|---|
The model extends exchangeCode and captures specific attributes used for sanction
Field | Type | Description |
---|---|---|
The model extends exchangeCode and captures specific attributes used for allocation
Field | Type | Description |
---|---|---|
The model extends exchangeCode and captures specific attributes used for disbursement
Choose your infra type and provision the necessary infra before you actually deploy the services
iFIX is a microservices-based distributed cloud-native application. Each of these context-specific microservices is dockerized and deployed on Kubernetes infrastructure.
It is essential to understand some of the key concepts, benefits and best practices of the Kubernetes platform before we understand the deployment of the iFIX.
Know the basics of Kubernetes: https://www.youtube.com/watch?v=PH-2FfFD2PU&t=3s
Know the basics of kubectl commands
Know Kubernetes manifests: https://www.youtube.com/watch?v=ohSUtEfDefc
Know how to manage environment values and secrets of any service deployed in Kubernetes https://www.youtube.com/watch?v=OW244LxB4oI
Know how to port forward to a pod running inside k8s cluster and work locally https://www.youtube.com/watch?v=TT3nd5n5Yus
Know sops to secure your keys/creds: https://www.youtube.com/watch?v=DWzJ87KbwxA
Choose the target infra type and follow the instructions to set up a Kubernetes cluster before moving on to the deployment.
Before we begin the deployment, it is important to understand the deployment architecture that starts from the source code to the production-ready stage. Deploying and managing Kubernetes have emerged as a streamlined way to deploy containers in the cloud infrastructure. When running Kubernetes at scale, managing, operating, and scaling its infrastructure to maximize cluster utilization can be challenging. There are too many parameters the development team needs to manage and configure. This includes selecting the best instance type and size, determining when to scale up or down, and making sure all of the containers are scheduled and running on the best instances — and that is even before starting to think about cost resource optimization.
The simplest way to get started with the deployment process is to manage deployment configuration as code. Each service deployment configuration is defined as Helm charts and deployed into the Kubernetes cluster. We can collocate the deployment-as-code as source code, leveraging all the benefits of source control including change tracking and branching and then packaging it. The source code repo below contains the deployment-as-code details for iFIX.
Use the command below to clean up the setup cluster. This deletes the entire cluster and other cloud resources that were provisioned for the iFix Infra Setup.
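A minimal sketch of the cleanup, assuming the same Terraform directory that was used to provision the cluster:

```bash
cd iFix-DevOps/Infra-as-code/terraform/sample-eks
terraform destroy
```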
All done - the infra on local/cloud and the deployment of iFIX into the Kubernetes cluster are complete.
Infrastructure Setup
iFIX is a microservices-based distributed cloud-native application. The microservices streamline processes to meet outcomes at scale and speed. Each of the microservices is dockerized and deployed on Kubernetes infrastructure.
It is essential to understand some of the key concepts, benefits and best practices of the Kubernetes platform before we understand the deployment of the iFIX.
Know the basics of Kubernetes: https://www.youtube.com/watch?v=PH-2FfFD2PU&t=3s
Know the basics of kubectl commands
Know Kubernetes manifests: https://www.youtube.com/watch?v=ohSUtEfDefc
Know how to manage env values, and secrets of any service deployed in Kubernetes https://www.youtube.com/watch?v=OW244LxB4oI
Know how to port forward to a pod running inside the k8s cluster and work locally https://www.youtube.com/watch?v=TT3nd5n5Yus
Know sops to secure your keys/creds: https://www.youtube.com/watch?v=DWzJ87KbwxA
Choose the target infra type and follow the instructions to set up a Kubernetes cluster before moving on to the deployment.
Before we begin the deployment, it is important to understand the deployment architecture that starts from the source code to the production-ready stage. Deploying and managing Kubernetes have emerged as a streamlined way to deploy containers in the cloud infrastructure. When running Kubernetes at scale, managing, operating, and scaling its infrastructure to maximize cluster utilization can be challenging. There are too many parameters the development team needs to manage and configure. This includes selecting the best instance type and size, determining when to scale up or down, and making sure all of the containers are scheduled and running on the best instances — and that is even before starting to think about cost resource optimization.
The simplest way to get started with the deployment process is to manage deployment configuration as code. Each service deployment configuration is defined as Helm charts and deployed into the Kubernetes cluster. We can collocate the deployment-as-code as source code, leveraging all the benefits of source control including change tracking and branching, and then package it. So below is the source code repo that contains all the deployment-as-code for iFIX.
Clean up the cluster Setup using the command below, if required. This deletes the entire cluster and other cloud resources that were provisioned for the iFix Infra Setup.
All done - the infra on local/cloud and the deployment of iFIX into the Kubernetes cluster are complete.
iFix Infra Setup & Deployment
The Azure Kubernetes Service (AKS) is one of the Azure services used for deploying, managing and scaling any distributed and containerized workloads. Here, we provision the AKS cluster on Azure from the ground up in an automated way (infra-as-code) using Terraform, and then deploy the DIGIT-iFIX services (config-as-code) using Helm.
This quickstart assumes a basic understanding of Kubernetes concepts. For more information, see the Kubernetes core concepts documentation for AKS.
If you don't have an Azure subscription, create a free account before you begin.
Use the Bash environment in Azure Cloud Shell.
If you prefer, install the Azure CLI to run CLI reference commands locally.
If you're using a local installation, sign in to the Azure CLI by using the az login command. To finish the authentication process, follow the steps displayed in your terminal. For additional sign-in options, see the Azure CLI sign-in documentation.
When you're prompted, install Azure CLI extensions on first use. For more information about extensions, see the Azure CLI extensions documentation.
Run az version to find the version and dependent libraries that are installed. To upgrade to the latest version, run az upgrade.
This article requires version 2.0.64 or greater of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
Make sure the identity you are using to create your cluster has the appropriate minimum permissions. For more details on access and identity for AKS, see the AKS access and identity documentation.
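A minimal sketch of the sign-in and version checks mentioned above (standard Azure CLI commands):

```bash
az login      # authenticate; follow the browser or device-code prompts
az version    # show the installed CLI version and its dependent libraries
az upgrade    # upgrade the Azure CLI to the latest version, if needed
```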
Install kubectl on your local machine so that you can interact with the Kubernetes cluster.
Install Helm to package the services along with the configurations, environment variables, secrets, etc. into Kubernetes manifests.
Install Terraform (version 0.14.10) for Infra-as-Code (IaC) to provision cloud resources as code with the desired resource graph; it also helps destroy the cluster in one go.
Note: Run the commands as administrator if you plan to run the commands in this quickstart locally instead of in Azure Cloud Shell.
Before we provision the cloud resources, we need to understand and be sure about what resources need to be provisioned by Terraform to deploy DIGIT. The following picture shows the various key components (AKS, worker nodes, PostgreSQL DB, volumes, load balancer).
Considering the above deployment architecture, the following is the resource graph that we are going to provision using Terraform in a standard way so that every environment gets the same infrastructure every time.
AKS (Azure Kubernetes Service) control plane (Kubernetes master)
Worker node group (VMs with the estimated number of vCPUs and memory)
Volumes (persistent volumes)
PostgreSQL database
Virtual Network
Users to access, deploy and read-only
Here, we have already written the Terraform script that provisions production-grade DIGIT infrastructure; it can be customized with the specified configuration.
The following main.tf contains the detailed resource definitions that need to be provisioned. Please take a look at it.
Dir: iFix-DevOps/Infra-as-code/terraform/aks-ifix-dev
Define your configurations in variables.tf and provide the environment-specific cloud requirements so that the same Terraform template can be used to customize the configurations.
The following are the values that you need to provide in the files below; the blank ones will prompt for input during execution.
Now that we know what the Terraform script does, the resource graph it provisions and what custom values should be given for your environment, let's run the Terraform scripts to provision the infrastructure required to deploy DIGIT on Azure.
First, cd into the following directory and run the following commands one by one, watching the output closely.
Upon successful execution, the following resources get created and can be verified with the command "terraform output".
Network: Virtual Network.
AKS cluster: with nodepool(s), master(s) & worker node(s).
Storage(s): for es-master, es-data-v1, es-master-infra, es-data-infra-v1, zookeeper, kafka, kafka-infra.
Download the credentials and configure the Kubernetes CLI to use them.
3. Finally, verify that you are able to connect to the cluster by running the following command.
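A minimal sketch, assuming the resource group and cluster name from your Terraform configuration (placeholders here):

```bash
# Download the AKS credentials and merge them into your local kubeconfig.
az aks get-credentials --resource-group <your-resource-group> --name <your-aks-cluster>

# Verify connectivity: the cluster nodes should be listed as Ready.
kubectl get nodes
```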
iFIX Quickstart - not for production use
The Quickstart installation helps you jump-start the basic iFix installation with limited functionalities.
iFix is a distributed microservice-based platform that comprises many containerized services. Depending upon the required features, the specific services can be run on any container-supported orchestration platform like docker-compose, Kubernetes, etc.
The Quickstart guide covers the installation steps for the basic services needed to get the platform up. Before setting up iFix, create a lightweight Kubernetes cluster called k3d on a local machine with the specified H/W requirements. The H/W requirements are listed below; make sure they are met before proceeding further.
To provision the lightweight Kubernetes cluster, follow the instructions below for your OS and install k3d on your machine.
min 4 vCPUs (recommended 8)
min 8GiB of RAM (recommended 16)
min 30GiB of HDD (recommended 30+)
Linux distribution running in a VM or bare metal
Ubuntu 18.04 or Debian 10 (VM or bare metal)
Install k3d on Linux
Open a terminal and install k3d on Linux using the command below.
OSX or Mac
A local Kubernetes cluster enabled
Install k3d on Mac
In the terminal, install k3d on Mac using Homebrew (available for macOS) with the command below.
Windows 10 or above
Docker needs to be installed
Install k3d on Windows
Install Chocolatey, a package manager for Windows
Install Gitbash as an alternative command prompt that allows most Linux commands on Windows
Open Gitbash and install k3d on Windows using the command below.
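The install commands referenced above, as a hedged sketch (the install-script URL and package names follow the upstream k3d documentation and may change between releases):

```bash
# Linux (or Gitbash on Windows): install k3d via the official install script.
curl -s https://raw.githubusercontent.com/k3d-io/k3d/main/install.sh | bash

# macOS: install k3d via Homebrew.
brew install k3d

# Windows (alternative): install k3d via Chocolatey.
choco install k3d
```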
Once the above prerequisites are met, run the following tasks depending upon your OS.
Login/SSH into the machine, go to the terminal/command prompt and run the following commands as an admin user.
Create a /kube directory and change its permissions so that it can be used as a persistent data mount. This means data from all container logs will be stored here.
Create a cluster with a single master node and 2 agents (worker nodes) and mount the pre-created directory (for data persistence).
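A hedged sketch of the directory and cluster-creation steps above, assuming k3d v4 or later (the cluster name and mount path are placeholders):

```bash
# Create the directory used for persistent data and make it writable.
sudo mkdir -p /kube
sudo chmod 777 /kube

# Create a cluster with one server (master) and two agents (worker nodes),
# mounting the pre-created directory into all nodes for data persistence.
k3d cluster create ifix-quickstart --servers 1 --agents 2 --volume /kube:/kube@all
```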
When cluster creation is successful, get the kubeconfig file; it allows you to connect to the cluster at any time.
Verify the cluster creation by running the following commands from your local machine where kubectl is installed. It gives you a sample output as below.
You can verify the worker nodes created by using the following command.
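A minimal sketch of retrieving the kubeconfig and verifying the nodes (cluster name as above):

```bash
# Print the kubeconfig for the cluster (k3d also merges it into ~/.kube/config by default).
k3d kubeconfig get ifix-quickstart

# Verify the cluster: one server node and two agent (worker) nodes should show as Ready.
kubectl get nodes
```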
Once the above steps are completed successfully, your cluster is up and running and you can proceed with the DIGIT deployment.
Now that we have the infra set up, we can proceed with the DIGIT deployment. The following tools need to be installed on the machine before proceeding with the DIGIT services deployment.
What we'll deploy in Quickstart:
iFix core platform services
After cloning the repo, cd into the iFix-DevOps folder and type the "code ." command; it opens the visual editor with all the files from the iFix-DevOps repo.
Once the prerequisite setup is complete, go to the following repo, run the command and follow the instructions.
You can now check the status of the DIGIT application from the command prompt/terminal by using the command below.
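A minimal sketch of the status check, assuming the services were deployed into a dedicated namespace (the namespace name is a placeholder):

```bash
# List the iFIX pods and confirm they are in the Running state.
kubectl get pods -n <ifix-namespace>

# Check the services and their cluster/external endpoints.
kubectl get svc -n <ifix-namespace>
```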
iFix Deployment
Once the infra (Kubernetes cluster) is set up, the deployment process starts.
Pipeline as code is a practice of defining deployment pipelines through source code, such as Git. Pipeline as code is part of a larger “as code” movement that includes infrastructure as code. Teams can configure builds, tests, and deployment in code that is trackable and stored in a centralized source repository. Teams can use a declarative approach or a vendor-specific programming language, such as Jenkins and Groovy, but the premise remains the same.
A pipeline as code file specifies the stages, jobs, and actions for a pipeline to perform. Because the file is versioned, changes in pipeline code can be tested in branches with the corresponding application release.
The pipeline as code model of creating continuous integration pipelines is an industry best practice, but deployment pipelines used to be created very differently.
The deployment process has 2 stages and 2 modes. Let's look at the modes first and then the stages.
Essentially, iFix deployment means that we need to generate Kubernetes manifests for each individual service. We use a tool called Helm, which is an easy, effective and customizable packaging and deployment solution. Depending on where and in which environment you initiate the deployment, there are 2 modes in which you can deploy.
From the local machine - whatever we have been trying in this sample exercise so far.
Advanced, via CI/CD like Jenkins - depending on how you want to set up your CI/CD and the expertise available, the steps will vary. However, here you can find how we at eGov have set up an exemplar CI/CD on Jenkins, where the pipelines are created automatically without any manual intervention.
Essentially, there are 2 stages that should allow you to use the full potential of DeploymentConfig and pipeline-as-code.
Stage 1: Clone the DevOps repo and choose your iFix product branch.
Prepare an <env.yaml> master config file; you can name this file as you wish. It will have the following configurations, and this env file needs to be in line with your cluster name.
each service global, local env variables
credentials, secrets (You need to encrypt them using sops and create a <env>-secret.yaml separately)
Number of replicas/scale of individual services (Depending on whether dev or prod)
mdms, config repos (Master Data, ULB, Tenant details, Users, etc)
sms g/w, email g/w, payment g/w
GMap key (In case you are using Google Map services in your PGR, PT, TL, etc)
S3 Bucket for Filestore
URL/DNS on which the DIGIT will be exposed
SSL Certificate for the above URL
End-points configs (Internal/external)
Stage 2: Run the ifix_setup deployment script and simply answer the questions that it asks
CI/CD setup
After the infra setup (Kubernetes cluster), we start by deploying Jenkins and the kaniko-cache-warmer.
Sub Domain to expose CI/CD URL
GitHub Oauth App
Docker hub account details (username and password)
SSL Certificate for the sub-domain
Prepare a <ci.yaml> master config file and <ci-secrets.yaml>; you can name these files as you wish. They will have the following configurations.
credentials, secrets (You need to encrypt them using sops and create a ci-secret.yaml separately)
Check and update the ci-secrets.yaml details (like the GitHub Oauth app clientId and clientSecret, GitHub user details gitReadSshPrivateKey and gitReadAccessToken, etc.)
To create the Jenkins namespace, mark this flag true
Add your env's kubeconfigs under kubConfigs, like https://github.com/misdwss/iFix-DevOps/blob/mgramseva/deploy-as-code/helm/environments/ci-secrets.yaml#L19
The kubeConfig env name and the deploymentJobs name from ci.yaml should be the same
Update the CIOps and DIGIT-DevOps repo names with your forked repo names and provide read-only access for the GitHub user to those repos.
SSL Certificate for the sub-domain
You have launched the Jenkins. You can access the same through your sub-domain which you configured in ci.yaml.
The Jenkins CI pipeline is configured and managed 'as code'.
Example URL - https://<Jenkins_domain>
Since there are many services and the development code is part of various git repos, you need to understand the concept of cicd-as-service which is open-sourced. This page also guides you through the process of creating a CI/CD pipeline.
As a developer - To integrate any new service/app to the CI/CD below is the starting point:
Once the desired service is ready for integration, decide the service name, the type of service and whether DB migration is required or not. While you commit the source code of the service to the git repository, the following file should be added with the relevant details mentioned below:
build-config.yml - it is present under the build directory in each repository.
This file contains the below details which are used for creating the automated Jenkins pipeline job for your newly created service.
While integrating a new service/app, the above content needs to be added in the build-config.yml file of that app repository. For example: If we are onboarding a new service called egov-test, then the build-config.yml should be added as mentioned below.
If a job requires multiple images to be created (DB Migration) then it should be added as below,
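As a hypothetical sketch only (the key names mirror typical existing entries; copy an existing block from the repository's build-config.yml rather than trusting this verbatim), the addition could look like this:

```bash
# Hypothetical: append a build entry for the new "egov-test" service to build/build-config.yml.
cat >> build/build-config.yml <<'EOF'
  - name: builds/core-services/egov-test
    build:
      - work-dir: core-services/egov-test
        image-name: egov-test
      # A second image entry is added only when DB migration is required:
      - work-dir: core-services/egov-test/src/main/resources/db
        image-name: egov-test-db
EOF
```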
Note - If a new repository is created then the build-config.yml should be created under the build folder and then the config values are added to it.
The git repository URL is then added to the Job Builder parameters
When the Jenkins job => job builder is executed, the CI pipeline gets created automatically based on the above details in build-config.yml. E.g. an egov-test job will be created under the core-services folder in Jenkins because the build-config was edited under core-services, and it should be on the "master" branch only. Once the pipeline job is created, it can be executed for any feature branch with build parameters (specifying which branch to build - master or any feature branch).
As a result of the pipeline execution, the respective app/service docker image will be built and pushed to the Docker repository.
If git repository URL is available build the Job-Builder Job
If the git repository URL is not available ask the Devops team to add it.
The services are deployed and managed on a Kubernetes cluster in cloud platforms like AWS, Azure, GCP, OpenStack, etc. Here, we use helm charts to manage and generate the Kubernetes manifest files and use them for further deployment to the respective Kubernetes cluster. Each service is created as a chart, which will have the below-mentioned files in it.
To deploy a new service, we need to create a helm chart for it. The chart should be created under the charts/helm directory in the iFix-DevOps repository.
We have an automatic helm chart generator utility that needs to be installed on the local machine. The utility prompts for user inputs about the newly developed service (app specifications) and creates the helm chart with the configuration values based on the inputs provided.
Name of the service? test-service
Application Type? NA
Kubernetes health checks to be enabled? Yes
Flyway DB migration container necessary? No
Expose service to the internet? Yes
Route through API gateway [zuul]? No
Context path? hello
The generated chart will have the following files.
This chart can also be modified further based on user requirements.
The deployment of manifests to the Kubernetes cluster is made very simple and easy. We have Jenkins jobs for each state, and they are environment-specific. We need to provide the image name or the service name in the respective Jenkins deployment job.
The deployment Jenkins job internally performs the following operations,
Reads the image name or the service name given and finds the chart that is specific to it.
Generates the Kubernetes manifests files from the chart using the helm template engine.
Execute the deployment manifest with the specified docker image(s) to the Kubernetes cluster.
Multiple reference applications built on the iFIX platform demonstrate its potential capabilities. These applications or exemplars facilitate the resolution of issues related to fiscal management of projects and enable real-time, fiscal visibility across the projects.
The current exemplars have been developed in collaboration with the Punjab State Government - Department of Water Supply and Sanitation (DWSS).
DWSS has developed more than 8500 water projects across Punjab. After the construction of these projects, the operations and maintenance of these are done by the GPWSC (Gram Panchayat Water Scheme Committee) consisting of members from the local bodies. For the projects that are handed over, GPWSCs are responsible for setting up the water charges, collecting the water charges from the households and managing the expenditure (O&M) for the project.
If, for various reasons, a GPWSC is unable to fiscally manage the project, the asset deteriorates, water supply to the households is adversely impacted, vendors go unpaid and arrears pile up. For example, the electricity bill, which accounts for 75% of the O&M expenses, goes unpaid. Eventually, these arrears are presented to the State Finance Department for clearance.
The scenario above exemplifies some of the pivotal challenges addressed by the reference applications built on iFIX.
mGramSeva - A mobile-based application that enables GPWSC members and collection agents to collect & manage revenue and expenditure.
iFIX Adapter - Transforms the Demand, Receipt, Bill and Payment information entered in mGramSeva into anonymized and standardized fiscal events. These are then posted on the iFIX Platform.
iFIX Platform - Stores micro-level fiscal data and provides secure standard APIs for source systems to post and query fiscal data in raw and aggregated form.
iFIX Dashboard - Provides a dashboard for the department to have a real-time view of the fiscal sustainability of various projects.
Master Name | Sample Data | Description |
---|---|---|
Field | Type | Description |
---|---|---|
Ideally, one would write the terraform script from scratch using this document.
Let's clone the GitHub repo where the terraform script to provision the AKS cluster is available; the structure of the files is shown below.
To manage a Kubernetes cluster, use the Kubernetes command-line client, kubectl. kubectl is already installed if you use Azure Cloud Shell; otherwise, install kubectl locally using the az aks install-cli command.
Configure kubectl to connect to your Kubernetes cluster using the az aks get-credentials command. This command uses ~/.kube/config, the default location for the Kubernetes configuration file. Specify a different location for your Kubernetes configuration file using --file.
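A minimal sketch of the two commands above (the resource group and cluster name are placeholders to be taken from your Terraform configuration):

```bash
# Install kubectl via the Azure CLI (not needed in Azure Cloud Shell).
az aks install-cli

# Merge the AKS cluster credentials into ~/.kube/config (or a custom path via --file).
az aks get-credentials --resource-group <your-resource-group> --name <your-aks-cluster>
```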
Voila! All set - now you can proceed with deploying the product.
iFix uses automated scripts (required v1.13.3) to deploy the builds on to Kubernetes - on Linux, Windows or Mac.
All iFix services are packaged using helm charts
kubectl is the CLI used to connect to the Kubernetes cluster from your machine
Install a tool for making API calls (e.g. curl)
Install an IDE such as Visual Studio Code for better code/configuration editing capabilities
The iFix services deployment configurations are in the iFix-DevOps repo, which needs to be forked and then cloned to your local machine
Install golang to run some DIGIT bootstrap scripts
Check the environment file that needs to be configured with any values specific to your needs. (For a quick start you can run it as is.)
Add the following entries in your hosts file (/etc/hosts) depending on your OS; instructions can be found online.
Job Builder – Job Builder is a generic Jenkins job that creates the Jenkins pipelines automatically, which are then used to build the application, create its docker image and push the image to the docker repository. The Job Builder job requires the git repository URL as a parameter. It clones the respective git repository, reads the build/build-config.yml file for each git repository and uses it to create the service build job.
Check whether the git repository URL is available in ci.yaml
1
Development is completed for all the features that are part of the release.
Yes
Shailesh
Satish N
Development is completed and tag is generated, https://github.com/egovernments/iFix-Dev/releases/tag/2.4.0
2
Test cases are documented by the QA team, reviewed by the product owners and test results are updated in the test cases sheet.
Yes
Anuraj
Shailesh
Test cases are reviewed by product owners.
3
The incremental demo of the features showcased during the sprint showcase and feedback incorporated. If possible list out the JIRA tickets for feedback.
Yes
Anuraj
Satish N
Internal demo was done on 14th March 2024.
4
UI/UX Audit review by UX Architect is completed along with feedback incorporation for any changes in UI/UX.
NA
5
Incremental demos to the product owners are completed as part of the sprint and feedbacks are incorporated.
Yes
Anuraj
Satish N
Internal demo was done on 14th March 2024.
6
QA signoff is completed by the QA team and communicated to the product owners. All the tickets QA signoff status is updated in the JIRA.
Yes
Anuraj
Satish N
7
UAT promotion and regression testing from the QA team are complete. The QA team has shared the UAT regression test cases with the product owners.
No
Anuraj
Satish N
8
The API backward compatibility testing is completed.
No
Anuraj
These are new services, so this is not applicable.
9
The communication is shared with the product owners for the completion of UAT promotion and regression by the QA team. The product owners have to give a Product signoff within one week of this communication.
No
Anuraj
Satish N
QA Sign off given on
March 21, 2024
10
UAT Product Signoff communication is received from the Product owners along with the release notes and User guides (if applicable).
Yes
Satish N
Prashanth
11
The GIT tags and releases are created for the code changes for the release.
Yes
Shailesh
Pritish
12
Verify whether the Release notes are updated
Yes
Shailesh
Pritish
13
Verify whether all UAT Builds are updated along with the GIT tag details.
Yes
Anuraj, Shailesh
Pritish
14
Verify whether all MDMS, Configs, InfraOps configs updated.
Yes
Anuraj
Shailesh
15
Verify whether all docs are published to PFM website by the Technical Writer as part of the release.
Yes
Satish N, Shailesh, Pritish, Anuraj, Sameer, Arindam, Anjoo
Anjoo
16
Verify whether all test cases are up to date and updated along with necessary permissions to view the test cases sheet. The test cases sheet is verified by the Product owner.
Yes
Anuraj
Shailesh
17
Verify whether the UAT credentials sheet is updated with the details of new Users and Roles if any
No
It will be internally shared
18
Verify whether all the localization data was added in UAT including Hindi and updated in Release Kits.
No
19
Verify whether the product release notes and user guides are updated and published
Yes
Satish N
Prashanth
User Manual - Not Required
20
The Demo of released features is done by the product team as part of the Sprint/Release Showcase.
In-Progress
Satish N
Because of a technical issue, the showcase was not completed.
21
Technical and Product workshops/demos are conducted by the Engineering and Product team to the implementation team (Impel handover)
Yes
Shailesh
Arindam
Implementation handover done on - 13th March 2024.
22
Architect SignOff and Technical Quality Report
Yes
Shailesh
Manish
Issues are documented in Known Issues. Also documenting this in JIRA for tech debt fixes.
23
Success Metrics and Product Roadmap
No
Success Metrics
Satish N
Success metrics would be same as MUKTA as it would be adopting the specifications.
24
Adoption Metrics
No
Adoption Metrics
Satish N
Prashanth
Same as MUKTA
25
Program Roll-out Plan
No
Pilot Roll-out plan
Sameer
Prashanth
26
Impel checklist
No
Impel Checklist
Arindam
Elzan
27
Impel roll-out plan
Yes
Arindam
Elzan
28
Gate 2
In progress
Gate 2 JIRA dashboard
Sameer, Satish N, Pritish, Arindam
Ex co
Gate 2 is scheduled on 28th March 2024
29
The Internal release communication along with all the release artefacts are shared by the Engineering team.
Satish N
This will be shared after Gate 2.
Head of accounts to be used for the Mukta scheme at the state level - to be provided by HUDD
Spending unit details specific to each ULB are to be provided by HUDD.
Field | Type | Description |
---|---|---|
signature | String | Signature for verification |
header | RequestHeader | Header for exchange between different domains. |
message | String | Contains the fiscal message. |
Field | Type | Description |
---|---|---|
message_id | String | Unique Identifier |
message_ts | String | Message time-stamp |
message_type | MessageType | Enum with values: program/on-program, sanction/on-sanction, allocation/on-allocation, disburse/on-disburse |
action | Action | Enum with values: create, update, search |
sender_id | String | Id of the service sending message |
sender_uri | String | Uri of the sending service |
receiver_id | String | Id of the receiver |
is_msg_encrypted | boolean | Specifies if message is encrypted |
Field | Type | Description |
---|---|---|
id | String(2, 64) | Unique identifier |
type | String(2, 64) | Type of message |
function_code | String(2, 64) | Major head code |
administration_code | String(2, 64) | Major head name |
recipient_segment_code | String(2, 64) | Major head code type |
economic_segment_code | String(2, 64) | Sub-Major head code |
source_of_fund_code | String(2, 64) | Sub-Major head name |
target_segment_code | String(2, 64) | Minor head code |
currency_code | String(2, 64) | Minor head name |
locale_code | String(2, 64) | Sub-Head code |
status | Status | Contains status code and status message. |
Field | Type | Description |
---|---|---|
location_code | String(2, 64) | tenantId |
program_code | String(2, 64) | Formatted unique identifier of program |
name | String(2, 64) | Name of Program |
parent_id | String(2, 64) | parentId of program |
description | String(2, 256) | description of program |
start_date | long | Start date of program |
end_date | long | End date of program |
additional_details | JsonNode | any additional details if required |
audit_details | AuditDetails | Captures created time, last modified time, created domain and last modified domain |
children | Program | Any children program |
Field | Type | Description |
---|---|---|
location_code | String(2, 64) | tenant id |
program_code | String(2, 64) | Code of the applicable program |
net_amount | Double | Sanctioned net amount |
gross_amount | Double | Sanctioned gross amount |
allocatedAmount | Double | Allocated amount |
availableAmount | Double | Available amount |
additional_details | JsonNode | any additional details if required |
audit_details | AuditDetails | Captures created time, last modified time, created domain and last modified domain |
children | List<Sanction> | List of children Sanctions |
Field | Type | Description |
---|---|---|
location_code | String(2, 64) | tenant id |
program_code | String(2, 64) | Formatted unique identifier of program |
sanction_id | String(2, 64) | Unique identifier of sanction |
net_amount | Double | Net allocated amount |
gross_amount | Double | Gross allocated amount |
allocation_type | AllocationType | Can be Allocation or Deduction |
additional_details | JsonNode | Any additional details if required |
audit_details | AuditDetails | Captures created time, last modified time, created domain and last modified domain |
children | List<Allocation> | List of children allocations |
Field | Type | Description |
---|---|---|
location_code | String(2, 64) | tenant id |
program_code | String(2, 64) | Formatted unique identifier of program |
target_id | String(2, 64) | Reference to payment number. |
parent_id | String(2, 64) | Parent disbursement id |
sanction_id | String(2, 64) | Id of the sanction the given disbursement belongs to. |
transaction_id | String(2, 64) | Reference to unique identifier received from ifms system |
account_code | String(2, 64) | Account number and IFSC to disburse to |
net_amount | Double | Net disbursement amount |
gross_amount | Double | Gross disbursement amount |
individual | Individual | Captures individual details such as name, phone, etc |
additional_details | JsonNode | Any additional details if required |
audit_details | AuditDetails | Captures created time, last modified time, created domain and last modified domain |
children | List<Disbursement> | List of children disbursements |
Ifix-Adapter is a system that works as a mediator between iFIX and its clients. It receives requests from the client system and converts the data into the iFIX-required format. This document contains the details on how to set up the iFIX-adapter service and describes the functionalities it supports. It supports multiple events (an event array) in a single request.
Before you proceed with the configuration, make sure the following pre-requisites are met -
Java 8
Kafka server is up and running
PSQL server is running
Redis
Following services should be up and running:
Client Service Like mgramseva-ifix-adapter
Target service IFIX- fiscal-event-service
Target Service IFIX-keycloak
Adapter master data service
IFIX client requests are pushed to IFIX.
The authentication token is fetched from keycloak and cached. Token is re-fetched 5 minutes before expiry.
The project_id from the request data is treated as the Department Entity Code to fetch the Department Entity.
The COA code is fetched from the COA mapping table by client code and cached in the Redis server.
Every push to IFIX is recorded in the table with its HTTP status:
2xx statuses are considered success
4xx statuses are marked as client errors
It collects the projectId from the request data, treats it as department_entity_code and calls the Department Entity Service search API. It always expects to receive only one Department Entity for a single department_entity_code; if it finds multiple, it raises an error message.
One project can have multiple department entities, but the reverse cannot be true. In case of multiple projects for one department entity, the system raises an error message.
Deploy the latest version of the ifix-reference-adapter.
Map clientcode, ifixcoacode, ifixid in ifix_adapter_coa_map table
“clientcode” is the tax head like “WATER_CHARGES” or ‘10011’ used in the IFIX client (e.g. mgramseva)
“ifixcoacode” is the 16-digit glcode in IFIX. If the 16-digit code is mapped, then the mapping can be ported to any environment, like dev to QA, QA to UAT, or UAT to prod. Prefer mapping ifixcoacode.
Example: INSERT INTO public.ifix_adapter_coa_map(id, clientcode, ifixcoacode, ifixid, tenantid) VALUES (1, '10101', '0215-01-102-00-00-01', '6cbcb4a1-2431-4f78-89d7-b4f0565aba37', 'pb');
state.goverment.code
Set this value to the client's top-level tenant_id.
Adapter master data service maintains information on Department, Expenditure and Projects. We can create these details and search for the same details based on the given parameters/request data.
Current version : 1.0.0
Before we proceed with the configuration, make sure the following pre-requisites are met
Java 8
MongoDB instance
Required service dependency - Department entity service
It creates secure endpoints for the master data service. The access token is required to create any master data. The subsequent sections on this page discuss the service details maintained by the master data service.
Maintains the department details and provides create and search functionality. The following information is passed while creating the department - the Government ID, department code, department name and parent department if any. Searching the department details is done on the given parameters like IDs, Government ID, department code and department name.
Maintains the expenditure details and provides create and search functionality. For creating the expenditure, the following details are required - the Government ID, the department ID, code, name and type (which can be "SCHEME" or "NON_SCHEME"). While searching the expenditure details, pass the given parameters like IDs, Government IDs, names or codes.
Maintains the project details and provides create and search functionality. The following details are required to create the project - Government, name, code, expenditure ID, department entity ID(s) and location IDs. While searching, pass the IDs, Government ID, name, code, expenditure ID or location ID.
No environment-specific variables are required for the environment (migration).
Update the DB and URI configurations in the dev.yaml, qa.yaml, prod.yaml file.
The Project Create API creates the project once the master data details (COA, Government, Expenditure, Department) and the Department Entity have been created. COA and Government have to be created in the iFIX core master data service.
The Project Create API takes the below attributes in the request:
tenantId: the ID defined while creating the iFIX core master Government service.
expenditureId: the ID generated while creating the Adapter master Expenditure service.
code: the project code that needs to be created.
name: the project name that needs to be created.
departmentEntityIds: the Department Entity IDs. If the project has to be created at hierarchy level 1, pass the Department Entity ID of that corresponding level; the ID depends on the department hierarchy level at which the project has to be created. You can pass a list of departmentEntityIds to create the same project under each of them.
For reference, below is a dummy project create example:
Request :
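Since only the attribute list above is available, the following is a hypothetical sketch of such a request; the endpoint path, request header fields and ID values are assumptions and should be replaced with values from your environment.

```bash
# Hypothetical project create call against the adapter master data service.
curl -X POST 'https://<ifix-adapter-host>/adapter-master-data/project/v1/_create' \
  -H 'Content-Type: application/json' \
  -d '{
        "requestHeader": { "ts": 1710316800000, "version": "1.0.0", "msgId": "project-create-001" },
        "project": {
          "tenantId": "pb",
          "expenditureId": "<expenditure-uuid>",
          "code": "PRJ-001",
          "name": "Dummy Water Supply Project",
          "departmentEntityIds": ["<department-entity-uuid>"]
        }
      }'
```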
Response:
Ifix-Adapter is a system that works as a mediator between iFix and its clients. It receives requests from the client system and converts the data into the iFix-required format. This document contains the details about how to set up the ifix-adapter service and describes the functionalities it provides.
Before you proceed with the configuration, make sure the following pre-requisites are met -
Java 8
Kafka server is up and running
PSQL server is running
Redis
Following services should be up and running:
Client Service Like mgramseva-ifix-adapter
Target service IFIX- fiscal-event-service
Target Service IFIX-keycloak
IFIX master-data-service
IFIX client requests are pushed to IFIX
The authentication token is fetched from keycloak and cached. Token will be re-fetched 5 minutes before expiry
project id is fetched from IFIX and cached
COA id fetched from IFIX and cached
Every push to IFIX is recorded in the table with its HTTP status:
2xx statuses are considered success
4xx statuses are marked as client errors and reported back to the client
5xx statuses are resubmitted by the scheduler
Deploy the latest version of ifix-reference-adapter
Map clientcode, ifixcoacode, ifixid in ifix_adapter_coa_map table
“clientcode” is the tax head like “WATER_CHARGES” or ‘10011’ used in the IFIX client (e.g. mgramseva)
“ifixcoacode” is the 16-digit glcode in IFIX. If the 16-digit code is mapped, then the mapping can be ported to any environment, like dev to QA, QA to UAT, or UAT to prod. Prefer mapping ifixcoacode.
Another way is to map the IFIX COA ID itself. Since these are generated IDs, you can't port them to other environments; the ID mapping has to be done for every environment.
Preference is given to the COA code; if it is null, the ID will be used.
Example: INSERT INTO public.ifix_adapter_coa_map(id, clientcode, ifixcoacode, ifixid, tenantid) VALUES (1, '10101', '0215-01-102-00-00-01', '6cbcb4a1-2431-4f78-89d7-b4f0565aba37', 'pb');
If the client "project code" and the IFIX project code are the same, then no mapping is needed. If they are different, map clientprojectcode and ifixprojectid in the ifix_adapter_project_map table. Ideally, you should keep both codes the same to get meaningful data on the dashboard; this way, you don't have to do any mapping for the project code in any environment. Only go for this mapping if, for any reason, you have different project codes in IFIX and its client, or multiple projects have the same project code. The adapter will first check IFIX for the supplied "projectCode"; if found, it uses it and caches it. If multiple projects are found, or none, it looks into this table for the mapping.
Example: INSERT INTO public.ifix_adapter_project_map(id, clientprojectcode, ifixprojectid, tenantid) VALUES (1, '7374', 'e42db9bb-8427-40a6-9939-4f2189d032bf', 'pb');
state.goverment.code
Set this value to the client's top-level tenantid.
Date/Time Filter
The date & time filter on the dashboard defaults to the current financial year, since most of the calculations and visualizations make sense when viewed from a fiscal-year perspective rather than an individual week/month view. However, users can change the dates and most charts will be filtered to the selected time range.
Department Hierarchy Filter
Currently, there are 6 levels of hierarchy as per administration set up by the Department of Water supply and Sanitation Punjab. These are State, Zone, Circle, Division, Sub Division, Section, Gram Panchayat.
All these filters are independent and work on the dashboard irrespective of whether other filters in the hierarchy are selected or not.
Surplus/Deficit
This number shows whether the selected Administrative entity is financially surplus or not. It also compares with the previous year using the “trend” visualization in Metabase.
Pending Collections
This card represents the pending collections of water charges for the selected administrative boundary. If the pending amount is null, the card displays zero.
Outstanding Electricity Bills
Since electricity bills form a major component of the expenditure for all projects, it is important to show how much in electricity bills is pending at each administrative entity. The total amount of pending bills filtered by electricity under COA gives us the pending electricity bills.
GPWSCs at Risk
Risky GPWSCs are divided into 3 types:
High Risk: Demand is less than the bill.
Medium Risk: Demand is more than the bill, but pending collections are less than pending bills.
Low Risk: Demand is more than the bill and pending collections are more than pending bills.
It is important to identify the risky GPWSCs and keep the DWSS officials informed, so that they can take the right actions before it's too late.
Collections & Expenditure Time Series Graphs
These charts represent the Demand, Net Collections, Bills and Net Payments across time (month-on-month) so that officials get a fair idea of how the amount being collected is being spent.
Any abnormality in the graphs (low collections or excessive spending) is something that needs attention.
Expenditure by Chart of Accounts
Expenditure is currently divided into 4 broad categories - Electricity Bills, Salaries, Operations and Maintenance, and Miscellaneous. The chart presents how these 4 categories add up to the total expenditure for the selected entity over time. Usually, 75% of the expenditure should be of the electricity type.
Department Hierarchy Table
Here, we represent the total Demand, Receipt, Bills and Payments at all levels of the hierarchy so that officials can, at any point, see the tables and compare the best performing entities instead of just viewing the charts and trends from the visualizations.
iFix Dashboard
Essentially, there are 2 stages that should allow you to use the full potential of DeploymentConfig and pipeline-as-code.
Stage 1: Clone the DevOps repo, choose your iFix product branch as iFix-adapter.
Prepare an <env.yaml> master config file, you can name this file as you wish which will have the following configurations, this env file need to be in line with your cluster name.
each service global, local env variables
credentials, secrets (You need to encrypt using sops and create a <env>-secret.yaml separately)
Number of replicas/scale of individual services (Depending on whether dev or prod)
mdms, config repos (Master Data, ULB, Tenant details, Users, etc)
sms g/w, email g/w, payment g/w
GMap key (In case you are using Google Map services in your PGR, PT, TL, etc)
S3 Bucket for Filestore
URL/DNS on which the DIGIT will be exposed
SSL Certificate for the above URL
End-points configs (Internal/external)
Stage 2: Run the iFix_Dashboard_setup deployment script and simply answer the questions that it asks.
iFix Adapter
Essentially, there are 2 stages that should allow you to use the full potential of DeploymentConfig and pipeline-as-code.
Stage 1: Clone the DevOps repo, choose your iFix product branch as iFix-adapter.
Prepare an <env.yaml> master config file, you can name this file as you wish which will have the following configurations, this env file need to be in line with your cluster name.
each service global, local env variables
credentials, secrets (You need to encrypt using sops and create a <env>-secret.yaml separately)
Number of replicas/scale of individual services (Depending on whether dev or prod)
mdms, config repos (Master Data, ULB, Tenant details, Users, etc)
sms g/w, email g/w, payment g/w
GMap key (In case you are using Google Map services in your PGR, PT, TL, etc)
S3 Bucket for Filestore
URL/DNS on which the DIGIT will be exposed
SSL Certificate for the above URL
End-points configs (Internal/external)
Stage 2: Run the iFix_Adapter_setup deployment script and simply answer the questions that it asks.
iFix Adapter
After the infra setup (Kubernetes cluster), we start by deploying Jenkins and the kaniko-cache-warmer.
Sub Domain to expose CI/CD URL
GitHub Oauth App
Docker hub account details (username and password)
SSL Certificate for the sub-domain
Prepare an <ci.yaml> master config file and <ci-secrets.yaml>, you can name this file as you wish which will have the following configurations.
credentials, secrets (You need to encrypt using sops and create a ci-secret.yaml separately)
Check and Update ci-secrets.yaml details (like github Oauth app clientId and clientSecret, GitHub user details gitReadSshPrivateKey and gitReadAccessToken etc..)
To create Jenkins namespace mark this flag true
Add your env's kubconfigs under kubConfigs like https://github.com/misdwss/iFix-DevOps/blob/mgramseva/deploy-as-code/helm/environments/ci-secrets.yaml#L19
KubeConfig env's name and deploymentJobs name from ci.yaml should be the same
Update the CIOps and DIGIT-DevOps repo name with your forked repo name and provide read-only access to github user to those repo's.
SSL Certificate for the sub-domain
You have launched the Jenkins. You can access the same through your sub-domain which you configured in ci.yaml.
The Jenkins CI pipeline is configured and managed 'as code'.
Example URL - https://<Jenkins_domain>
Since there are many services and the development code is part of various git repos, you need to understand the concept of cicd-as-service which is open-sourced. This page also guides you through the process of creating a CI/CD pipeline.
As a developer - To integrate any new service/app to the CI/CD below is the starting point:
Once the desired service is ready for integration, decide the service name, the type of service and whether DB migration is required or not. While you commit the source code of the service to the git repository, the following file should be added with the relevant details mentioned below:
build-config.yml - it is present under the build directory in each repository.
This file contains the below details which are used for creating the automated Jenkins pipeline job for your newly created service.
While integrating a new service/app, the above content needs to be added in the build-config.yml file of that app repository. For example: If we are onboarding a new service called egov-test, then the build-config.yml should be added as mentioned below.
If a job requires multiple images to be created (DB Migration) then it should be added as below,
Note - If a new repository is created then the build-config.yml should be created under the build folder and then the config values are added to it.
The git repository URL is then added to the Job Builder parameters
When the Jenkins job => job builder is executed, the CI pipeline gets created automatically based on the above details in build-config.yml. E.g. an egov-test job will be created under the core-services folder in Jenkins because the build-config was edited under core-services, and it should be on the "master" branch only. Once the pipeline job is created, it can be executed for any feature branch with build parameters (specifying which branch to build - master or any feature branch).
As a result of the pipeline execution, the respective app/service docker image will be built and pushed to the Docker repository.
Job Builder – Job Builder is a Generic Jenkins job that creates the Jenkins pipeline automatically which are then used to build the application, create the docker image of it and push the image to the docker repository. The Job Builder job requires the git repository URL as a parameter. It clones the respective git repository and reads the build/build-config.yml file for each git repository and uses it to create the service build job.
Check git repository URL is available in ci.yaml
If git repository URL is available build the Job-Builder Job
If the git repository URL is not available ask the Devops team to add it.
The services are deployed and managed on a Kubernetes cluster in cloud platforms like AWS, Azure, GCP, OpenStack, etc. Here, we use helm charts to manage and generate the Kubernetes manifest files and use them for further deployment to the respective Kubernetes cluster. Each service is created as a chart, which will have the below-mentioned files in it.
To deploy a new service, we need to create a helm chart for it. The chart should be created under the charts/helm directory in the iFix-DevOps repository.
We have an automatic helm chart generator utility that needs to be installed on the local machine. The utility prompts for user inputs about the newly developed service (app specifications) and creates the helm chart with the configuration values based on the inputs provided.
Name of the service? test-service
Application Type? NA
Kubernetes health checks to be enabled? Yes
Flyway DB migration container necessary? No
Expose service to the internet? Yes
Route through API gateway [zuul]? No
Context path? hello
The generated chart will have the following files.
This chart can also be modified further based on user requirements.
The deployment of manifests to the Kubernetes cluster is made very simple and easy. We have Jenkins jobs for each state, and they are environment-specific. We need to provide the image name or the service name in the respective Jenkins deployment job.
The deployment Jenkins job internally performs the following operations:
Reads the image name or the service name given and finds the chart specific to it.
Generates the Kubernetes manifest files from the chart using the Helm template engine.
Executes the deployment manifest with the specified Docker image(s) against the Kubernetes cluster.
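For reference, these steps can be approximated manually with Helm and kubectl; the chart path, environment file, image tag and namespace below are placeholders and will differ per setup:

```bash
# Rough manual equivalent of the deployment job (paths and names are placeholders)
helm template egov-test charts/helm/egov-test \
  -f environments/dev.yaml \
  --set image.tag=<build-tag> > egov-test-manifest.yaml

kubectl apply -f egov-test-manifest.yaml -n egov
```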
The MUKTA iFIX Adapter is a service designed to facilitate communication between the Expense Service and the Program Service. It acts as a mediator, listening for payment creation events from the Expense Service, enriching the payment request data, and generating disbursement requests. These disbursement requests are then sent to the Program Service for further processing.
Expense Service
Expense Calculator Service
MDMS Service
Bank Account Service
Individual Service
Organization Service
User Service
Program Service
Encryption Service
Base path: /mukta-ifix-adapter/
TBD
TBD
Fiscal Event Aggregator is a standalone Java application that runs as a cron job to aggregate fiscal event data from the Druid data store into the Postgres DB.
Current Version: 2.0.0
Before you proceed with the configuration, make sure the following pre-requisites are met
Java 8
Druid DB & Postgres DB should be up and running
Fiscal-Event-Aggregator computes the aggregate of data over a selected time period. The aggregator applies the time range filter using the following approach:
Fiscal periods are picked as per the current system time. The current fiscal period runs from the 1st of April of the current year to the 31st of March of (current year + 1). The aggregator also aggregates data for the previous fiscal year, from the 1st of April of (current year - 1) to the 31st of March of the current year.
Follow the steps below to aggregate the final fiscal event data as per the fiscal time periods.
Group the sum of the amount by department entity ancestry[6] id (attributes.departmentEntity.ancestry[6].id), that is the GP (Gram Panchayat), by COA (chart of account) id, and by event type (an illustrative query for this step is shown after this list).
Take the difference of the summed amounts of the "DEMAND" and "RECEIPT" event types for each distinct department entity ancestry[6] id (attributes.departmentEntity.ancestry[6].id), that is the GP (Gram Panchayat).
Take the difference of the summed amounts of the "BILL" and "PAYMENT" event types for each distinct department entity ancestry[6] id (attributes.departmentEntity.ancestry[6].id), that is the GP (Gram Panchayat).
Upsert the final aggregated fiscal event data into the Postgres DB.
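A minimal sketch of the grouping step as a Druid SQL query submitted to the broker's SQL endpoint; the datasource name, column names, broker host/port and the fiscal-year start date are assumptions based on this page and will vary per environment:

```bash
# Illustrative only: group amounts by GP, COA and event type for the current fiscal year.
# Datasource, column names, broker address and the date are placeholders for this sketch;
# the real job derives the fiscal-year boundaries from the current system date.
curl -s -X POST "http://druid-broker:8082/druid/v2/sql" \
  -H "Content-Type: application/json" \
  -d @- <<'EOF'
{
  "query": "SELECT \"attributes.departmentEntity.ancestry[6].id\" AS gp_id, \"coa.id\" AS coa_id, \"eventType\", SUM(\"amount\") AS total_amount FROM \"fiscal-events\" WHERE \"__time\" >= TIMESTAMP '2023-04-01' GROUP BY 1, 2, 3"
}
EOF
```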
Note: The environment variables below need to be configured with respect to the environment.
Update the configurations in the dev.yaml, qa.yml and prod.yaml files.
Post infra setup (Kubernetes cluster), the deployment process begins.
Pipeline as code is a practice of defining deployment pipelines through source code, such as Git. Pipeline as code is part of a larger "as code" movement that includes infrastructure as code. Teams can configure builds, tests, and deployment in code that is trackable and stored in a centralized source repository. Teams can use a declarative YAML approach or a vendor-specific programming language, such as Groovy for Jenkins, but the premise remains the same.
A pipeline as code file specifies the stages, jobs, and actions for a pipeline to perform. Because the file is versioned, changes in pipeline code can be tested in branches with the corresponding application release.
The pipeline as code model of creating continuous integration pipelines is an industry best practice, but deployment pipelines used to be created very differently.
The deployment process has 2 stages and 2 modes. Let us look at the modes first and then the stages.
Essentially, iFIX adapter deployment means generating Kubernetes manifests for each individual service. We use Helm, which is an easy, effective and customizable packaging and deployment solution. Depending on where and in which environment you initiate the deployment, there are 2 modes in which you can deploy:
From the local machine - which is what we have been trying in this sample exercise so far.
Advanced: Set up a CI/CD system like Jenkins - the steps vary depending on how you want to set up your CI/CD and your expertise; here you can find how eGov has set up an exemplar CI/CD on Jenkins, where the pipelines are created automatically without any manual intervention.
All content on this page by eGov Foundation is licensed under a Creative Commons Attribution 4.0 International License.
Follow the steps below to create the Adapter Master Data. Individual Adapter Service documents can be accessed here.
Enter valid details along with the tenant ID to create the Department; a sample request for this step is shown after these steps. Once the Department is created, the response contains an ID and related details. This ID is the Department ID.
Enter valid details along with the tenant ID to create the Expenditure. Once the Expenditure is created, the response contains an ID and related details. This ID is the Expenditure ID.
Enter valid hierarchy details of the master department to create the Department Hierarchy. Check this document for more information.
Provide valid details to create the Department Entity. Check this document for more information.
On successful completion of steps 1 to 4, enter valid details to create a project. Check this document for more information.
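As a rough sketch of step 1 (the host placeholder, base path and request body fields are assumptions; refer to the Department service contract linked above for the exact schema), a Department create call could look like this:

```bash
# Illustrative Department create request; host, base path and body fields are
# assumptions -- check the Department service Swagger contract before use.
curl -s -X POST "https://<ifix-host>/department/v1/_create" \
  -H "Content-Type: application/json" \
  -d @- <<'EOF'
{
  "requestHeader": { "ts": 1628177497000, "version": "1.0.0", "msgId": "dept-create-001" },
  "department": {
    "tenantId": "pb",
    "code": "DWSS",
    "name": "Department of Water Supply & Sanitation"
  }
}
EOF
```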
All content on this page by eGov Foundation is licensed under a Creative Commons Attribution 4.0 International License.
Tools used to create the OLAP system for iFIX
The iFIX dashboard is developed using open-source tools that make up a complete OLAP system: it streams data from various sources, transforms and processes the data, and visualises it using dashboards.
All content on this page by eGov Foundation is licensed under a Creative Commons Attribution 4.0 International License.
Fiscal Event services flatten each fiscal event line item and post them into Druid via the Kafka Druid connector. The raw events are stored in the fiscal-events dataset in Druid. Metabase is used for visualisations.
The flattened fiscal event consists of the following attributes
iFIX Dashboard is built on Metabase and can be easily configured to develop various dashboards.
The below dashboard provides information about the fiscal position of various projects. The dashboard can be filtered for various date ranges and departmental hierarchies.
All content on this page by eGov Foundation is licensed under a Creative Commons Attribution 4.0 International License.
Post infra setup (Kubernetes cluster), the deployment has 2 stages and 2 modes. Let us look at the stages first and then the modes.
Essentially, iFIX dashboard deployment means generating Kubernetes manifests for each individual service of the required OLAP components, such as Druid and Metabase. We use Helm, which is an easy, effective and customizable packaging and deployment solution. Depending on where and in which environment you initiate the deployment, there are 2 modes in which you can deploy:
From the local machine - which is what we have been trying in this sample exercise so far.
Advanced: Set up a CI/CD system like Jenkins - the steps vary depending on how you want to set up your CI/CD and your expertise; here you can find how eGov has set up an exemplar CI/CD on Jenkins, where the pipelines are created automatically without any manual intervention.
You can choose the infra type and environment: either a single fat server or a distributed setup, on Docker Compose or Kubernetes.
All content on this page by eGov Foundation is licensed under a Creative Commons Attribution 4.0 International License.
Reference Dashboard
This section contains details and information about the iFIX Reference Dashboard. Click on the links below to learn more.
All content on this page by eGov Foundation is licensed under a Creative Commons Attribution 4.0 International License.
Check out the coverage of news and events linked to the PFM platform
Punjab to build first of its kind fiscal information exchange platform
Punjab villages are now utilizing the mGramSeva app to conveniently pay their water bills. Developed on the DIGIT platform, mGramSeva is a mobile application specifically designed to handle the collection and management of revenue and expenditures.
Below are the shared snippets of this news.
Dainik Bhaskar dated 02-06-2023
RozanaSpokesman dated 02-06-2023
Jag Bani dated 02-06-2023
iFix Dashboard
Post infra setup (Kubernetes cluster), we start by deploying Jenkins and the kaniko-cache-warmer.
Sub Domain to expose CI/CD URL
GitHub account with credentials (username and password)
SSL Certificate for the sub-domain
Prepare an <> master config file and <>; you can name this file as you wish. It will have the following configurations:
credentials and secrets (these need to be encrypted and added to a separate ci-secret.yaml)
Check and update the details (such as the GitHub OAuth app clientId and clientSecret, and GitHub user details like gitReadSshPrivateKey and gitReadAccessToken)
To create the Jenkins namespace, mark this as true
Add your environments' kubeconfigs under kubConfigs
The kubeConfig environment name and the deploymentJobs name in ci.yaml should be the same
Update the repo names with your forked repo names and provide the GitHub user read-only access to those repos
SSL Certificate for the sub-domain
Jenkins is now launched. You can access it through the sub-domain you configured in ci.yaml.
The Jenkins CI pipeline is configured and managed 'as code'.
Example URL - https://<Jenkins_domain>
Since there are many services and the development code lives in various Git repos, you need to understand the concept of CI/CD-as-a-service, which is open-sourced. This page also guides you through the process of creating a CI/CD pipeline.
All content on this page by eGov Foundation is licensed under a Creative Commons Attribution 4.0 International License.
All content on this page by eGov Foundation is licensed under a Creative Commons Attribution 4.0 International License.
| Environment Variables | Description |
|---|---|
| kafka.topics.ifix.adaptor.mapper | Topic in which client requests are put; further listening and posting happens from this topic |
| keycloak.host | Host name of the Keycloak authentication token provider |
| keycloak.token.url | Keycloak authentication token URL |
| keycloak.credentials.clientid | User id for the authentication token |
| keycloak.credentials.clientsecret | Password for the authentication token |
| ifix.host | Host name of the iFIX server |
| ifix.event.url | iFIX post URL |
| spring.redis.host | Host name of the Redis server |
| state.goverment.code | Top-level tenant id of the client |
| spring.jpa.properties.hibernate.dialect | Dialect for JPA; can be changed to Oracle, MySQL, etc. |
| spring.jpa.properties.hibernate.jdbc.lob.non_contextual_creation | Generates the required tables in the respective database; used instead of Flyway to keep the service database-independent |
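As an indicative example, these properties can be supplied to the container as environment variables using Spring Boot's relaxed binding of property names; the hosts, URLs and credentials below are placeholders, not real values:

```bash
# Illustrative only: Spring property names map to env vars via relaxed binding.
# All hosts, realms and credentials are placeholders for this sketch.
export KEYCLOAK_HOST="https://keycloak.example.org"
export KEYCLOAK_TOKEN_URL="$KEYCLOAK_HOST/auth/realms/<realm>/protocol/openid-connect/token"
export KEYCLOAK_CREDENTIALS_CLIENTID="<client-id>"
export KEYCLOAK_CREDENTIALS_CLIENTSECRET="<client-secret>"
export IFIX_HOST="https://ifix.example.org"
export SPRING_REDIS_HOST="redis.backbone"
export STATE_GOVERMENT_CODE="pb"
```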
| Link |
|---|
| API Swagger document |
| Postman |
| API | Description |
|---|---|
| events/v1/_push | API for receiving data from the client (mgram). This is the only API present in the adapter. |
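A rough sketch of the call shape for this endpoint; the host and base path are placeholders, and the request body schema should be taken from the Swagger document linked above:

```bash
# Illustrative call shape only; see the adapter's Swagger contract for the actual schema.
curl -s -X POST "https://<adapter-host>/<base-path>/events/v1/_push" \
  -H "Content-Type: application/json" \
  -d @event-push-request.json   # request body prepared as per the Swagger contract
```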
| Title | Link |
|---|---|
| /department/v1/_create | |
| /department/v1/_search | |

| Title | Link |
|---|---|
| /expenditure/v1/_create | |
| /expenditure/v1/_search | |

| Title | Link |
|---|---|
| /project/v1/_create | |
| /project/v1/_search | |

| Title | Link |
|---|---|
| Swagger Yaml | |
| Postman collection | |
| Environment Variables | Description |
|---|---|
| kafka.topics.ifix.adaptor.mapper | Topic in which client requests are put; further listening and posting happens from this topic |
| keycloak.host | Host name of the Keycloak authentication token provider |
| keycloak.token.url | Keycloak authentication token URL |
| keycloak.credentials.clientid | User id for the authentication token |
| keycloak.credentials.clientsecret | Password for the authentication token |
| ifix.host | Host name of the iFIX server |
| ifix.event.url | iFIX post URL |
| spring.redis.host | Host name of the Redis server |
| state.goverment.code | Top-level tenant id of the client |
| ifix.coa.search.url | URL for COA search in iFIX |
| ifix.project.search.url | URL for the project code search in iFIX |
| spring.jpa.properties.hibernate.dialect | Dialect for JPA; can be changed to Oracle, MySQL, etc. |
| spring.jpa.properties.hibernate.jdbc.lob.non_contextual_creation | Generates the required tables in the respective database; used instead of Flyway to keep the service database-independent |
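Since the adapter authenticates against Keycloak using the client id and secret listed above, a quick way to verify those credentials is the standard Keycloak client-credentials token call; the host, path prefix and realm below are placeholders and may vary by Keycloak version:

```bash
# Verify the Keycloak client credentials used by the adapter (illustrative placeholders).
curl -s -X POST "https://<keycloak-host>/auth/realms/<realm>/protocol/openid-connect/token" \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "grant_type=client_credentials" \
  -d "client_id=<keycloak.credentials.clientid>" \
  -d "client_secret=<keycloak.credentials.clientsecret>"
```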
| API | Description |
|---|---|
| events/v1/_push | API for receiving data from the client (mgram). This is the only API present in the adapter. |
| Attribute | Type / Example | Description |
|---|---|---|
| version | string, example: 1.0.0 | Version of the Data Model Definition |
| id | string, example: 51c9c03c-1607-4dd5-9e0e-93bbf860f6f7 | System generated UUID of the Line Item |
| eventId | string, example: fecbbf1d-d6e3-4f24-9935-02c33b9248e0 | Fiscal Event Reference Id |
| tenantId | string, nullable: false, example: pb | Tenant Id |
| government.id | string, example: pb | |
| government.name | string, example: Punjab | |
| department.id | string, example: 5d664a9f-9367-458a-aa5f-07fb18b90adc | Unique system generated UUID |
| department.code | string, example: DWSS | Unique department code |
| department.name | string, example: Department of Water Supply & Sanitation | Name of the department |
| expenditure.id | string, example: d334d99a-b5c1-426c-942b-f11b5b5454fe | Unique system generated UUID |
| expenditure.code | string, example: JJM | Unique Expenditure code |
| expenditure.name | string, example: Jal Jeevan Mission | Name of the Expenditure |
| expenditure.type | string | Type of the Expenditure. Enum: Array [ 2 ] |
| project.id | string, example: 6ab1b1d2-e224-46fa-b53b-ac83b3c7ce95 | Unique system generated UUID |
| project.code | string, example: PWT | Unique Project code |
| project.name | string, example: Peepli Water Tank | Name of the Project |
| eventType | string, nullable: false, example: Appropriation | Captures the event type, e.g. Demand, Receipt, Bill, Payment |
| eventTime | integer($int64), example: 1628177497000 | When the event occurred at the source system level |
| referenceId | string, example: 013e9c56-8207-4dac-9f4d-f1e20bd824e7 | Reference unique id (transaction id) of the caller system |
| parentEventId | string, nullable: true, example: 7d476bb0-bc9f-48e2-8ad4-5a4a36220779 | If this is a follow-up event, it refers to the parent event using this reference id |
| parentReferenceId | string, nullable: true, example: 77f23efe-879d-407b-8f23-7b8dd5b2ecb1 | If this is a follow-up event, it refers to the parent event in the source system using this reference id |
| amount | number, example: 10234.5 | Transaction Amount |
| coa.id | string, example: e9f940d4-69aa-4bbb-aa82-111b8948a6b6 | Unique system generated UUID |
| coa.coaCode | string, example: 1234-123-123-12-12-12 | Chart of account concatenated string |
| coa.majorHead | string, example: 1234 | Major head code |
| coa.majorHeadName | string | Major head name |
| coa.majorHeadType | string, example: Revenue | Major head code type |
| coa.subMajorHead | string, example: 123 | Sub-Major head code |
| coa.subMajorHeadName | string | Sub-Major head name |
| coa.minorHead | string, example: 123 | Minor head code |
| coa.minorHeadName | string | Minor head name |
| coa.subHead | string, example: 12 | Sub-Head code |
| coa.subHeadName | string | Sub-Head name |
| coa.groupHead | string, example: 12 | Group head code |
| coa.groupHeadName | string | Group head name |
| coa.objectHead | string, example: 12 | Object head code |
| coa.objectHeadName | string | Object head name |
| fromBillingPeriod | integer($int64), example: 1622907239000 | Start date of the billing period for which the transaction is applicable |
| toBillingPeriod | integer($int64), example: 1628177643000 | End date of the billing period for which the transaction is applicable |
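For orientation, a single flattened line item assembled purely from the example values in the table above might look like the following; this is illustrative only, and the exact set of flattened columns stored in Druid may differ:

```bash
# Illustrative flattened fiscal event line item, built from the example values above.
cat > sample-line-item.json <<'EOF'
{
  "version": "1.0.0",
  "id": "51c9c03c-1607-4dd5-9e0e-93bbf860f6f7",
  "eventId": "fecbbf1d-d6e3-4f24-9935-02c33b9248e0",
  "tenantId": "pb",
  "government.id": "pb",
  "government.name": "Punjab",
  "department.code": "DWSS",
  "expenditure.code": "JJM",
  "project.code": "PWT",
  "eventType": "Appropriation",
  "eventTime": 1628177497000,
  "amount": 10234.5,
  "coa.coaCode": "1234-123-123-12-12-12",
  "fromBillingPeriod": 1622907239000,
  "toBillingPeriod": 1628177643000
}
EOF
```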
| Key | Value | Description |
|---|---|---|
| | | This is a hardcoded value and won't change w.r.t. the environment. It depends upon the Druid broker's protocol that is used to connect. |
| | | This is a hardcoded value and won't change w.r.t. the environment. It depends upon the Druid broker protocol that we are using and the corresponding port of that Druid broker. |
| | | This is kept under |
| | | This is the data source present in Druid DB. It is the same as defined in Druid DB. |
| Program | Tech Team | Program Team | Partners | Stakeholders |
|---|---|---|---|---|
| iFIX | Satish N | Prashanth C | BMGF | Finance Department, Punjab |
| | | Sameer Khurana | Janaagraha | Department of Rural Development and Panchayats, Punjab |
| | | Satwinder Kaur | | Department of Social Security, Women & Child Development, Punjab |
| | | | CBGA, CSEP etc (get confirmation from Ameya) | Department of Local Government, Punjab |
| | | | ODI | Department of Public Works, Punjab |
| | | | PD (Public Digital) | Department of Social Justice, Empowerment and Minorities |
| | | | ThinkWell | Department of Governance Reforms, Punjab |
| mGramSeva | Pradeep Kumar | Ajay Bansal | J-PAL | Water Supply and Sanitation Department, Punjab |
| | | Nirupama Karanam | IIM-B | PSPCL |
| | | Rahul Dev Garg | PayGov | |
| | | Saloni Bajaj | | |
| | | Debasish Chakaraborty | | |
| | | Ramkrishana Sahoo | | |
| | | Snehal Gothe | | |
| MUKTASoft | Nirbhay Singh | Sourav Mohanty | Janaagraha | State Urban Development Agency, Odisha |
| | | Arindam Gupta | MSC | Housing and Urban Development Department, Odisha |
| | | Elzan | E&Y - PMU (HUDD) | ULBs - Urban Local Bodies, Odisha (Jatni and Dhenkanal) |
| | | Subhashini | CBOs (SHGs/SDAs) | |
| | | | DTI Odisha | |
Enables exchange of disburse related messages
Signature of {header}+{message} body verified using sender's signing public key
"TgE1hcA2E+YPMdPGz4vveKQpR0x+pgzRTlet52qh63Kekr71vWWScXqaRFtQW64uRFZGBUhHYYZQ2y6LffwnNOOQhhssaThhqVBhXNEwX9i75SNYXi5XSJVDYzSyHrhF20HW6RE9mAVWdc80i7d+FXlh+b/U+fnj+SrZ2s6Xd0WUZvU29LgqeUpyznlWLu1mDdJxNZavsDLWmxjTnknqBjDvwSc35WhFDhXDA2lWmm8YpZ1Y6TBmvvtVS7mAOTnhFy9sdCbrLcfXk5QWIsdzlvPqlkvdwEf30OZ6ewb680Aj3hO2OT5LCv7iLyz7C7srnB9lJT5gXiw+eSnktPXlDA=="
"123"
1708428280
identity payload type in message property.
"program@https://spp.example.org"
"https://spp.example.org/{namespace}/callback/on-search"
Receiver id registered with the calling system; used for authorization, encryption, digital signature verification, etc.
"program@https://pymts.example.org"
"update"
"251c51eb-e970-4e01-a99a-70136c47a934"
Unique identifier of the tenant that could be a department/ulb/state
"pb.jalandhar,dwss"
Formatted program code: PROG/{FINANCIAL_YEAR}/{ULB-CODE}/{AUTO_NUMBER}
"PORG/2023-24/PG.CITYA/00001"
Reference id of the works payment
"EP/0/2023-24/08/14/000267, 251c51eb-e970-4e01-a99a-70136c47a934"
Formatted transaction code; returned after the transaction is created at the other end.
"PI/2023-24/00001,BENF/2023-24/00001"
"251c51eb-e970-4e01-a99a-70136c47a934"
Account number with IFSC code forms the account code, in the format ACCOUNT_NO@IFSC_CODE
"1234567890@SBIN0003491"
Amount to be paid including deduction amount
1000
Actual amount to be paid without deduction
1000
"SUCCESSFUL"
Additional JSON property object to hold custom user-defined contextual data
Collection of audit related fields used by most models
username (preferred) or userid of the user that created the object
username (preferred) or userid of the user that last modified the object
epoch of the time object is created
epoch of the time object is last modified
"FC001"
"AC002"
"RSC003"
"ESC004"
"SFC005"
"TSC006"
"INR"
"LC007"
"251c51eb-e970-4e01-a99a-70136c47a934"
Unique identifier of the tenant that could be a department/ulb/state
"pb.jalandhar,dwss"
Formatted program code: PROG/{FINANCIAL_YEAR}/{ULB-CODE}/{AUTO_NUMBER}
"PORG/2023-24/PG.CITYA/00001"
Reference id of the works payment
"EP/0/2023-24/08/14/000267, 251c51eb-e970-4e01-a99a-70136c47a934"
Formatted transaction code; returned after the transaction is created at the other end.
"PI/2023-24/00001,BENF/2023-24/00001"
"251c51eb-e970-4e01-a99a-70136c47a934"
Account number with IFSC code forms the account code, in the format ACCOUNT_NO@IFSC_CODE
"1234567890@SBIN0003491"
Amount to be paid including deduction amount
1000
Actual amount to be paid without deduction
1000
"SUCCESSFUL"
Additional JSON property object to hold custom user-defined contextual data
Collection of audit related fields used by most models
username (preferred) or userid of the user that created the object
username (preferred) or userid of the user that last modified the object
epoch of the time object is created
epoch of the time object is last modified
"FC001"
"AC002"
"RSC003"
"ESC004"
"SFC005"
"TSC006"
"INR"
"LC007"
HTTP layer error details
Updated status of create disburse request.
Signature of {header}+{message} body verified using sender's signing public key
"TgE1hcA2E+YPMdPGz4vveKQpR0x+pgzRTlet52qh63Kekr71vWWScXqaRFtQW64uRFZGBUhHYYZQ2y6LffwnNOOQhhssaThhqVBhXNEwX9i75SNYXi5XSJVDYzSyHrhF20HW6RE9mAVWdc80i7d+FXlh+b/U+fnj+SrZ2s6Xd0WUZvU29LgqeUpyznlWLu1mDdJxNZavsDLWmxjTnknqBjDvwSc35WhFDhXDA2lWmm8YpZ1Y6TBmvvtVS7mAOTnhFy9sdCbrLcfXk5QWIsdzlvPqlkvdwEf30OZ6ewb680Aj3hO2OT5LCv7iLyz7C7srnB9lJT5gXiw+eSnktPXlDA=="
"123"
1708428280
identity payload type in message property.
"program@https://spp.example.org"
"https://spp.example.org/{namespace}/callback/on-search"
Receiver id registered with the calling system; used for authorization, encryption, digital signature verification, etc.
"program@https://pymts.example.org"
"update"
"251c51eb-e970-4e01-a99a-70136c47a934"
Unique identifier of the tenant that could be a department/ulb/state
"pb.jalandhar,dwss"
Formatted program code: PROG/{FINANCIAL_YEAR}/{ULB-CODE}/{AUTO_NUMBER}
"PORG/2023-24/PG.CITYA/00001"
"251c51eb-e970-4e01-a99a-70136c47a934"
"SUCCESSFUL"
Additional JSON property object to hold custom user-defined contextual data
Collection of audit related fields used by most models
username (preferred) or userid of the user that created the object
username (preferred) or userid of the user that last modified the object
epoch of the time object is created
epoch of the time object is last modified
"FC001"
"AC002"
"RSC003"
"ESC004"
"SFC005"
"TSC006"
"INR"
"LC007"
HTTP layer error details
Enables updating sanction details such as status; the remaining fields are immutable
Signature of {header}+{message} body verified using sender's signing public key
"TgE1hcA2E+YPMdPGz4vveKQpR0x+pgzRTlet52qh63Kekr71vWWScXqaRFtQW64uRFZGBUhHYYZQ2y6LffwnNOOQhhssaThhqVBhXNEwX9i75SNYXi5XSJVDYzSyHrhF20HW6RE9mAVWdc80i7d+FXlh+b/U+fnj+SrZ2s6Xd0WUZvU29LgqeUpyznlWLu1mDdJxNZavsDLWmxjTnknqBjDvwSc35WhFDhXDA2lWmm8YpZ1Y6TBmvvtVS7mAOTnhFy9sdCbrLcfXk5QWIsdzlvPqlkvdwEf30OZ6ewb680Aj3hO2OT5LCv7iLyz7C7srnB9lJT5gXiw+eSnktPXlDA=="
"123"
1708428280
identity payload type in message property.
"program@https://spp.example.org"
"https://spp.example.org/{namespace}/callback/on-search"
Receiver id registered with the calling system; used for authorization, encryption, digital signature verification, etc.
"program@https://pymts.example.org"
"update"
"251c51eb-e970-4e01-a99a-70136c47a934"
Unique identifier of the tenant that could be a department/ulb/state
"pb.jalandhar,dwss"
Formatted program code: PROG/{FINANCIAL_YEAR}/{ULB-CODE}/{AUTO_NUMBER}
"PORG/2023-24/PG.CITYA/00001"
"SUCCESSFUL"
Additional JSON property object to hold custom user-defined contextual data
Collection of audit related fields used by most models
username (preferred) or userid of the user that created the object
username (preferred) or userid of the user that last modified the object
epoch of the time object is created
epoch of the time object is last modified
"FC001"
"AC002"
"RSC003"
"ESC004"
"SFC005"
"TSC006"
"INR"
"LC007"
HTTP layer error details
Updated status of create disburse request
Signature of {header}+{message} body verified using sender's signing public key
"TgE1hcA2E+YPMdPGz4vveKQpR0x+pgzRTlet52qh63Kekr71vWWScXqaRFtQW64uRFZGBUhHYYZQ2y6LffwnNOOQhhssaThhqVBhXNEwX9i75SNYXi5XSJVDYzSyHrhF20HW6RE9mAVWdc80i7d+FXlh+b/U+fnj+SrZ2s6Xd0WUZvU29LgqeUpyznlWLu1mDdJxNZavsDLWmxjTnknqBjDvwSc35WhFDhXDA2lWmm8YpZ1Y6TBmvvtVS7mAOTnhFy9sdCbrLcfXk5QWIsdzlvPqlkvdwEf30OZ6ewb680Aj3hO2OT5LCv7iLyz7C7srnB9lJT5gXiw+eSnktPXlDA=="
"123"
1708428280
identity payload type in message property.
"program@https://spp.example.org"
"https://spp.example.org/{namespace}/callback/on-search"
Receiver id registered with the calling system; used for authorization, encryption, digital signature verification, etc.
"program@https://pymts.example.org"
"create"
"251c51eb-e970-4e01-a99a-70136c47a934"
Unique identifier of the tenant that could be a department/ulb/state
"pb.jalandhar,dwss"
Formatted program code: PROG/{FINANCIAL_YEAR}/{ULB-CODE}/{AUTO_NUMBER}
"PORG/2023-24/PG.CITYA/00001"
Reference id of the works payment
"EP/0/2023-24/08/14/000267, 251c51eb-e970-4e01-a99a-70136c47a934"
Formatted transaction code; returned after the transaction is created at the other end.
"PI/2023-24/00001,BENF/2023-24/00001"
"251c51eb-e970-4e01-a99a-70136c47a934"
Account number with IFSC code forms the account code, in the format ACCOUNT_NO@IFSC_CODE
"1234567890@SBIN0003491"
Amount to be paid including deduction amount
1000
Actual amount to be paid without deduction
1000
"SUCCESSFUL"
Additional JSON property object to hold custom user-defined contextual data
Collection of audit related fields used by most models
username (preferred) or userid of the user that created the object
username (preferred) or userid of the user that last modified the object
epoch of the time object is created
epoch of the time object is last modified
"FC001"
"AC002"
"RSC003"
"ESC004"
"SFC005"
"TSC006"
"INR"
"LC007"
"251c51eb-e970-4e01-a99a-70136c47a934"
Unique identifier of the tenant that could be a department/ulb/state
"pb.jalandhar,dwss"
Formatted program code: PROG/{FINANCIAL_YEAR}/{ULB-CODE}/{AUTO_NUMBER}
"PORG/2023-24/PG.CITYA/00001"
Reference id of the works payment
"EP/0/2023-24/08/14/000267, 251c51eb-e970-4e01-a99a-70136c47a934"
Formatted transaction code; returned after the transaction is created at the other end.
"PI/2023-24/00001,BENF/2023-24/00001"
"251c51eb-e970-4e01-a99a-70136c47a934"
Account number with IFSC code forms the account code, in the format ACCOUNT_NO@IFSC_CODE
"1234567890@SBIN0003491"
Amount to be paid including deduction amount
1000
Actual amount to be paid without deduction
1000
"SUCCESSFUL"
Additional JSON property object to hold custom user-defined contextual data
Collection of audit related fields used by most models
username (preferred) or userid of the user that created the object
username (preferred) or userid of the user that last modified the object
epoch of the time object is created
epoch of the time object is last modified
"FC001"
"AC002"
"RSC003"
"ESC004"
"SFC005"
"TSC006"
"INR"
"LC007"
HTTP layer error details
Enables exchange of program related messages
Signature of {header}+{message} body verified using sender's signing public key
"TgE1hcA2E+YPMdPGz4vveKQpR0x+pgzRTlet52qh63Kekr71vWWScXqaRFtQW64uRFZGBUhHYYZQ2y6LffwnNOOQhhssaThhqVBhXNEwX9i75SNYXi5XSJVDYzSyHrhF20HW6RE9mAVWdc80i7d+FXlh+b/U+fnj+SrZ2s6Xd0WUZvU29LgqeUpyznlWLu1mDdJxNZavsDLWmxjTnknqBjDvwSc35WhFDhXDA2lWmm8YpZ1Y6TBmvvtVS7mAOTnhFy9sdCbrLcfXk5QWIsdzlvPqlkvdwEf30OZ6ewb680Aj3hO2OT5LCv7iLyz7C7srnB9lJT5gXiw+eSnktPXlDA=="
"123"
1708428280
identity payload type in message property.
"program@https://spp.example.org"
"https://spp.example.org/{namespace}/callback/on-search"
Receiver id registered with the calling system; used for authorization, encryption, digital signature verification, etc.
"program@https://pymts.example.org"
"251c51eb-e970-4e01-a99a-70136c47a934"
Unique identifier of the tenant that could be a department/ulb/state
"pb.jalandhar,dwss"
Formatted program code: PROG/{FINANCIAL_YEAR}/{ULB-CODE}/{AUTO_NUMBER}
"PORG/2023-24/PG.CITYA/00001"
Name of the program
"Community Development Initiative"
Parent Id of a program
"251c51eb-e970-4e01-a99a-70136c47a934"
Description of the program
"Empowering local communities through sustainable development projects."
1672531200
1704067200
"SUCCESSFUL"
"ACTIVE"
Additional JSON property object to hold custom user-defined contextual data
Collection of audit related fields used by most models
username (preferred) or userid of the user that created the object
username (preferred) or userid of the user that last modified the object
epoch of the time object is created
epoch of the time object is last modified
"FC001"
"AC002"
"RSC003"
"ESC004"
"SFC005"
"TSC006"
"INR"
"LC007"
HTTP layer error details
Enables exchange of program related messages
Signature of {header}+{message} body verified using sender's signing public key
"TgE1hcA2E+YPMdPGz4vveKQpR0x+pgzRTlet52qh63Kekr71vWWScXqaRFtQW64uRFZGBUhHYYZQ2y6LffwnNOOQhhssaThhqVBhXNEwX9i75SNYXi5XSJVDYzSyHrhF20HW6RE9mAVWdc80i7d+FXlh+b/U+fnj+SrZ2s6Xd0WUZvU29LgqeUpyznlWLu1mDdJxNZavsDLWmxjTnknqBjDvwSc35WhFDhXDA2lWmm8YpZ1Y6TBmvvtVS7mAOTnhFy9sdCbrLcfXk5QWIsdzlvPqlkvdwEf30OZ6ewb680Aj3hO2OT5LCv7iLyz7C7srnB9lJT5gXiw+eSnktPXlDA=="
"123"
1708428280
identity payload type in message property.
"program@https://spp.example.org"
"https://spp.example.org/{namespace}/callback/on-search"
Receiver id registered with the calling system; used for authorization, encryption, digital signature verification, etc.
"program@https://pymts.example.org"
"create"
"251c51eb-e970-4e01-a99a-70136c47a934"
Unique identifier of the tenant that could be a department/ulb/state
"pb.jalandhar,dwss"
Formatted program code: PROG/{FINANCIAL_YEAR}/{ULB-CODE}/{AUTO_NUMBER}
"PORG/2023-24/PG.CITYA/00001"
Name of the program
"Community Development Initiative"
Parent Id of a program
"251c51eb-e970-4e01-a99a-70136c47a934"
Description of the program
"Empowering local communities through sustainable development projects."
1672531200
1704067200
"SUCCESSFUL"
"ACTIVE"
Additional JSON property object to hold custom user-defined contextual data
Collection of audit related fields used by most models
username (preferred) or userid of the user that created the object
username (preferred) or userid of the user that last modified the object
epoch of the time object is created
epoch of the time object is last modified
"FC001"
"AC002"
"RSC003"
"ESC004"
"SFC005"
"TSC006"
"INR"
"LC007"
HTTP layer error details
Enables exchange of program related messages
Signature of {header}+{message} body verified using sender's signing public key
"TgE1hcA2E+YPMdPGz4vveKQpR0x+pgzRTlet52qh63Kekr71vWWScXqaRFtQW64uRFZGBUhHYYZQ2y6LffwnNOOQhhssaThhqVBhXNEwX9i75SNYXi5XSJVDYzSyHrhF20HW6RE9mAVWdc80i7d+FXlh+b/U+fnj+SrZ2s6Xd0WUZvU29LgqeUpyznlWLu1mDdJxNZavsDLWmxjTnknqBjDvwSc35WhFDhXDA2lWmm8YpZ1Y6TBmvvtVS7mAOTnhFy9sdCbrLcfXk5QWIsdzlvPqlkvdwEf30OZ6ewb680Aj3hO2OT5LCv7iLyz7C7srnB9lJT5gXiw+eSnktPXlDA=="
"123"
1708428280
identity payload type in message property.
"program@https://spp.example.org"
"https://spp.example.org/{namespace}/callback/on-search"
Receiver id registered with the calling system; used for authorization, encryption, digital signature verification, etc.
"program@https://pymts.example.org"
"update"
"251c51eb-e970-4e01-a99a-70136c47a934"
Unique identifier of the tenant that could be a department/ulb/state
"pb.jalandhar,dwss"
Formatted program code: PROG/{FINANCIAL_YEAR}/{ULB-CODE}/{AUTO_NUMBER}
"PORG/2023-24/PG.CITYA/00001"
Name of the program
"Community Development Initiative"
Parent Id of a program
"251c51eb-e970-4e01-a99a-70136c47a934"
Description of the program
"Empowering local communities through sustainable development projects."
1672531200
1704067200
"SUCCESSFUL"
"ACTIVE"
Additional JSON property object to hold custom user-defined contextual data
Collection of audit related fields used by most models
username (preferred) or userid of the user that created the object
username (preferred) or userid of the user that last modified the object
epoch of the time object is created
epoch of the time object is last modified
"FC001"
"AC002"
"RSC003"
"ESC004"
"SFC005"
"TSC006"
"INR"
"LC007"
HTTP layer error details
Search allocation by query and return the response
Signature of {header}+{message} body verified using sender's signing public key
"TgE1hcA2E+YPMdPGz4vveKQpR0x+pgzRTlet52qh63Kekr71vWWScXqaRFtQW64uRFZGBUhHYYZQ2y6LffwnNOOQhhssaThhqVBhXNEwX9i75SNYXi5XSJVDYzSyHrhF20HW6RE9mAVWdc80i7d+FXlh+b/U+fnj+SrZ2s6Xd0WUZvU29LgqeUpyznlWLu1mDdJxNZavsDLWmxjTnknqBjDvwSc35WhFDhXDA2lWmm8YpZ1Y6TBmvvtVS7mAOTnhFy9sdCbrLcfXk5QWIsdzlvPqlkvdwEf30OZ6ewb680Aj3hO2OT5LCv7iLyz7C7srnB9lJT5gXiw+eSnktPXlDA=="
"123"
1708428280
identity payload type in message property.
"program@https://spp.example.org"
"https://spp.example.org/{namespace}/callback/on-search"
Receiver id registered with the calling system; used for authorization, encryption, digital signature verification, etc.
"program@https://pymts.example.org"
"search"
Pagination details
Limit for the total number of records in a single search; the max limit should be defined as an environment variable
Offset or page number
Total count for a particular criteria
result sorting order
Sorting order
Sorting order
HTTP layer error details
Return disbursements based on query
"123"
1708428280
identity payload type in message property.
"program@https://spp.example.org"
"https://spp.example.org/{namespace}/callback/on-search"
Receiver id registered with the calling system; used for authorization, encryption, digital signature verification, etc.
"program@https://pymts.example.org"
"search"
Pagination details
Limit for the total number of records in a single search; the max limit should be defined as an environment variable
Offset or page number
Total count for a particular criteria
result sorting order
Sorting order
Sorting order
HTTP layer error details
Enables search of program
Signature of {header}+{message} body verified using sender's signing public key
"TgE1hcA2E+YPMdPGz4vveKQpR0x+pgzRTlet52qh63Kekr71vWWScXqaRFtQW64uRFZGBUhHYYZQ2y6LffwnNOOQhhssaThhqVBhXNEwX9i75SNYXi5XSJVDYzSyHrhF20HW6RE9mAVWdc80i7d+FXlh+b/U+fnj+SrZ2s6Xd0WUZvU29LgqeUpyznlWLu1mDdJxNZavsDLWmxjTnknqBjDvwSc35WhFDhXDA2lWmm8YpZ1Y6TBmvvtVS7mAOTnhFy9sdCbrLcfXk5QWIsdzlvPqlkvdwEf30OZ6ewb680Aj3hO2OT5LCv7iLyz7C7srnB9lJT5gXiw+eSnktPXlDA=="
"123"
1708428280
identity payload type in message property.
"program@https://spp.example.org"
"https://spp.example.org/{namespace}/callback/on-search"
Receiver id registered with the calling system; used for authorization, encryption, digital signature verification, etc.
"program@https://pymts.example.org"
"search"
"251c51eb-e970-4e01-a99a-70136c47a934"
"251c51eb-e970-4e01-a99a-70136c47a934"
"Mukta"
"PG/2023-24/000091"
"pg.citya"
Pagination details
Limit for the total number of records in a single search; the max limit should be defined as an environment variable
Offset or page number
Total count for a particular criteria
result sorting order
Sorting order
Sorting order
HTTP layer error details
Search sanctions by query and return the response
Signature of {header}+{message} body verified using sender's signing public key
"TgE1hcA2E+YPMdPGz4vveKQpR0x+pgzRTlet52qh63Kekr71vWWScXqaRFtQW64uRFZGBUhHYYZQ2y6LffwnNOOQhhssaThhqVBhXNEwX9i75SNYXi5XSJVDYzSyHrhF20HW6RE9mAVWdc80i7d+FXlh+b/U+fnj+SrZ2s6Xd0WUZvU29LgqeUpyznlWLu1mDdJxNZavsDLWmxjTnknqBjDvwSc35WhFDhXDA2lWmm8YpZ1Y6TBmvvtVS7mAOTnhFy9sdCbrLcfXk5QWIsdzlvPqlkvdwEf30OZ6ewb680Aj3hO2OT5LCv7iLyz7C7srnB9lJT5gXiw+eSnktPXlDA=="
"123"
1708428280
identity payload type in message property.
"program@https://spp.example.org"
"https://spp.example.org/{namespace}/callback/on-search"
Receiver id registered with the calling system; used for authorization, encryption, digital signature verification, etc.
"program@https://pymts.example.org"
"search"
Pagination details
Limit for the total number of records in a single search; the max limit should be defined as an environment variable
Offset or page number
Total count for a particular criteria
result sorting order
Sorting order
Sorting order
HTTP layer error details
Enables exchange of program related messages
Signature of {header}+{message} body verified using sender's signing public key
"TgE1hcA2E+YPMdPGz4vveKQpR0x+pgzRTlet52qh63Kekr71vWWScXqaRFtQW64uRFZGBUhHYYZQ2y6LffwnNOOQhhssaThhqVBhXNEwX9i75SNYXi5XSJVDYzSyHrhF20HW6RE9mAVWdc80i7d+FXlh+b/U+fnj+SrZ2s6Xd0WUZvU29LgqeUpyznlWLu1mDdJxNZavsDLWmxjTnknqBjDvwSc35WhFDhXDA2lWmm8YpZ1Y6TBmvvtVS7mAOTnhFy9sdCbrLcfXk5QWIsdzlvPqlkvdwEf30OZ6ewb680Aj3hO2OT5LCv7iLyz7C7srnB9lJT5gXiw+eSnktPXlDA=="
"123"
1708428280
identity payload type in message property.
"program@https://spp.example.org"
"https://spp.example.org/{namespace}/callback/on-search"
Receiver id registered with the calling system; used for authorization, encryption, digital signature verification, etc.
"program@https://pymts.example.org"
"update"
"251c51eb-e970-4e01-a99a-70136c47a934"
Unique identifier of the tenant that could be a department/ulb/state
"pb.jalandhar,dwss"
Formatted program code: PROG/{FINANCIAL_YEAR}/{ULB-CODE}/{AUTO_NUMBER}
"PORG/2023-24/PG.CITYA/00001"
Name of the program
"Community Development Initiative"
Parent Id of a program
"251c51eb-e970-4e01-a99a-70136c47a934"
Description of the program
"Empowering local communities through sustainable development projects."
1672531200
1704067200
"SUCCESSFUL"
"ACTIVE"
Additional JSON property object to hold custom user-defined contextual data
Collection of audit related fields used by most models
username (preferred) or userid of the user that created the object
username (preferred) or userid of the user that last modified the object
epoch of the time object is created
epoch of the time object is last modified
"FC001"
"AC002"
"RSC003"
"ESC004"
"SFC005"
"TSC006"
"INR"
"LC007"
HTTP layer error details
Enables creation of sanctions if not created in the system.
Signature of {header}+{message} body verified using sender's signing public key
"TgE1hcA2E+YPMdPGz4vveKQpR0x+pgzRTlet52qh63Kekr71vWWScXqaRFtQW64uRFZGBUhHYYZQ2y6LffwnNOOQhhssaThhqVBhXNEwX9i75SNYXi5XSJVDYzSyHrhF20HW6RE9mAVWdc80i7d+FXlh+b/U+fnj+SrZ2s6Xd0WUZvU29LgqeUpyznlWLu1mDdJxNZavsDLWmxjTnknqBjDvwSc35WhFDhXDA2lWmm8YpZ1Y6TBmvvtVS7mAOTnhFy9sdCbrLcfXk5QWIsdzlvPqlkvdwEf30OZ6ewb680Aj3hO2OT5LCv7iLyz7C7srnB9lJT5gXiw+eSnktPXlDA=="
"123"
1708428280
identity payload type in message property.
"program@https://spp.example.org"
"https://spp.example.org/{namespace}/callback/on-search"
Receiver id registered with the calling system; used for authorization, encryption, digital signature verification, etc.
"program@https://pymts.example.org"
"create"
"251c51eb-e970-4e01-a99a-70136c47a934"
Unique identifier of the tenant that could be a department/ulb/state
"pb.jalandhar,dwss"
Formatted program code: PROG/{FINANCIAL_YEAR}/{ULB-CODE}/{AUTO_NUMBER}
"PORG/2023-24/PG.CITYA/00001"
"SUCCESSFUL"
Additional JSON property object to hold custom user-defined contextual data
Collection of audit related fields used by most models
username (preferred) or userid of the user that created the object
username (preferred) or userid of the user that last modified the object
epoch of the time object is created
epoch of the time object is last modified
"FC001"
"AC002"
"RSC003"
"ESC004"
"SFC005"
"TSC006"
"INR"
"LC007"
HTTP layer error details
Update created allocation if exists or create new
Signature of {header}+{message} body verified using sender's signing public key
"TgE1hcA2E+YPMdPGz4vveKQpR0x+pgzRTlet52qh63Kekr71vWWScXqaRFtQW64uRFZGBUhHYYZQ2y6LffwnNOOQhhssaThhqVBhXNEwX9i75SNYXi5XSJVDYzSyHrhF20HW6RE9mAVWdc80i7d+FXlh+b/U+fnj+SrZ2s6Xd0WUZvU29LgqeUpyznlWLu1mDdJxNZavsDLWmxjTnknqBjDvwSc35WhFDhXDA2lWmm8YpZ1Y6TBmvvtVS7mAOTnhFy9sdCbrLcfXk5QWIsdzlvPqlkvdwEf30OZ6ewb680Aj3hO2OT5LCv7iLyz7C7srnB9lJT5gXiw+eSnktPXlDA=="
"123"
1708428280
identity payload type in message property.
"program@https://spp.example.org"
"https://spp.example.org/{namespace}/callback/on-search"
Receiver id registered with the calling system; used for authorization, encryption, digital signature verification, etc.
"program@https://pymts.example.org"
"create"
"251c51eb-e970-4e01-a99a-70136c47a934"
Unique identifier of the tenant that could be a department/ulb/state
"pb.jalandhar,dwss"
Formatted program code: PROG/{FINANCIAL_YEAR}/{ULB-CODE}/{AUTO_NUMBER}
"PORG/2023-24/PG.CITYA/00001"
"251c51eb-e970-4e01-a99a-70136c47a934"
"SUCCESSFUL"
Additional JSON property object to hold custom user-defined contextual data
Collection of audit related fields used by most models
username (preferred) or userid of the user that created the object
username (preferred) or userid of the user that last modified the object
epoch of the time object is created
epoch of the time object is last modified
"FC001"
"AC002"
"RSC003"
"ESC004"
"SFC005"
"TSC006"
"INR"
"LC007"
HTTP layer error details
Create new disbursement request to initiate payment.
Signature of {header}+{message} body verified using sender's signing public key
"TgE1hcA2E+YPMdPGz4vveKQpR0x+pgzRTlet52qh63Kekr71vWWScXqaRFtQW64uRFZGBUhHYYZQ2y6LffwnNOOQhhssaThhqVBhXNEwX9i75SNYXi5XSJVDYzSyHrhF20HW6RE9mAVWdc80i7d+FXlh+b/U+fnj+SrZ2s6Xd0WUZvU29LgqeUpyznlWLu1mDdJxNZavsDLWmxjTnknqBjDvwSc35WhFDhXDA2lWmm8YpZ1Y6TBmvvtVS7mAOTnhFy9sdCbrLcfXk5QWIsdzlvPqlkvdwEf30OZ6ewb680Aj3hO2OT5LCv7iLyz7C7srnB9lJT5gXiw+eSnktPXlDA=="
"123"
1708428280
identity payload type in message property.
"program@https://spp.example.org"
"https://spp.example.org/{namespace}/callback/on-search"
Receiver id registered with the calling system; used for authorization, encryption, digital signature verification, etc.
"program@https://pymts.example.org"
"create"
"251c51eb-e970-4e01-a99a-70136c47a934"
Unique identifier of the tenant that could be a department/ulb/state
"pb.jalandhar,dwss"
Formatted program code: PROG/{FINANCIAL_YEAR}/{ULB-CODE}/{AUTO_NUMBER}
"PORG/2023-24/PG.CITYA/00001"
Reference id of the works payment
"EP/0/2023-24/08/14/000267, 251c51eb-e970-4e01-a99a-70136c47a934"
Formatted transaction code; returned after the transaction is created at the other end.
"PI/2023-24/00001,BENF/2023-24/00001"
"251c51eb-e970-4e01-a99a-70136c47a934"
Account number with IFSC code forms the account code, in the format ACCOUNT_NO@IFSC_CODE
"1234567890@SBIN0003491"
Amount to be paid including deduction amount
1000
Actual amount to be paid without deduction
1000
"SUCCESSFUL"
Additional JSON property object to hold custom user-defined contextual data
Collection of audit related fields used by most models
username (preferred) or userid of the user that created the object
username (preferred) or userid of the user that last modified the object
epoch of the time object is created
epoch of the time object is last modified
"FC001"
"AC002"
"RSC003"
"ESC004"
"SFC005"
"TSC006"
"INR"
"LC007"
"251c51eb-e970-4e01-a99a-70136c47a934"
Unique identifier of the tenant that could be a department/ulb/state
"pb.jalandhar,dwss"
Formatted program code: PROG/{FINANCIAL_YEAR}/{ULB-CODE}/{AUTO_NUMBER}
"PORG/2023-24/PG.CITYA/00001"
Reference id of the works payment
"EP/0/2023-24/08/14/000267, 251c51eb-e970-4e01-a99a-70136c47a934"
Formatted transaction code; returned after the transaction is created at the other end.
"PI/2023-24/00001,BENF/2023-24/00001"
"251c51eb-e970-4e01-a99a-70136c47a934"
Account number with IFSC code forms the account code, in the format ACCOUNT_NO@IFSC_CODE
"1234567890@SBIN0003491"
Amount to be paid including deduction amount
1000
Actual amount to be paid without deduction
1000
"SUCCESSFUL"
Additional JSON property object to hold custom user-defined contextual data
Collection of audit related fields used by most models
username (preferred) or userid of the user that created the object
username (preferred) or userid of the user that last modified the object
epoch of the time object is created
epoch of the time object is last modified
"FC001"
"AC002"
"RSC003"
"ESC004"
"SFC005"
"TSC006"
"INR"
"LC007"
HTTP layer error details
| Link |
|---|
| API Swagger document |
| Postman |
| Platform Tools | Latest Version | Used Version | Description | License Type |
|---|---|---|---|---|
| Metabase | v0.40.4 | v0.40.2 | Metabase is an open-source business intelligence tool. It lets you ask questions about your data, and displays answers in formats that make sense, whether that's a bar graph or a detailed table. | |
| Apache Druid | 0.21.1 | 0.21.1 | Apache Druid is a real-time analytics database designed for fast slice-and-dice analytics ("OLAP" queries) on large data sets. Druid is most often used as a database for powering use cases where real-time ingest, fast query performance, and high uptime are important. As such, Druid is commonly used for powering GUIs of analytical applications, or as a backend for highly concurrent APIs that need fast aggregations. Druid works best with event-oriented data. | |
| Apache Kafka | 6.2.0 | 5.4.1 | Apache Kafka is an open-source, community-distributed event streaming platform capable of handling trillions of events a day. | |
| PostgreSQL | 13.4 | 9.6 and 10.6 | PostgreSQL is a powerful, open-source object-relational database system with over 30 years of active development that has earned it a strong reputation for reliability, feature robustness, and performance. | |
Authors: Manish Srivastava & Gautam Ravichander
"In 50 years, every street in London will be buried under nine feet of manure," The Times reported in 1894. At that time, the city of London commuted primarily by horse cart and had 50,000 horses, each producing 15-35 pounds of manure. This was a health nightmare for the city administration.
In 1834, Eugenio Barsanti and Felice Matteucci built the first gas-based internal combustion engine. By 1879, Carl Benz demonstrated his one-cylinder two-stroke unit built on a gas engine. By 1904, the UK was developing a motor vehicle act requiring drivers to have a license, enforcing speed limits, and penalties for reckless driving. Today, the number of registered vehicles across the world is nearing 1.5 billion. London has 2.5 million of these and the city is not covered in manure.
General purpose technologies like internal combustion engines provide a base foundation to accelerate innovation. A well-defined specification evolves around these general-purpose technologies enabling various actors to build interoperable components which can then be combined into various products to meet a variety of needs. Over time market actors compete in building better and cheaper parts with the confidence that it can work with the existing and new products.
However, this is not a blog about horses, manure and automobiles. This is about Public Finance Management.
Public Finance Management drives government and all development programs, be it through the development of public infrastructure, delivery of public services or direct benefits transfers. It does this by making the right amount of money available from various funding sources/agencies to the implementing agencies at the right time - in a fiscally sustainable and responsible manner. In order to do so, funding agencies need information from the implementation agencies on how the money is being spent. So, each flow of money from funding agencies is correlated with the flow of information from implementing agencies in the reverse direction.
Development programs are funded by various stakeholders - national governments, sub-national governments, local governments, development bodies and various multilateral agencies. At a global scale, multilaterals and bilaterals fund a variety of development programs across countries. There are thousands of funding and implementing agencies across the world. Money and information need to flow between agencies seamlessly for accelerated development - especially if we are to meet the sustainable development goals, which have been set back by the recent COVID pandemic and ongoing war.
In a manner of speaking, today's information flow is like the flow of commuters in horse carts in 19th-century London. Existing methods often lead to issues of poor quality and delays in the flow of information. This is the manure that clogs the information highways and impedes development. What is needed today is a general-purpose innovative technology like the internal combustion engine that can transform the flow of information in the public finance management space and unleash development.
We believe that fiscal information data standards equate to general-purpose technology. Fiscal information data standards can substantially increase the velocity and quality of PFM information flow - similar to how email standards (like SMTP, POP3, IMAP) help us exchange emails across the world. Fiscal Information Data standards can streamline the flow of information between funding and implementing agencies around the world. Through these standards, we can start modernising the world of PFM to bring about a more seamless and coordinated way of driving development around the world.
In the next blog, we will discuss how fiscal information exchange can start addressing some of the problems faced by all stakeholders in Public Finance Management.
Create/Add new Department Entity on iFix for a tenant
Details for the new Department Entity + RequestHeader (meta data of the API).
RequestHeader should be used to carry meta information about the requests to the server as described in the fields below. All eGov APIs will use requestHeader as a part of the request body to carry this meta information. Some of this information will be returned back from the server as part of the ResponseHeader in the response body to ensure correlation.
time in epoch
The version of the API
Unique request message id from the caller
Capture the user information
System Generated User id of the authenticated user.
List of roles assigned to a user
List of tenants assigned to a user
Hash describing the current RequestHeader
Unique system generated UUID
Unique tenant identifier
Department id from department master
Unique department entity code
Captures the department entity name
Capture the level of the given department entity
UUID of the child department entity
status of the relationship. In case, the child needs to be removed from the list, mark the status as false.
Collection of audit related fields used by most models
UUID (preferred) or userid of the user that created the object
UUID (preferred) or userid of the user that last modified the object
epoch of the time object is created
epoch of the time object is last modified
Request has been accepted for processing
ResponseHeader should be used to carry metadata information about the response from the server. apiId, ver and msgId in ResponseHeader should always correspond to the same values in respective request's RequestHeader.
response time in epoch
unique response message id (UUID) - will usually be the correlation id from the server
message id of the request
status of request processing
Hash describing the current Request
The version of the API
Unique system generated UUID
Unique tenant identifier
Department id from department master
Unique department entity code
Captures the department entity name
Capture the level of the given department entity
UUID of the child department entity
status of the relationship. In case, the child needs to be removed from the list, mark the status as false.
Collection of audit related fields used by most models
UUID (preferred) or userid of the user that created the object
UUID (preferred) or userid of the user that last modified the object
epoch of the time object is created
epoch of the time object is last modified
Based on the criteria get the list of department entities.
RequestHeader meta data.
RequestHeader should be used to carry meta information about the requests to the server as described in the fields below. All eGov APIs will use requestHeader as a part of the request body to carry this meta information. Some of this information will be returned back from the server as part of the ResponseHeader in the response body to ensure correlation.
time in epoch
The version of the API
Unique request message id from the caller
Capture the user information
System Generated User id of the authenticated user.
List of roles assigned to a user
List of tenants assigned to a user
Hash describing the current RequestHeader
The object contains all the search criteria of Department Entity
Department Entity Ids
Tenant Id
Department id from department master
Unique department entity code
Captures the department entity name
Capture the level of department entity
If set to true, it will return all the department entity hierarchy details starting from the root to the specified department entity id.
Successful response
ResponseHeader should be used to carry metadata information about the response from the server. apiId, ver and msgId in ResponseHeader should always correspond to the same values in respective request's RequestHeader.
response time in epoch
unique response message id (UUID) - will usually be the correlation id from the server
message id of the request
status of request processing
Hash describing the current Request
The version of the API
Unique system generated UUID
Unique tenant identifier
Department id from department master
Unique department entity code
Captures the department entity name
Capture the level of the given department entity
UUID of the child department entity
Status of the relationship. If the child needs to be removed from the list, mark the status as false.
Collection of audit related fields used by most models
UUID (preferred) or userid of the user that created the object
UUID (preferred) or userid of the user that last modified the object
epoch of the time object is created
epoch of the time object is last modified
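A similar sketch for the department entity search call, assuming the criteria fields implied by the descriptions above (ids, tenant id, department id, code, name, hierarchy level and the ancestor flag); again, the names and path are assumptions for illustration.

```python
# Hedged sketch: searching department entities by criteria.
# Criteria field names and the endpoint path are assumptions for illustration.
import requests

request_header = {"ts": 0, "version": "1.0.0", "msgId": "search-dept-entity-001"}

criteria = {
    "tenantId": "pb",
    "departmentId": "<department-uuid>",
    "code": "DWSS-ZONE-1",
    "getAncestors": True,   # return the hierarchy from the root down to the matched entity
}

resp = requests.post(
    "https://<ifix-host>/department-entity-service/departmentEntity/v1/_search",
    json={"requestHeader": request_header, "criteria": criteria},
)
for entity in resp.json().get("departmentEntity", []):
    print(entity.get("code"), entity.get("hierarchyLevel"))
```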
Create/add a new DepartmentHierarchyLevel on iFIX for a tenant
Details for the new DepartmentHierarchyLevel + RequestHeader (metadata of the API).
RequestHeader should be used to carry meta information about the request to the server, as described in the fields below. All eGov APIs use the RequestHeader as part of the request body to carry this meta information. Some of this information is returned by the server as part of the ResponseHeader in the response body to ensure correlation.
time in epoch
The version of the API
Unique request message id from the caller
Capture the user information
System Generated User id of the authenticated user.
List of roles assigned to a user
List of tenants assigned to a user
Hash describing the current RequestHeader
This object captures the information for a level of the department hierarchy and its alias
Unique system generated UUID
Unique tenant identifier
Department id from department master
Unique department hierarchy level code
"state, zone, city etc"
Captures the parent department hierarchy level id (UUID). The root level has no parent, and there can be only one root element for a given department.
The level of the current DepartmentHierarchyLevel is set to one greater than its parent's level
2
Collection of audit related fields used by most models
UUID (preferred) or userid of the user that created the object
UUID (preferred) or userid of the user that last modified the object
epoch of the time object is created
epoch of the time object is last modified
Request has been accepted for processing
ResponseHeader should be used to carry metadata about the response from the server. The apiId, ver and msgId in the ResponseHeader should always correspond to the same values in the respective request's RequestHeader.
response time in epoch
unique response message id (UUID) - will usually be the correlation id from the server
message id of the request
status of request processing
Hash describing the current Request
The version of the API
Unique system generated UUID
Unique tenant identifier
Department id from department master
Unique department hierarchy level code
"state, zone, city etc"
Captures the parent department hierarchy level id (UUID). The root level has no parent, and there can be only one root element for a given department.
The level of the current DepartmentHierarchyLevel is set to one greater than its parent's level
2
Collection of audit related fields used by most models
UUID (preferred) or userid of the user that created the object
UUID (preferred) or userid of the user that last modified the object
epoch of the time object is created
epoch of the time object is last modified
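The level rule described above (a child hierarchy level sits one greater than its parent's, with a single parent-less root per department) can be sketched as follows; the field names and the assumption that the root starts at level 1 are illustrative only.

```python
# Hedged sketch: deriving the level of a new DepartmentHierarchyLevel from its parent.
# Field names are assumptions based on the descriptions above; the server may also
# derive the level itself on create.

def new_hierarchy_level(tenant_id, department_id, code, label, parent=None):
    """Build a hierarchy-level payload; the root level has no parent (level 1 assumed)."""
    level = 1 if parent is None else parent["level"] + 1
    return {
        "tenantId": tenant_id,
        "departmentId": department_id,
        "code": code,      # unique department hierarchy level code
        "label": label,    # e.g. state, zone, city
        "parent": parent["id"] if parent else None,
        "level": level,    # one greater than the parent's level
    }

state = new_hierarchy_level("pb", "<department-uuid>", "STATE", "state")
zone = new_hierarchy_level("pb", "<department-uuid>", "ZONE", "zone",
                           parent={**state, "id": "<state-uuid>"})
print(zone["level"])  # 2
```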
Get the list of Department Hierarchy Levels based on the search criteria.
RequestHeader metadata.
RequestHeader should be used to carry meta information about the request to the server, as described in the fields below. All eGov APIs use the RequestHeader as part of the request body to carry this meta information. Some of this information is returned by the server as part of the ResponseHeader in the response body to ensure correlation.
time in epoch
The version of the API
Unique request message id from the caller
Capture the user information
System Generated User id of the authenticated user.
List of roles assigned to a user
List of tenants assigned to a user
Hash describing the current RequestHeader
The object contains all the search criteria of the Department Hierarchy Level
Department Hierarchy Level Ids
Tenant Id
Department id from department master
Unique department hierarchy label like state, district, etc.
The level of the department hierarchy level
Successful response
ResponseHeader should be used to carry metadata about the response from the server. The apiId, ver and msgId in the ResponseHeader should always correspond to the same values in the respective request's RequestHeader.
response time in epoch
unique response message id (UUID) - will usually be the correlation id from the server
message id of the request
status of request processing
Hash describing the current Request
The version of the API
Unique system generated UUID
Unique tenant identifier
Department id from department master
Unique department hierarchy level code
"state, zone, city etc"
Captures the parent department hierarchy level id (UUID). The root level has no parent, and there can be only one root element for a given department.
The level of the current DepartmentHierarchyLevel is set to one greater than its parent's level
2
Collection of audit related fields used by most models
UUID (preferred) or userid of the user that created the object
UUID (preferred) or userid of the user that last modified the object
epoch of the time object is created
epoch of the time object is last modified
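As a small illustration of how a client might consume the search response, the sketch below orders the returned hierarchy levels from root to leaf using the level field described above; the field names are assumptions.

```python
# Hedged sketch: ordering department hierarchy levels from root to leaf.
# Assumes each returned level carries "label" and "level" as described above.

def order_levels(levels):
    """Return hierarchy levels sorted from the root (lowest level) downwards."""
    return sorted(levels, key=lambda lvl: lvl["level"])

levels = [
    {"label": "zone", "level": 2},
    {"label": "state", "level": 1},
    {"label": "city", "level": 3},
]
print([lvl["label"] for lvl in order_levels(levels)])  # ['state', 'zone', 'city']
```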
Update an existing Department Entity on iFIX for a tenant
Details for the updated Department Entity + RequestHeader (metadata of the API).
RequestHeader should be used to carry meta information about the request to the server, as described in the fields below. All eGov APIs use the RequestHeader as part of the request body to carry this meta information. Some of this information is returned by the server as part of the ResponseHeader in the response body to ensure correlation.
time in epoch
The version of the API
Unique request message id from the caller
Capture the user information
System Generated User id of the authenticated user.
List of roles assigned to a user
List of tenants assigned to a user
Hash describing the current RequestHeader
Unique system generated UUID
Unique tenant identifier
Department id from department master
Unique department entity code
Captures the department entity name
Capture the level of the given department entity
UUID of the child department entity
Status of the relationship. If the child needs to be removed from the list, mark the status as false.
Collection of audit related fields used by most models
UUID (preferred) or userid of the user that created the object
UUID (preferred) or userid of the user that last modified the object
epoch of the time object is created
epoch of the time object is last modified
Request has been accepted for processing
ResponseHeader should be used to carry metadata about the response from the server. The apiId, ver and msgId in the ResponseHeader should always correspond to the same values in the respective request's RequestHeader.
response time in epoch
unique response message id (UUID) - will usually be the correlation id from the server
message id of the request
status of request processing
Hash describing the current Request
The version of the API
Unique system generated UUID
Unique tenant identifier
Department id from department master
Unique department entity code
Captures the department entity name
Capture the level of the given department entity
UUID of the child department entity
Status of the relationship. If the child needs to be removed from the list, mark the status as false.
Collection of audit related fields used by most models
UUID (preferred) or userid of the user that created the object
UUID (preferred) or userid of the user that last modified the object
epoch of the time object is created
epoch of the time object is last modified
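One behaviour worth calling out in the update call is the child status flag described above: to detach a child entity, the relationship stays in the list but is marked inactive rather than being deleted outright. A minimal sketch with assumed field names:

```python
# Hedged sketch: detaching a child entity during an update by flipping its status.
# The "children" shape and the "status" field name are assumptions based on the
# descriptions above.
updated_entity = {
    "id": "<entity-uuid>",
    "tenantId": "pb",
    "code": "DWSS-ZONE-1",
    "children": [
        {"id": "<child-uuid-1>", "status": True},   # relationship stays active
        {"id": "<child-uuid-2>", "status": False},  # marked false, so this child is removed
    ],
}
```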
Create/add a new COA on iFIX for a tenant
Details for the new COA + RequestHeader (metadata of the API).
RequestHeader should be used to carry meta information about the request to the server, as described in the fields below. All eGov APIs use the RequestHeader as part of the request body to carry this meta information. Some of this information is returned by the server as part of the ResponseHeader in the response body to ensure correlation.
time in epoch
The version of the API
Unique request message id from the caller
Capture the user information
System Generated User id of the authenticated user.
List of roles assigned to a user
List of tenants assigned to a user
Hash describing the current RequestHeader
Captures the COA data as a map
Unique system generated UUID
Chart of account concatenated string
Unique tenant identifier
Capture the major head code
Capture the major head code name
Capture the major head code type
"Revenue"
Capture the sub major head code
Capture the sub major head code name
Capture the minor head code
Capture the minor head code name
Capture the sub head code
Capture the sub head code name
Capture the group head code
Capture the group head code name
Capture the object head code
Capture the object head code name
Collection of audit related fields used by most models
UUID (preferred) or userid of the user that created the object
UUID (preferred) or userid of the user that last modified the object
epoch of the time object is created
epoch of the time object is last modified
Request has been accepted for processing
ResponseHeader should be used to carry metadata about the response from the server. The apiId, ver and msgId in the ResponseHeader should always correspond to the same values in the respective request's RequestHeader.
response time in epoch
unique response message id (UUID) - will usually be the correlation id from the server
message id of the request
status of request processing
Hash describing the current Request
The version of the API
Unique system generated UUID
Chart of account concatenated string
Unique tenant identifier
Capture the major head code
Capture the major head code name
Capture the major head code type
"Revenue"
Capture the sub major head code
Capture the sub major head code name
Capture the minor head code
Capture the minor head code name
Capture the sub head code
Capture the sub head code name
Capture the group head code
Capture the group head code name
Capture the object head code
Capture the object head code name
Collection of audit related fields used by most models
UUID (preferred) or userid of the user that created the object
UUID (preferred) or userid of the user that last modified the object
epoch of the time object is created
epoch of the time object is last modified
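Because the chart-of-account code is described as a concatenated string of the head codes, the sketch below shows one plausible way a client could assemble the payload; the hyphen-separated ordering, the sample head codes and the field names are assumptions for illustration, not the contract's definition.

```python
# Hedged sketch: assembling a COA payload and its concatenated code.
# The hyphen-separated ordering and sample codes below are assumptions for illustration.
heads = {
    "majorHead": "0215",    # major head code
    "subMajorHead": "01",   # sub major head code
    "minorHead": "102",     # minor head code
    "subHead": "00",        # sub head code
    "groupHead": "000",     # group head code
    "objectHead": "01",     # object head code
}

coa = {
    "tenantId": "pb",
    "coaCode": "-".join(heads.values()),  # chart of account concatenated string
    "majorHeadType": "Revenue",           # e.g. Revenue
    **heads,
}
print(coa["coaCode"])  # 0215-01-102-00-000-01
```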
Get the list of Chart of Accounts (COA) based on the search criteria.
RequestHeader metadata.
RequestHeader should be used to carry meta information about the request to the server, as described in the fields below. All eGov APIs use the RequestHeader as part of the request body to carry this meta information. Some of this information is returned by the server as part of the ResponseHeader in the response body to ensure correlation.
time in epoch
The version of the API
Unique request message id from the caller
Capture the user information
System Generated User id of the authenticated user.
List of roles assigned to a user
List of tenants assigned to a user
Hash describing the current RequestHeader
The object contains all the search criteria of the Chart of Account (COA)
Tenant Id
List of COA ids
Chart of account concatenated string
Search by major head
Search by sub major head
Search by minor head
Search by sub head
Search by group head
Search by object head
Successful response
ResponseHeader should be used to carry metadata about the response from the server. The apiId, ver and msgId in the ResponseHeader should always correspond to the same values in the respective request's RequestHeader.
response time in epoch
unique response message id (UUID) - will usually be the correlation id from the server
message id of the request
status of request processing
Hash describing the current Request
The version of the API
Unique system generated UUID
Chart of account concatenated string
Unique tenant identifier
Capture the major head code
Capture the major head code name
Capture the major head code type
"Revenue"
Capture the sub major head code
Capture the sub major head code name
Capture the minor head code
Capture the minor head code name
Capture the sub head code
Capture the sub head code name
Capture the group head code
Capture the group head code name
Capture the object head code
Capture the object head code name
Collection of audit related fields used by most models
UUID (preferred) or userid of the user that created the object
UUID (preferred) or userid of the user that last modified the object
epoch of the time object is created
epoch of the time object is last modified
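Finally, a minimal sketch of a COA search by head codes, again with assumed criteria field names and an assumed endpoint path:

```python
# Hedged sketch: searching the chart of accounts by head codes.
# Criteria field names and the endpoint path are illustrative assumptions.
import requests

criteria = {
    "tenantId": "pb",
    "majorHead": "0215",   # search by major head
    "minorHead": "102",    # search by minor head
}

resp = requests.post(
    "https://<ifix-host>/master-data-service/chartOfAccount/v1/_search",
    json={"requestHeader": {"msgId": "search-coa-001"}, "criteria": criteria},
)
for coa in resp.json().get("chartOfAccounts", []):
    print(coa.get("coaCode"))
```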