This section contains the documents and information required to customize DIGIT. It details how to customize the DIGIT Urban user interface and services to meet local government or user requirements effectively.
In DIGIT, most of the applications are RESTful services that follow the project structure below.
(Note: The above structure image is for reference only)
Config: This package contains the configuration code and data for the application. The classes defined here read values from the property file. While enhancing the service, if new values are added to the property file, those values are accessed through the config class defined in this package.
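For illustration, here is a minimal sketch of such a config class, assuming hypothetical property keys egov.idgen.host and egov.idgen.path in application.properties:

```java
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

// Minimal sketch of a config class; the property keys are hypothetical.
@Component
public class ServiceConfiguration {

    // Reads the value of egov.idgen.host from application.properties;
    // any new property added while enhancing the service gets a field here.
    @Value("${egov.idgen.host}")
    private String idGenHost;

    @Value("${egov.idgen.path}")
    private String idGenPath;

    public String getIdGenHost() { return idGenHost; }

    public String getIdGenPath() { return idGenPath; }
}
```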
Consumer: This package contains the Kafka consumer program, which consumes the messages published by the producer by subscribing to the relevant topic. Any new consumer must be added to this package only. Refer to this document for writing a new consumer.
Producer: This package contains the Kafka producer program, which pushes data onto a topic; the consumer then consumes those messages by subscribing to the same topic.
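As a minimal sketch, a producer push can look like the following, using Spring's KafkaTemplate directly; the class and method names are hypothetical, and an actual DIGIT service may route this through a shared producer wrapper:

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;

// Minimal sketch of a producer; class and method names are hypothetical.
@Service
public class TradeLicenseProducer {

    @Autowired
    private KafkaTemplate<String, Object> kafkaTemplate;

    // Pushes the given value onto the given topic; a consumer
    // subscribed to the same topic will receive it.
    public void push(String topic, Object value) {
        kafkaTemplate.send(topic, value);
    }
}
```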
Repository: This package is the data access layer, responsible for retrieving data for the domain model; it depends only on the model layer. The classes defined here can fetch data from the database or from other microservices via RESTful calls. Any update to a data retrieval query must be made in the classes of this package.
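A minimal sketch of a repository method, assuming a hypothetical eg_tl_tradelicense table; any change to the retrieval query would be made here:

```java
import java.util.List;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Repository;

// Minimal sketch of a repository class; table and column names are hypothetical.
@Repository
public class TradeLicenseRepository {

    @Autowired
    private JdbcTemplate jdbcTemplate;

    // The retrieval query lives here, not in the service or controller layers
    public List<String> getLicenseNumbers(String tenantId) {
        String query = "SELECT licensenumber FROM eg_tl_tradelicense WHERE tenantid = ?";
        return jdbcTemplate.queryForList(query, String.class, tenantId);
    }
}
```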
Service & Util: This package is the service layer of the application. The implementation classes defined here contain the business logic of the application and the functionality behind the APIs. Any modification to API functionality or business logic must be made in the classes of this package. For example, if you need to modify an API, create a new class or util function in a new file in the service or util package and call that function from the API code, instead of writing the logic directly in the API file (see the sketch below).
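A minimal sketch of that pattern, with a hypothetical util function that the API code calls instead of holding the logic itself:

```java
import org.springframework.stereotype.Component;

// Minimal sketch of a new util added while enhancing the service;
// the class name and the masking rule are hypothetical.
@Component
public class TradeLicenseUtil {

    // New helper the API code delegates to, rather than doing this inline
    public String maskMobileNumber(String mobileNumber) {
        if (mobileNumber == null || mobileNumber.length() < 4)
            return mobileNumber;
        return "XXXXXX" + mobileNumber.substring(mobileNumber.length() - 4);
    }
}
```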
Validation: This package contains the validation classes that vet requests before the application processes them. While enhancing a service, any new validation method must be added to this package.
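A minimal sketch of a new validation method, with a hypothetical class name and rule:

```java
import org.springframework.stereotype.Component;

// Minimal sketch of a validator; the class name and rule are hypothetical.
@Component
public class TradeLicenseValidator {

    // New validation method added while enhancing the service
    public void validateMobileNumber(String mobileNumber) {
        if (mobileNumber == null || !mobileNumber.matches("\\d{10}"))
            throw new IllegalArgumentException("Mobile number must be a 10-digit value");
    }
}
```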
Models: The various models of the application are organised under the models package. This is the domain layer: the POJO classes live here and must be updated whenever the service contract changes, and the corresponding row mapper class must be updated to match.
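A minimal sketch of a POJO and its row mapper being updated together, assuming a hypothetical new applicantName field added to the service contract:

```java
import java.sql.ResultSet;
import java.sql.SQLException;
import lombok.Data;
import org.springframework.jdbc.core.RowMapper;

// POJO updated whenever the service contract changes;
// the applicantName field is a hypothetical addition.
@Data
public class TradeLicense {
    private String licenseNumber;
    private String applicantName;
}

// Row mapper updated in step with the POJO so the new column is read
class TradeLicenseRowMapper implements RowMapper<TradeLicense> {
    @Override
    public TradeLicense mapRow(ResultSet rs, int rowNum) throws SQLException {
        TradeLicense license = new TradeLicense();
        license.setLicenseNumber(rs.getString("licensenumber"));
        license.setApplicantName(rs.getString("applicantname")); // new contract field
        return license;
    }
}
```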
Controller: The most important part is the controller layer. It binds everything together, from the moment a request is intercepted until the response is prepared and sent back. It lives in the controller package; best practice is to keep this layer versioned so that different versions of the application can expose different features. Refer to this document for API do's and don'ts.
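A minimal sketch of a versioned controller, with hypothetical paths and types, that only delegates to the service layer:

```java
import java.util.HashMap;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

// Minimal sketch of a versioned controller; paths and types are hypothetical.
@RestController
@RequestMapping("/tradelicense/v1")
public class TradeLicenseController {

    @PostMapping("/_create")
    public HashMap<String, Object> create(@RequestBody HashMap<String, Object> request) {
        // A real controller would delegate to the service layer here;
        // the request is echoed back only to keep the sketch self-contained.
        return request;
    }
}
```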
DB.migration.main: This package contains the Flyway migration scripts used to set up the application, for example database scripts. Any change to a database table of this service requires a new script in this package.
Note: 1) Never overwrite previously created scripts; always create a new one as required. 2) The file name should follow the naming convention V<timestamp>__<purpose>.sql, e.g. V20200717125510__create_table.sql
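For example, a new table created while enhancing the service would go into a fresh script such as the following (hypothetical table and columns), rather than an edit of an existing script:

```sql
-- V20200717125510__create_table.sql (hypothetical example)
CREATE TABLE IF NOT EXISTS eg_tl_tradelicense (
    id            VARCHAR(64) PRIMARY KEY,
    tenantid      VARCHAR(256) NOT NULL,
    licensenumber VARCHAR(64)
);
```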
If a change here requires the retrieval query to change, the classes in the repository package must be updated accordingly, and the persister file associated with this service must also be updated.
application.properties: The application.properties file is simple key-value storage for configuration properties. You can package the configuration file in your application jar, or put the file in the file system of the runtime environment and load it on Spring Boot startup. This file contains the values of:
server port
server context path
Kafka server configuration value
Kafka producer and the consumer configuration value
Kafka topic
External service paths, and many more.

Whatever changes are made in the application.properties file must also be reflected in the values.yaml file (for reference only) of the particular service.
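A minimal sketch of what the file can look like; all keys and values below are hypothetical examples of the categories listed above:

```properties
# Hypothetical illustration only; actual keys vary per service
server.port=8080
server.contextPath=/tl-services
spring.kafka.consumer.group-id=tl-services
persister.save.tradelicense.topic=save-tl-tradelicense
egov.idgen.host=http://egov-idgen:8080/
```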
After enhancing the service, details of the new feature must be recorded in the README.md, LOCALSETUP.md and CHANGELOG.md files, and the service version in the pom.xml file must be incremented, e.g. 1.0.0 to 1.0.1 or 1.0.0 to 1.1.0 depending on the level of the enhancement.
A Kafka producer publishes messages on a given topic. A Kafka consumer is a program which consumes the published messages by subscribing to the topic; a single consumer can subscribe to multiple topics. Whenever the topic receives a new message, the consumer can process it by calling the functions defined for it. The following snippet is a sample which defines a class called TradeLicenseConsumer containing a function called listen(), which is subscribed to the save-tl-tradelicense topic and calls a function to generate a notification whenever the consumer receives a new message.
```java
@Slf4j
@Component
public class TradeLicenseConsumer {

    private TLNotificationService notificationService;

    @Autowired
    public TradeLicenseConsumer(TLNotificationService notificationService) {
        this.notificationService = notificationService;
    }

    @KafkaListener(topics = {"save-tl-tradelicense"})
    public void listen(final HashMap<String, Object> record,
                       @Header(KafkaHeaders.RECEIVED_TOPIC) String topic) {
        notificationService.sendNotification(record);
    }
}
```
The @KafkaListener annotation is used to create a consumer. Whenever a function carries this annotation it acts as a Kafka consumer, and each message goes through the flow defined inside that function. The topic name should be picked up from the application.properties file. This can be done as shown below:
```java
@KafkaListener(topics = {"${persister.update.tradelicense.topic}"})
```
where persister.update.tradelicense.topic is the key for the topic name in application.properties.
Whenever a new message is published on this topic, it is consumed by the listen() function, which calls sendNotification() with the message as the argument. Deserialization is controlled by the following two properties in application.properties:
```properties
spring.kafka.consumer.value-deserializer
spring.kafka.consumer.key-deserializer
```
The first property sets the deserializer for the value, while the second one sets it for the key. Depending on the deserializer we have set, we can expect the argument in that format in our consumer function. For example, we can set the value deserializer to HashMapDeserializer and the key deserializer to StringDeserializer like below:
```properties
spring.kafka.consumer.value-deserializer=org.egov.tracer.kafka.deserializer.HashMapDeserializer
spring.kafka.consumer.key-deserializer=org.apache.kafka.common.serialization.StringDeserializer
```
Then we can write our consumer function expecting a HashMap as an argument, like below:
```java
public void listen(final HashMap<String, Object> record) {...}
```
APIs developed on DIGIT follow certain conventions and principles. The aim of this document is to provide some do's and don'ts to keep in mind while following those principles.
Always define the YAML for your APIs as the first thing, using the OpenAPI 3 standard
API paths should be standardised as follows (a YAML sketch of these conventions appears after this list):
/{service}/{entity}/{version}/_create: This endpoint should be used to create the entity
/{service}/{entity}/{version}/_update: This endpoint should be used to edit an entity that already exists
/{service}/{entity}/{version}/_search: This endpoint should be used to provide search on the entity based on certain criteria
/{service}/{entity}/{version}/_count: This endpoint should be provided to give a count of entities that match a given search criteria
Always use POST for each of the endpoints
Take most search parameters in the POST body only
If query params need to be supported for search, make sure the same parameters are also present in the POST body; the POST body should take priority over the query params
Provide an additionalDetails object in the _create and _update APIs so that custom requirements can use these fields
Each API should have a RequestInfo object in the request body at the top level
Each API should have a ResponseInfo object in the response body at the top level
Keep mandatory fields to a minimum in the APIs
minLength and maxLength should be defined for each attribute
Read-only fields should be called out
Use common models already available in the platform in your APIs. Ex -
User (Citizen or Employee or Owner)
Error (response sent in case of errors)
TODO: Add all the models here
For receiving files in an API, don’t use binary file data. Instead, accept the file store ids
If there is only one file to be uploaded, no persistence is needed, and no additional JSON data is to be posted, you can consider using direct file upload instead of a filestore id
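A minimal OpenAPI 3 sketch of the conventions above, for a hypothetical asset entity; only the _create path is shown:

```yaml
# Hypothetical sketch; service, entity and schema names are examples only
openapi: 3.0.0
info:
  title: Asset Service
  version: 1.0.0
paths:
  /asset-services/asset/v1/_create:   # /{service}/{entity}/{version}/_create
    post:                             # always POST
      requestBody:
        content:
          application/json:
            schema:
              type: object
              required:
                - RequestInfo
              properties:
                RequestInfo:          # common platform model at the top level
                  type: object
                Asset:
                  type: object
                  properties:
                    additionalDetails:  # escape hatch for custom requirements
                      type: object
      responses:
        '200':
          description: Created asset, with ResponseInfo at the top level
```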
All content on this page by eGov Foundation is licensed under a Creative Commons Attribution 4.0 International License.