Contains the latest hotfixed builds of the indexer, gateway, and workflow services.

Services | Docker Artefact ID |
---|---|
Indexer | egovio/egov-indexer-db:2.9.1-c781a2f714-65 |
Gateway | egovio/gateway:gateway-2.9.2-a916a090e6-40 |
Workflow | egovio/egov-workflow-v2-db:2.9.1-80b58dc788-15 |

Test cases for various core services that were tested as part of DIGIT-2.9-LTS. These test cases can serve as benchmark tests for breaking changes in case any modification is made on top of the existing services.
Click here to access the Test Scenarios.
Click here to access the Postman Collection for Automation Scripts.
DIGIT 2.9 represents the most recent Long-Term Support (LTS) version, offering a stable and reliable foundation. This version emphasises enhanced security measures, improved system stability, streamlined deployment processes, simplified configuration, and comprehensive documentation. Periodic updates, encompassing both minor adjustments and significant enhancements, will be released as necessary. Support for this iteration of DIGIT will be available for the upcoming five years, with a clear migration guide available to facilitate the transition to subsequent LTS versions once the current support period concludes.
Extended Support: LTS versions come with an extended period of support, which includes security updates, bug fixes, and sometimes even minor feature enhancements. This extended support period will last for 5 years, which means that users do not need to upgrade to newer versions frequently to receive critical updates.
Stability: The LTS release is typically more stable than regular releases because it undergoes more extensive testing and bug fixing.
Compatibility: With the extended support period, the LTS release ensures better compatibility with third-party applications and hardware over time. Developers and vendors have a stable base to target, reducing the frequency of compatibility issues.
Reduced Costs: For businesses, the reduced need for frequent upgrades can translate into lower IT costs. Upgrading to a new version often involves significant effort in testing, deployment, and sometimes even hardware upgrades. LTS releases help spread these costs over a longer period.
Predictability: The LTS release provides a predictable upgrade path, making it easier for organisations to plan their IT infrastructure, training, and budgets. Knowing the support timeline in advance helps with strategic planning and resource allocation.
Focus on Core Features and Performance: Since the LTS release is not focused on adding new features aggressively, the bulk of efforts are spent on optimising for performance and reliability. This focus benefits users who need a solid and efficient system rather than the latest features.
Community and Vendor Support: LTS releases will often have a larger user base, which means a more extensive community support network is available.
Infra/Backbone upgrade:
- Postgres upgrade
- Kafka upgrade
- Redis upgrade
- Kubernetes upgrade
- Elasticsearch upgrade
Use of helm file for deployment management
Upgrade of core service dependencies
Upgrade to DIGIT libraries
Spring Cloud Gateway
Test Automation script
Single-click deployment using GitHub Actions
MDMS V2
Workbench (MDMS UI)
Boundary Service (Beta)
Admin UI to configure Master Data
Functionality to define schema and attribute validation for master data
Maintain master data in the database
Hot reload of MDMS data
Configurable Rate limiting for services using helm
Deployment of DIGIT without any local tool setup/configuration (via browser)
Ability to create and link boundary nodes using UI/APIs
Geospatial queries like proximity search to locate boundary nodes
Filestore Service: Fixed flow to store and retrieve files from Azure blob storage.
PDF Service: Fixed bug where _create API creates duplicate PDF for each request and fixed Kafka message getting pushed on a single partition.
SMS Notification Service: Added an API for tracking SMS bounce-backs.
Mail Notification Service: Added support for attachments.
User OTP Service: Added support for sending OTP via email by emitting email events.
User Service: Added fallback to default message if user email update localization messages are not configured.
Workflow Service: Introduced state-level business service fallback as part of v2 business service search API.
Dashboard Analytics: Introduced feature to perform DIVISION operation on metric chart responses.
Location Service: Fixed bug where search on root level boundary type yields empty search response.
Privacy Exemplar: Data privacy changes for masking/unmasking PII data were introduced as part of this exemplar.
Encryption Client Library: As part of privacy changes, the enc-client library was introduced to support encryption-related functionalities.
Codegen: Enhanced the codegen utility to support OpenAPI 3.0 specifications.
Persister: Enhanced persister to make it compatible with Signed Audit Service.
Human Resource Management Service: Fixed bug where employee search causes server error in single instance clusters.
DIGIT Developer Guide: Backend and Frontend guides along with a sample birth registration spring-boot module, citizen and employee React modules were developed as part of this guide.
DIGIT Installer Enhancement: DIGIT Installer was simplified and a detailed tutorial document was created for installing DIGIT services on AWS. (Note: Simplified DIGIT Installer has not been merged to master yet and is being released separately because many of our clusters are still running on legacy DevOps configurations)
Upgraded all helm charts to support Kubernetes version upgrade from 1.20 to 1.28.
Installation Guide for DIGIT via GitHub Actions in AWS
This guide provides step-by-step instructions for installing DIGIT using GitHub Actions within an AWS environment.
AWS account
Github account
Create an IAM User in your AWS account.
Generate an ACCESS_KEY and SECRET_KEY for the IAM user.
Assign Administrator Access to the IAM user for necessary permissions.
Fork the Repository into your organization account on GitHub.
Navigate to the repository settings, then to Secrets and Variables, and add the following repository secrets:
AWS_ACCESS_KEY_ID: <GENERATED_ACCESS_KEY>
AWS_SECRET_ACCESS_KEY: <GENERATED_SECRET_KEY>
AWS_DEFAULT_REGION: ap-south-1
AWS_REGION: ap-south-1
Enable GitHub Actions
Open the GitHub Actions workflow file.
Specify the branch name you wish to enable GitHub Actions for.
Navigate to infra-as-code/terraform/sample-aws.
Open input.yaml and enter details such as domain_name, cluster_name, bucket_name, and db_name.
Navigate to config-as-code/environments.
Open egov-demo-secrets.yaml.
Enter db_password and ssh_private_key. Add the public_key to your GitHub account.
Choose one of the following methods to generate an SSH key pair:
Method a: Use an online website (Note: This is not recommended for production setups, only for demo purposes): https://8gwifi.org/sshfunctions.jsp
Method b: Use OpenSSL commands:
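For instance, the key pair can be generated locally; the sketch below uses OpenSSL together with ssh-keygen, and the filenames are placeholders:

```sh
# Generate a 2048-bit RSA private key (filename is illustrative)
openssl genrsa -out digit_demo_key 2048
# Derive the matching OpenSSH public key to add to your GitHub account
ssh-keygen -y -f digit_demo_key > digit_demo_key.pub
```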
After entering all the details, push these changes to the remote GitHub repository. Open the Actions tab in your GitHub account to view the workflow. You should see that the workflow has started and the pipelines complete successfully.
This indicates that your setup is correctly configured, and your application is ready to be deployed. Monitor the output of the workflow for any errors or success messages to ensure everything is functioning as expected.
As you wrap up your work with DIGIT, ensuring a smooth and error-free cleanup of the resources is crucial. Regular monitoring of the GitHub Actions workflow's output is essential during the destruction process. Watch out for any error messages or signs of issues. A successful job completion will be confirmed by a success message in the GitHub Actions window, indicating that the infrastructure has been effectively destroyed.
When you're ready to remove DIGIT and clean up the resources it created, proceed with executing the terraform_infra_destruction job. This action is designed to dismantle all setup resources, clearing the environment neatly.
We hope your experience with DIGIT was positive and that this guide makes the uninstallation process straightforward.
To initiate the destruction of a Terraform-managed infrastructure, follow the steps below:
Navigate to Actions
Click DIGIT-Install workflow
Select Run workflow
When prompted, type "destroy". This action starts the terraform_infra_destruction job.
You can observe the progress of the destruction job in the actions window.
Note: The workflow above assumes DIGIT configurations created using the master branch. If DIGIT is installed from a different branch, ensure that the branch name is correctly specified in the workflow file. For instance, if the installation is done from the digit-install branch, the branch reference in the workflow should be updated to reflect that, as in the sketch below.
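An illustrative fragment of the workflow trigger (the actual file layout in the repository may differ):

```yaml
on:
  push:
    branches:
      - digit-install  # replace with the branch DIGIT was installed from
```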
S.no. | Service | Description of the fix |
---|---|---|
1 | PDF Service | Fixed bug where the _create API creates a duplicate PDF for each request. |
2 | PDF Service | Fixed issue where the PDF service was pushing data to only one Kafka topic partition. |
3 | User Service | Fixed bug where updating a citizen profile causes a server error; fixed bug where employee details are updatable via the citizen profile update API. |
4 | Location Service | Fixed bug where search on a root-level boundary type yields an empty search response. |
5 | Human Resource Management Service | Fixed bug where employee search causes a server error in single-instance clusters. |

Enhanced Service Documents
Services | Docker Artefact ID |
---|---|
Access Control | egovio/egov-accesscontrol:DIGIT-2.9-LTS-237578ce80-10 |
Encryption | egovio/egov-enc-service-db:DIGIT-2.9-LTS-237578ce80-21 |
Filestore | egovio/egov-filestore-db:DIGIT-2.9-LTS-237578ce80-14 |
ID Generation | egovio/egov-idgen-db:DIGIT-2.9-LTS-07f47790b8-8 |
Indexer | egovio/egov-indexer-db:2.9.1-c781a2f714-65 |
Localisation | egovio/egov-localization-db:DIGIT-2.9-LTS-237578ce80-10 |
Master Data Management | egovio/egov-mdms-service:DIGIT-2.9-LTS-07f47790b8-14 |
Mail Notification | egovio/egov-notification-mail:DIGIT-2.9-LTS-07f47790b8-5 |
SMS Notification | egovio/egov-notification-sms:DIGIT-2.9-LTS-07f47790b8-7 |
OTP | egovio/egov-otp-db:DIGIT-2.9-LTS-07f47790b8-6 |
Persister | egovio/egov-persister:DIGIT-2.9-LTS-07f47790b8-8 |
Payment Gateway | egovio/egov-pg-service-db:DIGIT-2.9-LTS-237578ce80-11 |
URL Shortening | egovio/egov-url-shortening-db:DIGIT-2.9-LTS-07f47790b8-12 |
User | egovio/egov-user-db:DIGIT-2.9-LTS-c33cfe45ab-19 |
Workflow | egovio/egov-workflow-v2-db:2.9.1-80b58dc788-15 |
Internal Gateway | egovio/internal-gateway-scg:DIGIT-2.9-LTS-b4fd517ebc-6 |
PDF Service | egovio/pdf-service-db:DIGIT-2.9-LTS-5d71b59949-24 |
User OTP | egovio/user-otp:DIGIT-2.9-LTS-07f47790b8-9 |
xState Chatbot | egovio/xstate-chatbot-db:DIGIT-2.9-LTS-44558a0602-3 |
Gateway | egovio/gateway:gateway-2.9.2-a916a090e6-40 |
Location | egovio/egov-location-db:DIGIT-2.9-LTS-07f47790b8-10 |
Service Request | egovio/service-request-db:DIGIT-2.9-LTS-237578ce80-7 |
Signed Audit | egovio/audit-service-db:DIGIT-2.9-LTS-07f47790b8-12 |
gateway-kubernetes-discovery | egovio/gateway-kubernetes-discovery:DIGIT-2.9-LTS-7f4ff55ce3-6 |
egov-hrms | egovio/egov-hrms-db:DIGIT-2.9-LTS-4553648f56-9 |
MDMS-v2 | egovio/mdms-v2-db:MDMS-v2-2.9LTS-837232ac67-71 |
Boundary (Beta release) | egovio/boundary-service-db:v1.0.0-063968adc7-18 |
This comprehensive documentation provides step-by-step instructions and best practices for smoothly migrating your DIGIT installation from version 2.8 to the v2.9 (LTS) release.
To begin the migration process from DIGIT v2.8 to v2.9 LTS, it's crucial to first upgrade all prerequisite backbone services.
Following this, scale down the Zuul replica count to zero using the command below.
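The exact command isn't reproduced in this extract; a typical form, assuming Zuul runs as a deployment in the egov namespace:

```sh
kubectl scale deployment zuul --replicas=0 -n egov
```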
Next, proceed with deploying the core service images as outlined in the attached release chart.
Note: You can deploy the images using Jenkins or, alternatively, using the Go-based deployer command.
Once all deployed services are confirmed to be up and running smoothly, the migration from v2.8 to v2.9 LTS can be considered complete.
Note: All resources related to Zuul should be deleted, as version 2.9 LTS onwards transitions to the use of a gateway, deprecating Zuul.
Note: If you encounter the flyway migration issue (V20180731215512__alter_eg_role_address_fk.sql or version V20180731215512) in the egov-user service, follow these steps to resolve it:
Connect to the Postgres pod of your server, since a few queries need to be run to resolve it (see the sketch after these steps).
Run this SQL query: DELETE FROM egov_user_schema WHERE version = '20180731215512';
Then run this SQL query: ALTER TABLE eg_userrole DROP CONSTRAINT eg_userrole_userid_fkey;
Restart the egov-user pod after successfully executing these queries.
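A minimal sketch of the fix, assuming the Postgres pod name, database user, and database name are placeholders you substitute:

```sh
# Run both queries inside the Postgres pod (names are placeholders)
kubectl exec -it <postgres-pod> -n egov -- psql -U <db_user> -d <db_name> \
  -c "DELETE FROM egov_user_schema WHERE version = '20180731215512';" \
  -c "ALTER TABLE eg_userrole DROP CONSTRAINT eg_userrole_userid_fkey;"
```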
changelog.md
This document provides a comprehensive log of system upgrades, detailing the progression of various software components from older versions to their latest releases as of this update. This ensures transparency and provides a reference for the evolution of our system's infrastructure.
No unreleased changes.
PostgreSQL: Upgraded from 11.2 to 14.10, enhancing database performance, security features, and compatibility with the latest extensions.
Redis: Upgraded from 3.6 to 7.2.4, bringing improvements in processing speed, security patches, and new features for better data management.
Elasticsearch: Upgraded from 6.6 to 8.11.3, which includes significant advancements in search capabilities, performance optimizations, and security enhancements.
Kibana: Version updated from 6.6 to 8.11.3 to align with the Elasticsearch upgrade, improving data visualization and UI/UX enhancements.
Kafka: Upgraded from 2.4 to 3.6.0, introducing improvements in scalability, reliability, and a set of new features that enhance message queuing capabilities.
Jaeger: Upgraded from 1.18 to 1.53.0, significantly improving distributed tracing capabilities, UI improvements, and performance optimizations.
Prometheus: Upgraded from 2.48.0 to 2.49.0, focusing on enhancements in monitoring capabilities and minor improvements in system performance.
Grafana: Upgraded from 7.0.5 to 10.2.3, bringing a leap in visualization features, plugin ecosystem, and overall performance and usability improvements.
Cert-Manager: Upgraded from 1.7.3 to 1.13.3, enhancing certificate management with new features and security improvements.
Deprecated services:
- Report
- egov-searcher
- Zuul
No fixes in this release.
Upgraded components include security patches and improvements to address known vulnerabilities.
This document contains information about the code changes required in the registries for upgrading Spring Boot and client libraries.
Upgrade the Java version in the module to Java 17 before upgrading the Spring Boot version to 3.2.2. Following is a sample snippet of the Java version upgrade:
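The original snippet isn't reproduced in this extract; a representative pom.xml fragment (property names may vary by module):

```xml
<properties>
    <java.version>17</java.version>
    <maven.compiler.source>17</maven.compiler.source>
    <maven.compiler.target>17</maven.compiler.target>
</properties>
```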
Upgrade spring-boot-starter-parent library to version 3.2.2. The code snippet of the dependency is shown below:
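A sketch of the parent declaration in pom.xml:

```xml
<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>3.2.2</version>
    <relativePath/> <!-- resolve the parent from the repository -->
</parent>
```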
Upgrade the Flyway library to version 9.22.3 for compatibility with Postgres 14. Below is the code snippet:
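For example, assuming the module depends on flyway-core:

```xml
<dependency>
    <groupId>org.flywaydb</groupId>
    <artifactId>flyway-core</artifactId>
    <version>9.22.3</version>
</dependency>
```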
Upgrade the postgresql library to version 42.7.1:
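A sketch of the JDBC driver dependency:

```xml
<dependency>
    <groupId>org.postgresql</groupId>
    <artifactId>postgresql</artifactId>
    <version>42.7.1</version>
</dependency>
```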
The tracer library is upgraded to springboot 3.2.2. The updates are available in the library version 2.9.0-SNAPSHOT. If the module is using the tracer, upgrade the tracer version to 2.9.0-SNAPSHOT as shown below:
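A sketch of the dependency; the groupId shown is the one commonly used by DIGIT libraries and is an assumption here:

```xml
<dependency>
    <groupId>org.egov.services</groupId>
    <artifactId>tracer</artifactId>
    <version>2.9.0-SNAPSHOT</version>
</dependency>
```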
The services-common library has been upgraded and is now pulled in by the tracer. If you upgrade the tracer, you do not have to upgrade services-common explicitly and can remove it from the POM. If your module uses only services-common, you can directly upgrade its version to 2.9.0-SNAPSHOT.
Use the version of JUnit below, which is compatible with Java 17:
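For instance, the JUnit 5 (Jupiter) aggregate artifact works with Java 17; the exact version below is illustrative:

```xml
<dependency>
    <groupId>org.junit.jupiter</groupId>
    <artifactId>junit-jupiter</artifactId>
    <version>5.10.1</version>
    <scope>test</scope>
</dependency>
```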
If you are using the MDMS client library, update the dependency version to 2.9.0-SNAPSHOT as shown below:
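A sketch of the dependency; the groupId is an assumption and should be checked against your existing POM entry:

```xml
<dependency>
    <groupId>org.egov</groupId>
    <artifactId>mdms-client</artifactId>
    <version>2.9.0-SNAPSHOT</version>
</dependency>
```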
Update the Lombok version in the pom.xml to 1.18.22 (Lombok versions follow the 1.18.x line):
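A sketch of the dependency declaration:

```xml
<dependency>
    <groupId>org.projectlombok</groupId>
    <artifactId>lombok</artifactId>
    <version>1.18.22</version>
    <scope>provided</scope>
</dependency>
```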
If you are using the net.minidev library, upgrade the version to 2.5.0:
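net.minidev's primary artifact is json-smart; assuming that is the one in use:

```xml
<dependency>
    <groupId>net.minidev</groupId>
    <artifactId>json-smart</artifactId>
    <version>2.5.0</version>
</dependency>
```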
To simplify dependency management and ensure version compatibility for the Spring Kafka and Spring Redis dependencies, use spring-boot-starter-parent as your project's parent in the pom.xml. The spring-kafka or spring-redis dependency can then be included without specifying a version; Spring Boot will automatically provide a compatible version. Following are code snippets for both dependencies:
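A sketch of both declarations; note the Redis starter name shown is the Spring Boot 3 artifact:

```xml
<!-- Versions are managed by spring-boot-starter-parent -->
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
```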
Note: If the tracer library is implemented, there is no need to explicitly import spring-kafka.
Javax is deprecated and has transitioned to Jakarta. Remove any javax dependencies and update all javax imports to their jakarta counterparts. For example, change imports like PostConstruct and Valid to their Jakarta equivalents in all occurrences.
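Typical import changes look like this:

```java
// Before (javax namespaces, no longer supported in Spring Boot 3)
import javax.annotation.PostConstruct;
import javax.validation.Valid;

// After (jakarta namespaces)
import jakarta.annotation.PostConstruct;
import jakarta.validation.Valid;
```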
Remove the annotation @javax.annotation.Generated which is now deprecated.
Update the Dockerfile for flyway migration with the below content:
```dockerfile
FROM egovio/flyway:10.7.1
COPY ./migration/main /flyway/sql
COPY migrate.sh /usr/bin/migrate.sh
RUN chmod +x /usr/bin/migrate.sh
ENTRYPOINT ["/usr/bin/migrate.sh"]
```
Update the migrate.sh script:
```sh
#!/bin/sh
flyway -url=$DB_URL -table=$SCHEMA_TABLE -user=$FLYWAY_USER -password=$FLYWAY_PASSWORD -locations=$FLYWAY_LOCATIONS -baselineOnMigrate=true -outOfOrder=true migrate
```
If you are using spring-redis, add the following configuration file:
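The original file isn't reproduced in this extract; a minimal sketch of such a configuration class, with bean names and connection details as assumptions:

```java
// RedisConfig.java - illustrative sketch; host/port would normally come
// from spring.data.redis.* properties rather than being hardcoded
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.serializer.StringRedisSerializer;

@Configuration
public class RedisConfig {

    @Bean
    public RedisConnectionFactory redisConnectionFactory() {
        // Hardcoded for illustration only
        return new LettuceConnectionFactory("localhost", 6379);
    }

    @Bean
    public RedisTemplate<String, Object> redisTemplate(RedisConnectionFactory factory) {
        RedisTemplate<String, Object> template = new RedisTemplate<>();
        template.setConnectionFactory(factory);
        template.setKeySerializer(new StringRedisSerializer());
        return template;
    }
}
```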
Remove @SafeHtml annotation from the fields in POJO models as it is deprecated.
Update the JUnit dependencies in the test cases as shown below:
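The exact changes aren't reproduced here; a typical JUnit 4 to JUnit 5 migration in test classes looks like this (illustrative):

```java
// Before (JUnit 4)
import org.junit.Test;
import org.junit.Before;

// After (JUnit 5 / Jupiter)
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.BeforeEach;
```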
A comprehensive guide on running automated test scripts for various core services.
Before automating the DIGIT-LTS core services, ensure the Postman tool is installed.
Create an environment in Postman with a global variable BaseUrl and set its value per your environment configuration. For example, we have set https://digit-lts.digit.org as the base URL.
Import all the services you want to automate in the same environment.
Note: It is mandatory to run the automated script for the User service first, as the User collection includes requests to create users, a crucial step since access to all resources requires a user.
Follow the steps below to run the egov-User service automation scripts.
1. Import user collection: Copy the User collection link from the provided document link: Collection Document and import the collection in Postman.
2. Port forward to the DIGIT-LTS environment: Port-forward to the DIGIT-LTS environment to create the first user, replacing [userPod] with the relevant user pod name, as in the sketch below.
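The command itself isn't shown in this extract; a typical invocation, with local/remote ports and namespace as assumptions:

```sh
kubectl port-forward [userPod] 8080:8080 -n egov
```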
3. Run the user collection: Click on the Download CSV link to download the CSV file. Make sure to download the file in CSV format before proceeding with the User collection.
In the CSV file, each cell in the first row (UserFIRST, UserName2, and UserName3) represents a unique user, and each cell in the second row represents the name given to that user.
For example, the first cell in the first row, UserFIRST, represents the first user, and USERDemoM1 represents the name given to user UserFIRST.
Open the User collection in Postman and click on the Run button.
Select CSV file - Select the downloaded CSV file by clicking on the Select File button.
Click on the Run User button to execute the collection.
Due to the uniqueness constraint on usernames, you cannot create duplicate users with the same username.
IMPORTANT: To avoid errors when executing the User collection multiple times in the same environment, remember to modify the username in the CSV file for each execution.
The provided steps automate the creation of users in the DIGIT-LTS environment, which is essential for accessing all resources.
Review and modify the CSV file as needed to include accurate user data.
For further assistance or troubleshooting, refer to the Postman documentation or contact the relevant support channels.
By following these steps, you can effectively automate the core services of DIGIT-LTS, starting with the User service, using Postman.
Import Localization Collection into Postman: Click here and copy the localization collection link (available in column B of the sheet). Open the Postman and import the collection.
Prepare CSV File: Click here to download the CSV file. Make sure the CSV file is in the correct format.
Run localization Collection in Postman: Open the localization collection in Postman by clicking on localization collection. Click on the Run button to execute the collection.
Select CSV File: When prompted, click on the Select File button to select the downloaded CSV file.
Run Collection: After selecting the CSV file, click on the Run Collection button to execute the collection.
In the CSV file, code1 represents the specific code for creating the message in locale1.
The message "Punjab water park" is created in the locale/region/mohalla. A unique code "Alpha pgr 1" is associated with the created message.
Note: Ensure that if running the Localization collection multiple times within the same environment, you change the locale and code within the CSV file each time.
The locale column in the CSV file represents the place/area to create the message (mandatory).
The code column represents the unique code associated with the message.
The above steps automate the localization services using Postman.
1. Import Egov OTP Collection into Postman:
Click here to open the document and copy the Egov OTP collection link
Open Postman and import the collection.
2. To Run Egov OTP Collection in Postman:
Click on the Egov OTP collection in Postman to open the collection.
Click on the Run button to execute the collection.
Click on the Run Collection button to execute the collection.
1. Import MDMS Collection into Postman:
Click here to open the document and copy the MDMS collection link.
Open Postman and import the collection.
2. To run the MDMS Collection in Postman:
Open the MDMS collection in Postman by clicking on MDMS collection.
Click on the Run button to execute the collection.
Click on the Run Collection button to execute the collection.
1. Import URL Shortening Collection into Postman:
Click here to open the document and copy the URL shortening collection link.
Open Postman and import the collection.
2. To run the URL - shortening collection in Postman:
Open the URL shortening collection in Postman by clicking on the URL shortening collection.
Click on the Run button to execute the collection.
Click on the Run Collection button to execute the collection.
1. Import Location Collection into Postman:
Click here to open the document and copy the location collection link.
Open Postman and import the collection.
2. To run Location Collection in Postman:
Open the Location collection in Postman by clicking on Location collection.
Click on the Run button to execute the collection.
Click on the Run Collection button to execute the collection.
1. Import Access control Collection into Postman:
Click here to open the document and copy the Access Control collection link.
Open Postman and import the collection.
2. To run the Access Control collection in Postman:
Open the Access Control collection in Postman by clicking on Access Control collection.
Click on the Run button to execute the collection.
Click on the Run Collection button to execute the collection.
1. Import Filestore Collection into Postman:
Click here to open the document and copy the Filestore collection link.
Open Postman and import the collection.
2. To run Filestore Collection in Postman:
Open the Filestore collection in Postman by clicking on the Filestore collection.
Click on the Run button to execute the collection.
Click on the Run Collection button to execute the collection.
1. Import the ID gen Collection into Postman:
Click here to open the document and copy the ID-gen collection link.
Open Postman and import the collection.
2. To Run Id gen Collection in Postman:
Before executing the Id gen collection, download the CSV file from https://docs.google.com/spreadsheets/d/1bB3R5faJFfRC0cGZbtPBhv1bmKm_MsH1upnuC3s2hso/edit?usp=sharing in CSV format.
Open the Id gen collection in Postman by clicking on Id gen collection.
Click on the "Run" button to execute the collection.
Click on the "Run Collection" button to execute the collection.
1. Importing WorkFlow Collection:
Click here to open the document, copy the Workflow collection link and import the Workflow collection into Postman.
2. Running the WorkFlow Collection:
Click here to download the CSV file before executing the WorkFlow collection.
Open the WorkFlow collection in Postman and click on the Run button.
Select the downloaded CSV file by clicking on the Select File button.
Click on the Run Workflow button to execute the collection.
Additionally, you must update the columns BusinessIdFirst and BusinessIdTwo in your application for a successful transition.
IMPORTANT: If you execute the WorkFlow collection multiple times in the same environment, it's essential to rename the services listed under the columns BUSINESSSERVICE, BUSINESSSERVICETHIRD, and BUSINESSSERVICEFOURTH in the CSV file for each execution to avoid conflicts.
Additional Notes: You can also modify other columns, but this is not mandatory.
Note: If you get an error in the transition folder of the WorkFlow collection, change the CSV file data and run the individual folder (don't run the whole WorkFlow collection) to avoid the error.
1. Importing Encryption Collection:
Click here to open the document, copy the Encryption collection link and import the collection into Postman.
2. Port Forwarding to Digit-LTS Environment:
Port-forward to the DIGIT-LTS environment to decrypt the encrypted data. Use the following command, replacing [Encryption] with the relevant encryption pod name:

```sh
kubectl port-forward [Encryption] 8081:8080 -n egov
```
3. Running the Encryption Collection:
Click here to download the CSV file before executing the Encryption collection.
In the CSV file, the cell in the first row, "UserForEncy", represents a unique user, and the cell in the second row, "EncUser1", represents the name given to that user.
Open the User collection in Postman and click on the Run button.
Select the downloaded CSV file by clicking on the Select File button.
Click on the Run EncryptionApi button to execute the collection.
IMPORTANT: When executing the Encryption collection more than once in the same environment, it is essential to modify the username in the CSV file for every run. This ensures the newly created user has the necessary permissions to encrypt and decrypt data.
This document provides step-by-step instructions on how to take backups of PostgreSQL databases hosted on AWS.
Access to an AWS account with permissions to manage Amazon RDS instances.
PostgreSQL database hosted on Amazon RDS.
Step 1: Navigate to the Amazon RDS Console:
Log in to the AWS Management Console.
Go to the Amazon RDS service.
Step 2: Select the PostgreSQL Instance:
From the list of DB instances, select the PostgreSQL instance for which you want to take the backup.
Step 3: Enable Automated Backups (Optional):
If automated backups are not already enabled, navigate to the "Backup" tab of the RDS instance.
Click on "Modify" and enable automated backups.
Configure the backup retention period according to your requirements.
Step 4: Manually Trigger a Snapshot:
To create a manual snapshot, select the RDS instance you want to back up. Click on the Actions button in the right upper corner and select Take a snapshot.
This will redirect you to a page like the one below.
Provide a meaningful name for the snapshot and click “Create snapshot”.
This will create a manual snapshot of the DB instance that you created.
Step 5: Create a Manual Backup Using pg_dump:
Connect to the PostgreSQL database using a PostgreSQL client tool or command-line interface.
Use the pg_dump command to export the database to a file, for example:
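A typical invocation producing a custom-format dump (flags are illustrative):

```sh
pg_dump -h <hostname> -U <username> -Fc <database_name> -f <backup_file_name>
```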
Replace <hostname>, <username>, <database_name>, and <backup_file_name> with the appropriate values.
Step 6: Copy Backup Files to Amazon S3 (Optional):
If desired, copy the backup files to Amazon S3 for long-term storage and redundancy.
Use the AWS CLI or SDKs to upload the files to an S3 bucket.
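For example, using the AWS CLI (bucket name and prefix are placeholders):

```sh
aws s3 cp <backup_file_name> s3://<your-bucket>/postgres-backups/
```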
This document has provided instructions on how to take backups of PostgreSQL databases hosted on AWS. Regular backups are essential for data protection and disaster recovery purposes.
Sl.No. | Checklist | Yes/No/Partially | Reference Link | Owner | Reviewer | Remarks |
---|---|---|---|---|---|---|
1 | Upgrade is completed for all the core services that are part of the release. | Yes | NA | Shashwat Mishra | Code is frozen by 14 March 2024 |
2 | Test cases are documented by the QA team and test results are updated in the test cases sheet. | Yes | Mustakim |
3 | The incremental demo of the features showcased during tech council meetings and feedback incorporated. | Yes | NA | Shashwat Mishra |
4 | QA signoff is completed by the QA team and communicated to the platform team. | Yes | NA | QA signoff was completed. Sign-off dates 6th Feb 2023 |
5 | API Technical documents are updated for the release along with the configuration documents. | Yes | NA |
6 | Promotion to new environment testing from the QA team is completed. | Yes | NA | Shraddha Solkar | Aniket Talele Shashwat Mishra |
7 | API Automation scripts are updated for new APIs or changes to any existing APIs for the release. API automation regression is completed on UAT, the automation test results are analyzed and necessary actions are taken to fix the failure cases. Publish the list of failure use cases with a reason for failure and the resolution taken to fix these failures for the release. | No | NA | Shraddha Solkar | Not picked up in this release due to lack of resources. We do not have QA resource who can write automation scripts. |
8 | The API backward compatibility testing is completed. | Yes | Shraddha Solkar | Aniket Talele Shashwat Mishra | Core modules were tested against urban 2.8 modules and the bugs which were found have been addressed. |
9 | The communication is shared with the platform team for regression by the QA team. | Yes | NA | Shraddha Solkar | Aniket Talele Shashwat Mishra | UAT sign-off was completed on 24th March 2023 |
10 | The GIT tags and releases are created for the code changes for the release. | Yes | Shashwat Mishra | Aniket Talele |
11 | Verify whether the Release notes are updated | Yes | Shashwat Mishra Anjoo Narayan | Aniket Talele |
12 | Verify whether all MDMS, Configs, InfraOps configs updated. | Yes | NA | Shraddha Solkar | Shashwat Mishra |
13 | Verify whether all docs will be Published to by the Technical Writer as part of the release. | Yes | NA | Shashwat Mishra | Anjoo Narayan |
14 | Verify whether all test cases are up to date and updated along with necessary permissions to view the test cases sheet. The test cases sheet is verified by the Test Lead. | Yes | Shraddha Solkar | Aniket Talele Shashwat Mishra |
15 | Verify whether all the localisation data was updated in Release Kits. | Yes | Shraddha Solkar | Shashwat Mishra |
16 | Verify whether the platform release notes and user guides are updated and published | Yes | Platform team | Aniket Talele Shashwat Mishra | Release notes and user guides are published in gitbook. |
17 | The Demo of technical enhancements is done by the platform team as part of the tech council meetings. | Yes | NA | Platform team | Ghanshyam Rawat Aniket Talele Shashwat Mishra |
18 | Architect SignOff and Technical Quality Report | Yes | NA | Ghanshyam Rawat Aniket Talele | Sign off is given. |
19 | The release communication along with all the release artefacts are shared by the Platform team. | Inprogress | NA | Shashwat Mishra | Aniket Talele |
This document provides step-by-step instructions on how to update the version of Amazon RDS (Relational Database Service) using both the AWS Management Console and Terraform.
Access to an AWS account with permissions to manage Amazon RDS instances.
Basic knowledge of AWS Management Console and Terraform.
Create an RDS Snapshot Backup for Data Protection.
Step 1: Navigate to the Amazon RDS Console:
Log in to the AWS Management Console.
Go to the Amazon RDS service.
Step 2: Select the RDS Instance:
From the list of DB instances, select the RDS instance for which you want to update the version.
Step 3: Initiate the Upgrade:
From the given databases, select the database you wish to upgrade the Engine Version of and click on the “Modify” button.
In the Modify DB Instance wizard, locate the "DB Engine Version" section. Select the desired version from the dropdown list. Review the other configuration settings and click on the "Continue" button.
Select “Apply Immediately” and Review the summary of changes and click on the "Modify DB Instance" button to initiate the upgrade.
Step 4: Monitor the Upgrade Progress:
Once the upgrade is initiated, monitor the upgrade progress from the RDS dashboard.
The status of the instance will change to "modifying" during the upgrade process.
Once the upgrade is completed, the status will change back to "available."
Step 1: Define the Terraform Configuration:
Create or update the Terraform configuration file (e.g., variable.tf) with the necessary settings to manage the RDS instance.
Use the aws_db_instance resource to define the RDS instance.
Specify the desired version in the engine_version attribute, as in the sketch below.
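A minimal sketch, assuming a PostgreSQL instance; the resource name and values are illustrative:

```hcl
resource "aws_db_instance" "digit_db" {
  identifier        = "digit-postgres"  # existing instance identifier
  engine            = "postgres"
  engine_version    = "14.10"           # the target version
  instance_class    = "db.t3.medium"
  apply_immediately = true              # mirrors "Apply Immediately" in the console
  # ...keep all other existing settings unchanged
}
```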
Step 2: Apply the Terraform Configuration:
Run terraform plan to preview the changes that will be applied.
Run terraform apply to apply the changes and update the RDS instance with the new version.
Step 3: Monitor Terraform Execution:
Monitor the Terraform execution for any errors or warnings.
Once the execution is completed, verify that the RDS instance has been updated to the new version.
This document has provided instructions on how to update the version of Amazon RDS using both the AWS Management Console and Terraform. Regularly updating the RDS version ensures that your database instance is up-to-date with the latest features and security patches.
Code changes required once Postgres is upgraded
UserFIRST | UserName2 | UserName3 |
---|---|---|
USERDemoM1 | EGOvDemoM2 | EGOvDemoM3 |

code1 | locale1 |
---|---|
Alpha pgr 1 | Punjab water park |

UserForEncy |
---|
EncUser1 |