
Availability

Infrastructure

  • How to check if Infra is working as expected?

  • How to monitor and set up alerts? Other debugging tools?

  • Solutions to common problems and next steps

DSS dashboard

Database

  • DB monitoring, alerting and debugging guidelines

Core services

  • Monitoring how-to

  • Debugging

  • Fixing/escalating

Backbone services

Kafka

DIGIT apps

Monitor, debug, fix

ElasticSearch Direct Upgrade

Overview

Unlike rolling upgrades, direct upgrades involve migrating from an older version to a newer one in a single coordinated operation.

This comprehensive guide outlines the step-by-step process for deploying an Elasticsearch 8.11.3 cluster with enhanced security features. The document not only covers the initial deployment of the cluster but also includes instructions for seamlessly migrating data from an existing Elasticsearch cluster to the new one, allowing for a direct upgrade.

Steps

  1. Clone the DIGIT-DevOps repo and check out the branch digit-lts-go.

  2. If you want to make changes to the Elasticsearch cluster (namespaces etc.), you will find the Helm charts for Elasticsearch at the paths provided below. In this chart, security is enabled for Elasticsearch. To disable security, set the environment variable xpack.security.enabled to false in the Helm chart statefulset template.

  3. Elasticsearch secrets are kept in the cluster-configs chart because the indexer, inbox services etc. depend on them. The template is below.

  4. In the cluster-configs values.yaml, add the namespaces in which you want to deploy the Elasticsearch secrets.

  5. Add the Elasticsearch password in the env-secrets.yaml file. If you do not, a random password is created automatically and regenerated every time you deploy Elasticsearch.

  6. Deploy the Elasticsearch cluster using the commands below.

  7. Check the pod status using the command below.

  8. Once all pods are running, execute the commands below inside the playground pod to dump data from the old Elasticsearch cluster and restore it to the new one.

  9. Using the script below, you can take the data dump from the old cluster and restore it into the new Elasticsearch cluster in a single command.

  10. After restoring the data successfully in the new Elasticsearch cluster, check the cluster health and document count using the command below.

  11. The deployment and data restore are now complete. Change the es_url and indexer_url in egov-config under cluster-configs of the environment file. The same can be updated directly using the command below.

  12. Restart all pods that depend on Elasticsearch (with cluster-configs) so that they pick up the new elasticsearch_url.

Elasticsearch

git clone https://github.com/egovernments/DIGIT-DevOps.git
git checkout digit-lts-go
code .
cd deploy-as-code/helm/charts/backbone-services/elasticsearch-master
cd deploy-as-code/helm/charts/backbone-services/elasticsearch-data
cd deploy-as-code/helm/charts/cluster-configs/templates/secrets
cat elasticsearch-master-creds-secret.yaml

# secret template

{{- with index .Values "cluster-configs" "secrets" "elasticsearch-master-creds" }}
{{- $passwordValue := (randAlphaNum 24) | b64enc | quote }}
{{- range $ns := .namespace }}
---
apiVersion: v1
kind: Secret
metadata:
  name: {{ index $.Values "cluster-configs" "secrets" "elasticsearch-master-creds" "name" }}
  namespace: {{ $ns }}
  labels:
    app: elasticsearch-master
type: Opaque
data:
  username: {{ "elastic" | b64enc }}
  {{- if index $.Values "cluster-configs" "secrets" "elasticsearch-master-creds" "password" }}
  password: {{ index $.Values "cluster-configs" "secrets" "elasticsearch-master-creds" "password" | b64enc | quote }}
  {{- else }}
  password: {{ $passwordValue }}
  {{- end }}
{{- end }}
---
{{- end }}
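Matching the secret template above, the cluster-configs values.yaml entry would look roughly like this (the namespace names here are examples, not values from this document; leave password empty to let the template generate a random one):

```yaml
cluster-configs:
  secrets:
    elasticsearch-master-creds:
      name: elasticsearch-master-creds
      namespace:
        - es-cluster
        - egov
      password: ""
```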

cd deploy-as-code/deployer
export KUBECONFIG=<path_to_kubeconfig>
kubectl config current-context
go run main.go deploy -c -e <env_file_name> elasticsearch-master
go run main.go deploy -e <env_file_name> elasticsearch-data
kubectl get pods -n <elasticsearch_namespace>
#!/bin/bash
# Elasticsearch cluster information
ELASTICSEARCH_OLD_URL="<old_elasticsearch_url>"    # eg:- elasticsearch-data-v1.es-cluster:9200
ELASTICSEARCH_NEW_URL="<new_elasticsearch_url>"    # eg:- elasticsearch-data.es-cluster:9200

# Authentication credentials
USERNAME="elastic"
PASSWORD="<es_pwd>"

DUMP_ENABLE=true
RESTORE_ENABLE=true

# Disable SSL/TLS validation
export NODE_TLS_REJECT_UNAUTHORIZED=0

# Provide the indices to take dump
EXCLUDE_INDEX_PATTERN="jaeger|monitor|kibana|fluentbit"
# Provide backup directory
BACKUP_DIR="backup"
# Provide indices output file
INDICES_OUTPUT="elasticsearch-indexes.txt"

mapfile -t INDICES < <(curl -sk "http://${ELASTICSEARCH_OLD_URL}/_cat/indices" | grep -v -E "${EXCLUDE_INDEX_PATTERN}" | awk '{print $3}')

printf "%s\n" "${INDICES[@]}" > "$INDICES_OUTPUT"

if [ "$DUMP_ENABLE" = true ]; then
    # Create backup directory if it doesn't exist
    mkdir -p "$BACKUP_DIR"

    # Loop through each index and perform export
    for INDEX in "${INDICES[@]}"; do
        OUTPUT_FILE="${BACKUP_DIR}/${INDEX}_mapping_backup.json"

        # Build the elasticdump command
        ELASTICDUMP_CMD="elasticdump \
            --input=http://${ELASTICSEARCH_OLD_URL}/${INDEX} \
            --output=${OUTPUT_FILE} \
            --type=mapping"

        # Execute the elasticdump command
        $ELASTICDUMP_CMD

        # Check if the elasticdump command was successful
        if [ $? -eq 0 ]; then
            echo "Backup of index ${INDEX} mapping completed successfully."
        else
            echo "Error backing up index ${INDEX}."
        fi
    done

    for INDEX in "${INDICES[@]}"; do
        OUTPUT_FILE="${BACKUP_DIR}/${INDEX}_data_backup.json"

        # Build the elasticdump command
        ELASTICDUMP_CMD="elasticdump \
            --input=http://${ELASTICSEARCH_OLD_URL}/${INDEX} \
            --output=${OUTPUT_FILE} \
            --type=data \
            --limit=10000"

        # Execute the elasticdump command
        $ELASTICDUMP_CMD

        # Check if the elasticdump command was successful
        if [ $? -eq 0 ]; then
            echo "Backup of index ${INDEX} completed successfully."
        else
            echo "Error backing up index ${INDEX}."
        fi
    done
fi

if [ "$RESTORE_ENABLE" = true ]; then
    for INDEX in "${INDICES[@]}"; do
        INPUT_FILE="${BACKUP_DIR}/${INDEX}_mapping_backup.json"

        # Process the mapping file to remove unsupported parameters
        PROCESSED_FILE="${BACKUP_DIR}/${INDEX}_mapping_processed.json"

        jq 'del(.mappings._default_, .mappings._meta, .mappings.dynamic_templates, .mappings.dynamic, .mappings.general) | .mappings = .mappings["_doc"]' "${INPUT_FILE}" > "${PROCESSED_FILE}"

        # Print the contents of the processed file for debugging

        echo "Contents of ${PROCESSED_FILE}:"

        cat "${PROCESSED_FILE}"

        # Build the elasticdump command
        ELASTICDUMP_CMD="elasticdump \
            --input=${PROCESSED_FILE} \
            --output=https://${USERNAME}:${PASSWORD}@${ELASTICSEARCH_NEW_URL}/${INDEX} \
            --type=mapping"

        # Execute the elasticdump command
        $ELASTICDUMP_CMD

        # Check if the elasticdump command was successful
        if [ $? -eq 0 ]; then
            echo "Restoring of index ${INDEX} mapping completed successfully."
        else
            echo "Error Restoring index ${INDEX}."
        fi
    done

    for INDEX in "${INDICES[@]}"; do
        OUTPUT_FILE="${BACKUP_DIR}/${INDEX}_data_backup.json"

        # Build the elasticdump command
        ELASTICDUMP_CMD="elasticdump \
            --input=${OUTPUT_FILE} \
            --output=https://${USERNAME}:${PASSWORD}@${ELASTICSEARCH_NEW_URL}/${INDEX} \
            --type=data \
            --limit=10000"

        # Execute the elasticdump command
        $ELASTICDUMP_CMD

        # Check if the elasticdump command was successful
        if [ $? -eq 0 ]; then
            echo "Restoring of index ${INDEX} data completed successfully."
        else
            echo "Error Restoring index ${INDEX}."
        fi
    done
fi
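The index-discovery step in the script above is just a grep/awk pipeline over `_cat/indices` output; it can be tried standalone with made-up sample lines (no cluster required):

```shell
# Filter index names the same way the script does: drop lines matching
# the exclude pattern, keep column 3 (the index name).
EXCLUDE_INDEX_PATTERN="jaeger|monitor|kibana|fluentbit"

# Made-up sample mimicking `_cat/indices` output
SAMPLE='green open property-services uuid1 1 1 100 0 1mb 1mb
green open jaeger-span-2024 uuid2 1 1 500 0 9mb 9mb
green open .kibana_1 uuid3 1 1 10 0 1kb 1kb
green open water-services uuid4 1 1 200 0 2mb 2mb'

FILTERED=$(echo "$SAMPLE" | grep -v -E "${EXCLUDE_INDEX_PATTERN}" | awk '{print $3}')
echo "$FILTERED"
```

Here the jaeger and kibana lines are dropped, leaving only property-services and water-services.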
kubectl get pods -n playground
kubectl cp <path_to_script_in_your_machine>/es-dump.sh playground/<playground_name>:<path_in_playground_pod>/es-dump.sh

# Execute into the playground pod shell and run the below command
kubectl exec -it <playground_pod_name> -n playground -- bash

# Run the script which takes dump of your elasticsearch data using below command
cd <path_to_script_inside_playground_pod>
chmod +x es-dump.sh
./es-dump.sh
# Enter into elasticsearch pod
kubectl exec -it <elasticsearch_data_pod_name> -n <elasticsearch_namespace> -- bash

# To check cluster health 
curl -k -X GET "https://elastic:<password>@<new_elasticsearch_url>/_cat/health?v=true&pretty"

# To check documents count and indices status
curl -k -X GET "https://elastic:<password>@<new_elasticsearch_url>/_cat/indices?v=true&pretty"
kubectl edit configmap egov-config --namespace egov
go run main.go deploy -c -e <env_file.yaml> <service_image>
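The `_cat/indices?v` output from the document-count check above can also be totalled with a one-line awk filter. A self-contained sketch over made-up sample output (no cluster required):

```shell
# Sum the docs.count column (field 7) of `_cat/indices?v` output.
# The sample below mimics the column layout; the numbers are made up.
CAT_OUTPUT='health status index             uuid  pri rep docs.count docs.deleted store.size pri.store.size
green  open   property-services uuid1 1   1   1200       0            3mb        1.5mb
green  open   water-services    uuid2 1   1   800        0            2mb        1mb'

TOTAL=$(echo "$CAT_OUTPUT" | awk 'NR>1 {sum += $7} END {print "total docs:", sum}')
echo "$TOTAL"
```

Comparing this total before and after the migration is a quick sanity check that the restore is complete.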

Kafka Connect

Upgrading the Kafka Connect Docker image to add an additional connector

Overview

This page provides the steps to follow for upgrading Kafka Connect.

Steps

  • The base image (confluentinc/cp-kafka-connect) includes the Confluent Platform and Kafka Connect pre-installed, offering a robust foundation for building, deploying, and managing connectors in a distributed environment.

  • To extend the functionality of the base image, add connectors such as the elasticsearch-sink-connector and build a new Docker image.

  • Download the elasticsearch-sink-connector jar files to your local machine using the link.

  • Run the below command to build the Docker image.

  • Run the below command to rename the Docker image.

  • Push the image to Docker Hub using the below command.

  • Replace the image tag in the kafka-connect Helm chart values.yaml and redeploy kafka-connect.

Create a Dockerfile based on the below sample code.
docker build -t cp-kafka-connect-image:<version_tag> .
FROM confluentinc/cp-kafka-connect:latest
RUN mkdir /usr/share/java/kafka-connect-elasticsearch
COPY confluentinc-kafka-connect-elasticsearch-<version>/lib  /usr/share/java/kafka-connect-elasticsearch
COPY confluentinc-kafka-connect-elasticsearch-<version>/etc  /etc/kafka-connect-elasticsearch
docker tag cp-kafka-connect-image:<version_tag> egovio/cp-kafka-connect:<version_tag>
docker push egovio/cp-kafka-connect:<version_tag>
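After redeploying, the new connector still has to be registered through the Kafka Connect REST API. A hypothetical elasticsearch-sink configuration (the connector name, topic, and URLs below are placeholders, not values from this document):

```json
{
  "name": "elasticsearch-sink",
  "config": {
    "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
    "connection.url": "http://<elasticsearch_url>:9200",
    "topics": "<topic_name>",
    "key.ignore": "true",
    "schema.ignore": "true"
  }
}
```

This would typically be submitted with something like `curl -X POST -H 'Content-Type: application/json' --data @connector.json http://<kafka_connect_url>:8083/connectors` (8083 is the default Connect REST port).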

Elastic Search Rolling Upgrade

Overview

This page provides comprehensive documentation and instructions for implementing a rolling upgrade strategy for your Elasticsearch cluster.

Steps

Note: During the rolling upgrade, some downtime is anticipated. Additionally, make sure to take an elasticdump of the Elasticsearch data in the playground pod using the script provided below.

  • Copy the below script and save it as es-dump.sh. Replace the elasticsearch URL and the indices names in the script.

  • Run the below commands in the terminal.

  • Now, run the below command inside the playground pod.

Rolling upgrade from v6.6.2 to v7.17.15

Steps

  1. List the elasticsearch pods and enter into any of the elasticsearch pod shells.

  2. Disable shard allocation: You can avoid racing the clock by disabling the allocation of replicas before shutting down data nodes. Stop non-essential indexing and perform a synced flush: While you can continue indexing during the upgrade, shard recovery is much faster if you temporarily stop non-essential indexing and perform a synced flush. Run the below curl commands inside the elasticsearch data pod.

  3. Scale down the replica count of elasticsearch master and data from 3 to 0.

  4. Edit the Statefulset of elasticsearch master by replacing the Docker image, removing deprecated environment variables, and adding compatible environment variables. Replace the elasticsearch image tag 6.6.2 with 7.17.15. The below code lists the deprecated environment variables and their compatible replacements.

  5. Edit the elasticsearch-master values.yaml file.

  6. Edit the Statefulset of elasticsearch data by replacing the Docker image, removing deprecated environment variables, and adding compatible environment variables. Replace the elasticsearch image tag 6.6.2 with 7.17.15.

  7. Edit the elasticsearch-data values.yaml file.

  8. After making the changes, scale up the statefulsets of elasticsearch data and master.

  9. After all pods are in the running state, re-enable shard allocation and check cluster health.

You have successfully upgraded the elasticsearch cluster from v6.6.2 to v7.17.15 :)

Reindexing the indices:

After successfully upgrading elasticsearch, reindex the indices that were created in v6.6.2 or earlier using the script below.

Copy the below script and save it as es-reindex.sh. Replace the elasticsearch URL in the script.

Run the below commands in the terminal.

Now, run the below command inside the playground pod.

NOTE: Make sure to delete the jaeger indices, as their mapping is not supported in v8.11.3, and to reindex the indices created before v7.17.15. If indices created in v6.6.2 or earlier are still present, the upgrade from v7.17.15 to v8.11.3 may fail.

Rolling upgrade from v7.17.15 to v8.11.3 & security is disabled

Steps

  1. Scale down the replica count of elasticsearch master and data from 3 to 0.

  2. Edit the Statefulset of elasticsearch master by replacing the Docker image and adding compatible environment variables. Replace the elasticsearch image tag 7.17.15 with 8.11.3. The below code provides the compatible environment variables; if you are following a rolling upgrade, there are no deprecated environment variables from v7.17.15 to v8.11.3.

  3. Edit the Statefulset of elasticsearch data by replacing the Docker image and adding compatible environment variables. Replace the elasticsearch image tag 7.17.15 with 8.11.3.

  4. After making the changes, scale up the statefulsets of elasticsearch data and master.

  5. After all pods are in the running state, re-enable shard allocation and check cluster health.

You have successfully upgraded the elasticsearch cluster from v7.17.15 to v8.11.3 👏

#!/bin/bash
#es-dump.sh

# Replace the Elasticsearch cluster URL in ELASTICSEARCH_URL
ELASTICSEARCH_URL="http://<elasticsearch URL>:9200"
# Provide the indices to take dump
EXCLUDE_INDEX_PATTERN="jaeger|monitor|kibana|fluentbit"
# Provide backup directory
BACKUP_DIR="backup"
# Provide indices output file
INDICES_OUTPUT="elasticsearch-indexes.txt"

mapfile -t INDICES < <(curl -s "${ELASTICSEARCH_URL}/_cat/indices" | grep -v -E "(${EXCLUDE_INDEX_PATTERN})" | awk '{print $3}')

printf "%s\n" "${INDICES[@]}" > "$INDICES_OUTPUT"

# Create backup directory if it doesn't exist
mkdir -p "$BACKUP_DIR"

# Loop through each index and perform export
for INDEX in "${INDICES[@]}"; do
    OUTPUT_FILE="${BACKUP_DIR}/${INDEX}_mapping_backup.json"

    # Build the elasticdump command
    ELASTICDUMP_CMD="elasticdump \
        --input=${ELASTICSEARCH_URL}/${INDEX} \
        --output=${OUTPUT_FILE} \
        --type=mapping"

    # Execute the elasticdump command
    $ELASTICDUMP_CMD

    # Check if the elasticdump command was successful
    if [ $? -eq 0 ]; then
        echo "Backup of index ${INDEX} mapping completed successfully."
    else
        echo "Error backing up index ${INDEX}."
    fi
done

for INDEX in "${INDICES[@]}"; do
    OUTPUT_FILE="${BACKUP_DIR}/${INDEX}_data_backup.json"

    # Build the elasticdump command
    ELASTICDUMP_CMD="elasticdump \
        --input=${ELASTICSEARCH_URL}/${INDEX} \
        --output=${OUTPUT_FILE} \
        --type=data \
        --timeout=300000 \
        --limit=10000 \
        --skip-existing"

    # Execute the elasticdump command
    $ELASTICDUMP_CMD

    # Check if the elasticdump command was successful
    if [ $? -eq 0 ]; then
        echo "Backup of index ${INDEX} completed successfully."
    else
        echo "Error backing up index ${INDEX}."
    fi
done
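As a side note on the `mapfile` line above: it fills a bash array with one element per input line, `-t` stripping the trailing newlines. A self-contained sketch with made-up index names:

```shell
# mapfile -t: one array element per input line, newline stripped.
mapfile -t INDICES < <(printf '%s\n' property-services jaeger-span-2024 water-services \
    | grep -v -E 'jaeger|monitor|kibana|fluentbit')

echo "count=${#INDICES[@]}"     # number of surviving index names
echo "second=${INDICES[1]}"     # arrays are zero-indexed
```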
export KUBECONFIG=<path_to_your_kubeconfig>
kubectl get pods -n playground
kubectl cp <path_to_script_in_your_machine>/es-dump.sh playground/<playground_name>:<path_in_playground_pod>/es-dump.sh
# Run the script which takes dump of your elasticsearch data using below command
kubectl exec -it <playground_pod_name> -n playground -- bash
cd <path_to_script_inside_playground_pod>
chmod +x es-dump.sh
./es-dump.sh

# When the playground pod restarts, the data will be lost. To store the data on your local machine, run the below command
kubectl cp playground/<playground_pod_name>:/backup <path_to_store_in_local>/backup
export KUBECONFIG=<path_to_your_kubeconfig>
kubectl get pods -n es-cluster
kubectl exec -it <elasticsearch_data_pod_name> -n es-cluster -- bash
# Replace elasticsearch url
curl -X PUT "<elasticsearch_url>:9200/_cluster/settings?pretty" -H 'Content-Type: application/json' -d'
{
  "persistent": {
    "cluster.routing.allocation.enable": "primaries"
  }
}
'

curl -X POST "<elasticsearch_url>:9200/_flush/synced?pretty"
kubectl get statefulsets -n es-cluster
kubectl scale statefulsets <elasticsearch_master> -n es-cluster --replicas=0
kubectl scale statefulsets <elasticsearch_data> -n es-cluster --replicas=0
# Deprecated environment variables
- env:
  - name: discovery.zen.minimum_master_nodes
    value: "2"
  - name: discovery.zen.ping.unicast.hosts
    value: elasticsearch-master-v1
  - name: node.data
    value: "false"
  - name: node.ingest
    value: "false"
  - name: node.master
    value: "true"
  - name: gateway.expected_master_nodes
    value: "2"
  - name: gateway.expected_data_nodes
    value: "1"
  - name: gateway.recover_after_time
    value: 5m
  - name: gateway.recover_after_master_nodes
    value: "2"
  - name: gateway.recover_after_data_nodes
    value: "1"
    
# Compatible environment variables
- env:
  - name: cluster.initial_master_nodes
    value: elasticsearch-master-v1-0,elasticsearch-master-v1-1,elasticsearch-master-v1-2 
  - name: discovery.seed_hosts
    value: elasticsearch-master-v1-headless
  - name: node.roles
    value: master
# values.yaml

ClusterName: "elasticsearch"
nodeGroup: master-v1
# Deprecated environment variables
- env:
  - name: discovery.zen.ping.unicast.hosts
    value: elasticsearch-master-v1
  - name: node.data
    value: "true"
  - name: node.ingest
    value: "true"
  - name: node.master
    value: "false"
  - name: gateway.expected_master_nodes
    value: "2"
  - name: gateway.expected_data_nodes
    value: "1"
  - name: gateway.recover_after_time
    value: 5m
  - name: gateway.recover_after_master_nodes
    value: "2"
  - name: gateway.recover_after_data_nodes
    value: "1"
  - name: ingest.geoip.downloader.enabled
    value: "false"
    
# Compatible environment variables
- env: 
  - name: discovery.seed_hosts
    value: elasticsearch-master-v1-headless
  - name: node.roles
    value: data,ingest
# values.yaml

ClusterName: "elasticsearch"
nodeGroup: "data-v1"
kubectl scale statefulsets <elasticsearch_master> -n es-cluster --replicas=3
kubectl scale statefulsets <elasticsearch_data> -n es-cluster --replicas=3
# Enter into elasticsearch pod
kubectl exec -it <elasticsearch_data_pod_name> -n es-cluster -- bash

#Run below curl commands
curl -X PUT "<elasticsearch_url>:9200/_cluster/settings?pretty" -H 'Content-Type: application/json' -d'
{
  "persistent": {
    "cluster.routing.allocation.enable": null
  }
}
'

curl -X GET "<elasticsearch_url>:9200/_cat/health?v=true&pretty"
#!/bin/bash

ELASTICSEARCH_URL="<Elasticsearch URL>:9200"
TMP="_tmp"

FILENAME="elasticsearch-indexes.txt"
INDICES=()
while IFS= read -r index; do
    INDICES+=("$index")
done < "$FILENAME"

# Loop through each index and reindex it via a temporary index
for INDEX in "${INDICES[@]}"; do
    sleep 5
    echo -e "Reindex process starting for index: $INDEX\n"
    tmp_index=$INDEX${TMP}
    echo "Starting reindexing elastic data from original index:$INDEX to temporary index:$tmp_index"
    output=$(curl -X POST "${ELASTICSEARCH_URL}/_reindex" --max-time 3600 -H 'Content-Type: application/json' -d'
    {
      "source": {
        "index": "'"$INDEX"'"
      },
      "dest": {
        "index": "'"$tmp_index"'"
      }
    }
    ')
    sleep 5
    echo -e "Reindexing completed from original index:$INDEX to temporary index:$tmp_index with output: $output\n"
    echo -e "Deleting $INDEX\n"
    output=$(curl -X DELETE "${ELASTICSEARCH_URL}/$INDEX")
    echo -e "$INDEX deleted with status: $output\n"
    echo "Starting reindexing elastic data from temporary index:$tmp_index to original index:$INDEX"
    output=$(curl -X POST "${ELASTICSEARCH_URL}/_reindex" --max-time 3600 -H 'Content-Type: application/json' -d'
    {
      "source": {
        "index": "'"$tmp_index"'"
      },
      "dest": {
        "index": "'"$INDEX"'"
      }
    }
    ')
    echo -e "Reindexing completed from temporary index:$tmp_index to original index:$INDEX with output: $output\n"
    echo -e "Deleting $tmp_index\n"
    output=$(curl -X DELETE "${ELASTICSEARCH_URL}/$tmp_index")
    echo -e "$tmp_index deleted with status: $output\n\n\n"
done
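The while-read loop at the top of es-reindex.sh and the `${TMP}` suffix can be exercised standalone; a throwaway file with made-up index names stands in for elasticsearch-indexes.txt:

```shell
# Build the INDICES array from a file, one name per line,
# then derive the temporary index name the script reindexes into.
FILENAME=$(mktemp)
printf '%s\n' property-services water-services > "$FILENAME"

TMP="_tmp"
INDICES=()
while IFS= read -r index; do
    INDICES+=("$index")
done < "$FILENAME"

echo "temporary index for ${INDICES[0]}: ${INDICES[0]}${TMP}"
rm -f "$FILENAME"
```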
export KUBECONFIG=<path_to_your_kubeconfig>
kubectl get pods -n playground
kubectl cp <path_to_script_in_your_machine>/es-reindex.sh playground/<playground_name>:<path_in_playground_pod>/es-reindex.sh
# Run the script which reindexes the indices of your elasticsearch data using the below command
kubectl exec -it <playground_pod_name> -n playground -- bash
cd <path_to_script_inside_playground_pod>
chmod +x es-reindex.sh
./es-reindex.sh
kubectl get statefulsets -n es-cluster
kubectl scale statefulsets <elasticsearch_master> -n es-cluster --replicas=0
kubectl scale statefulsets <elasticsearch_data> -n es-cluster --replicas=0
# Compatible environment variables
# Security is disabled for elasticsearch, by default security is enabled.
- env:
  - name: cluster.initial_master_nodes
    value: elasticsearch-master-v1-0,elasticsearch-master-v1-1,elasticsearch-master-v1-2 
  - name: xpack.security.enabled
    value: "false"
  - name: discovery.seed_hosts
    value: elasticsearch-master-v1-headless
  - name: node.roles
    value: master
# Compatible environment variables
# security is disabled for elasticsearch, by default security is enabled.
- env:
  - name: cluster.initial_master_nodes
    value: elasticsearch-master-v1-0,elasticsearch-master-v1-1,elasticsearch-master-v1-2 
  - name: discovery.seed_hosts
    value: elasticsearch-master-v1-headless
  - name: node.roles
    value: data,ingest
  - name: xpack.security.enabled
    value: "false"
kubectl scale statefulsets <elasticsearch_master> -n es-cluster --replicas=3
kubectl scale statefulsets <elasticsearch_data> -n es-cluster --replicas=3
# Enter into elasticsearch pod
kubectl exec -it <elasticsearch_data_pod_name> -n es-cluster -- bash

#Run below curl commands
curl -X PUT "<elasticsearch_url>:9200/_cluster/settings?pretty" -H 'Content-Type: application/json' -d'
{
  "persistent": {
    "cluster.routing.allocation.enable": null
  }
}
'

curl -X GET "<elasticsearch_url>:9200/_cat/health?v=true&pretty"
DIGIT-DevOps/deploy-as-code/helm/environments/egov-demo.yaml at digit-lts-go · egovernments/DIGIT-DevOpsGitHub
DIGIT-DevOps/deploy-as-code/helm/environments/egov-demo-secrets.yaml at digit-lts-go · egovernments/DIGIT-DevOpsGitHub
DIGIT-DevOps/deploy-as-code/helm/charts/cluster-configs/values.yaml at digit-lts-go · egovernments/DIGIT-DevOpsGitHub
DIGIT-DevOps/deploy-as-code/helm/charts/backbone-services/kafka-connect/values.yaml at unified-env · egovernments/DIGIT-DevOpsGitHub