This page documents how to perform a rolling upgrade of your Elasticsearch cluster.
Steps
Note: Expect some downtime during the rolling upgrade. Before starting, take an elasticdump of the Elasticsearch data from the playground pod using the script provided below.
Copy the script below and save it as es-dump.sh. Replace the Elasticsearch URL and the index names in the script.
#!/bin/bash
#es-dump.sh
# Replace Elasticsearch cluster URL in ELASTICSEARCH_URL
ELASTICSEARCH_URL="http://<elasticsearch URL>:9200"
# Provide the indices to take dump
EXCLUDE_INDEX_PATTERN="jaeger|monitor|kibana|fluentbit"
# Provide backup directory
BACKUP_DIR="backup"
# Provide indices output file
INDICES_OUTPUT="elasticsearch-indexes.txt"
mapfile -t INDICES < <(curl -s "${ELASTICSEARCH_URL}/_cat/indices" | grep -v -E "(${EXCLUDE_INDEX_PATTERN})" | awk '{print $3}')
printf "%s\n" "${INDICES[@]}" > "$INDICES_OUTPUT"
# Create backup directory if it doesn't exist
mkdir -p "$BACKUP_DIR"
# Loop through each index and perform export
for INDEX in "${INDICES[@]}"; do
OUTPUT_FILE="${BACKUP_DIR}/${INDEX}_mapping_backup.json"
# Build the elasticdump command
ELASTICDUMP_CMD="elasticdump \
--input=${ELASTICSEARCH_URL}/${INDEX} \
--output=${OUTPUT_FILE} \
--type=mapping"
# Execute the elasticdump command
$ELASTICDUMP_CMD
# Check if the elasticdump command was successful
if [ $? -eq 0 ]; then
echo "Backup of index ${INDEX} mapping completed successfully."
else
echo "Error backing up index ${INDEX}."
fi
done
for INDEX in "${INDICES[@]}"; do
OUTPUT_FILE="${BACKUP_DIR}/${INDEX}_data_backup.json"
# Build the elasticdump command
ELASTICDUMP_CMD="elasticdump \
--input=${ELASTICSEARCH_URL}/${INDEX} \
--output=${OUTPUT_FILE} \
--type=data \
--timeout=300000 \
--limit=10000 \
--skip-existing"
# Execute the elasticdump command
$ELASTICDUMP_CMD
# Check if the elasticdump command was successful
if [ $? -eq 0 ]; then
echo "Backup of index ${INDEX} completed successfully."
else
echo "Error backing up index ${INDEX}."
fi
done
Now, run the below commands to execute the script inside the playground pod.
# Run the script which takes a dump of your Elasticsearch data
kubectl exec -it <playground_pod_name> -n playground -- bash
cd <path_to_script_inside_playground_pod>
chmod +x es-dump.sh
./es-dump.sh
# The dump is lost when the playground pod restarts, so copy it to your local machine:
kubectl cp playground/<playground_pod_name>:/backup <path_to_store_in_local>/backup
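As a quick sanity check, the exclude-pattern filter used in es-dump.sh can be exercised offline against sample `_cat/indices` output. The index names and file names below are made up for illustration only:

```shell
#!/bin/bash
# Standalone demonstration of the exclude-pattern filter from es-dump.sh.
EXCLUDE_INDEX_PATTERN="jaeger|monitor|kibana|fluentbit"

# Sample _cat/indices output: the third column is the index name.
cat > sample-indices.txt <<'EOF'
green open app-logs-2024.01 uuid1 1 1
green open jaeger-span-2024-01-01 uuid2 1 1
green open .kibana_1 uuid3 1 1
green open orders uuid4 1 1
EOF

# Drop lines matching the exclude pattern, then print column 3 (the index name).
grep -v -E "(${EXCLUDE_INDEX_PATTERN})" sample-indices.txt | awk '{print $3}' > elasticsearch-indexes.txt

# Shows only the indices that will be dumped
cat elasticsearch-indexes.txt
```

Only app-logs-2024.01 and orders survive the filter; the jaeger and kibana indices are excluded, which is the behavior the backup script relies on.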
Rolling upgrade from v6.6.2 to v7.17.15
Steps
List the Elasticsearch pods and open a shell in one of them.
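For example, assuming the es-cluster namespace used by the exec commands later in this guide:

```shell
kubectl get pods -n es-cluster
kubectl exec -it <elasticsearch_data_pod_name> -n es-cluster -- bash
```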
Disable shard allocation: you can avoid racing the clock by disabling the allocation of replicas before shutting down data nodes. Stop non-essential indexing and perform a synced flush: while you can continue indexing during the upgrade, shard recovery is much faster if you temporarily stop non-essential indexing and perform a synced flush. Run the below curl commands inside the Elasticsearch data pod.
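A sketch of those curl commands, assuming the same `<elasticsearch_url>` placeholder used elsewhere in this guide (the synced flush API is available in v6.x):

```shell
# Disable replica shard allocation before restarting data nodes
curl -X PUT "<elasticsearch_url>:9200/_cluster/settings?pretty" -H 'Content-Type: application/json' -d'
{
  "persistent": {
    "cluster.routing.allocation.enable": "primaries"
  }
}
'
# Perform a synced flush to speed up shard recovery
curl -X POST "<elasticsearch_url>:9200/_flush/synced?pretty"
```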
Edit the StatefulSet of the Elasticsearch master: replace the Docker image, remove the deprecated environment variables, and add the compatible ones. Change the Elasticsearch image tag from 6.6.2 to 7.17.15. The code below provides the deprecated and compatible environment variables.
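A sketch of the master-node variable changes, reusing the node names from the v8.11.3 section later in this guide. The deprecated 6.x variables and their values shown here are assumptions (typical Zen-discovery settings); check them against your existing StatefulSet, and note that node.roles requires v7.9 or later:

```yaml
# Deprecated 6.x environment variables (remove)
- name: discovery.zen.ping.unicast.hosts
  value: elasticsearch-master-v1-headless
- name: discovery.zen.minimum_master_nodes
  value: "2"
- name: node.master
  value: "true"
# Compatible 7.x environment variables (add)
- name: cluster.initial_master_nodes
  value: elasticsearch-master-v1-0,elasticsearch-master-v1-1,elasticsearch-master-v1-2
- name: discovery.seed_hosts
  value: elasticsearch-master-v1-headless
- name: node.roles
  value: master
```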
Edit the StatefulSet of the Elasticsearch data nodes: replace the Docker image, remove the deprecated environment variables, and add the compatible ones. Change the Elasticsearch image tag from 6.6.2 to 7.17.15.
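A sketch of the data-node variable changes, under the same assumptions as the master-node fragment (the deprecated 6.x values are illustrative; verify them against your existing StatefulSet):

```yaml
# Deprecated 6.x environment variables (remove)
- name: discovery.zen.ping.unicast.hosts
  value: elasticsearch-master-v1-headless
- name: node.data
  value: "true"
- name: node.ingest
  value: "true"
# Compatible 7.x environment variables (add)
- name: cluster.initial_master_nodes
  value: elasticsearch-master-v1-0,elasticsearch-master-v1-1,elasticsearch-master-v1-2
- name: discovery.seed_hosts
  value: elasticsearch-master-v1-headless
- name: node.roles
  value: data,ingest
```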
After all pods are in the running state, re-enable shard allocation and check the cluster health.
# Enter into the Elasticsearch pod
kubectl exec -it <elasticsearch_data_pod_name> -n es-cluster -- bash
# Run the below curl commands
curl -X PUT "<elasticsearch_url>:9200/_cluster/settings?pretty" -H 'Content-Type: application/json' -d'
{
"persistent": {
"cluster.routing.allocation.enable": null
}
}
'
curl -X GET "<elasticsearch_url>:9200/_cat/health?v=true&pretty"
You have successfully upgraded the Elasticsearch cluster from v6.6.2 to v7.17.15 :)
Reindexing the Indices
After successfully upgrading Elasticsearch, use the script below to reindex any indices that were created in v6.6.2 or earlier.
Copy the script below and save it as es-reindex.sh. Replace the Elasticsearch URL in the script.
#!/bin/bash
ELASTICSEARCH_URL="<Elasticsearch URL>:9200"
TMP="_tmp"
FILENAME="elasticsearch-indexes.txt"
INDICES=()
while IFS= read -r index; do
INDICES+=("$index")
done < "$FILENAME"
# Reindex each index read from the indexes file
for INDEX in "${INDICES[@]}"; do
sleep 5
echo -e "Reindex process starting for index: $INDEX\n"
tmp_index=$INDEX${TMP}
echo "Starting reindexing elastic data from original index:$INDEX to temporary index:$tmp_index"
output=$(curl -X POST "${ELASTICSEARCH_URL}/_reindex" --max-time 3600 -H 'Content-Type: application/json' -d'
{
"source": {
"index": "'"$INDEX"'"
},
"dest": {
"index": "'"$tmp_index"'"
}
}
')
sleep 5
echo -e "Reindexing completed from original index:$INDEX to temporary index:$tmp_index with output: $output\n"
echo -e "Deleting $INDEX\n"
output=$(curl -X DELETE "${ELASTICSEARCH_URL}/$INDEX")
echo -e "$INDEX deleted with status: $output\n"
echo "Starting reindexing elastic data from temporary index:$tmp_index to original index:$INDEX"
output=$(curl -X POST "${ELASTICSEARCH_URL}/_reindex" --max-time 3600 -H 'Content-Type: application/json' -d'
{
"source": {
"index": "'"$tmp_index"'"
},
"dest": {
"index": "'"$INDEX"'"
}
}
')
echo -e "Reindexing completed from temporary index:$tmp_index to original index:$INDEX with output: $output\n"
echo -e "Deleting $tmp_index\n"
output=$(curl -X DELETE "${ELASTICSEARCH_URL}/$tmp_index")
echo -e "$tmp_index deleted with status: $output\n\n\n"
done
Now, run the below commands to execute the script inside the playground pod.
# Run the script which reindexes the indices of your Elasticsearch data
kubectl exec -it <playground_pod_name> -n playground -- bash
cd <path_to_script_inside_playground_pod>
chmod +x es-reindex.sh
./es-reindex.sh
NOTE: Make sure to delete the jaeger indices, as their mapping is not supported in v8.11.3, and to reindex any indices created before v7.17.15. If indices created in v6.6.2 or earlier are still present, the upgrade from v7.17.15 to v8.11.3 may fail.
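A sketch of that cleanup, assuming the jaeger indices use the standard jaeger-* naming and that wildcard deletes are permitted on the cluster. The second command lists the numeric version.created code for every index (for example, 6060299 corresponds to v6.6.2):

```shell
# Delete the jaeger indices; their mappings are not supported in v8.11.3
curl -X DELETE "<elasticsearch_url>:9200/jaeger-*"
# List the Elasticsearch version that created each index; anything created
# before v7.x must be reindexed or deleted before upgrading to v8.x
curl -s "<elasticsearch_url>:9200/_all/_settings?filter_path=*.settings.index.version.created&pretty"
```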
Rolling upgrade from v7.17.15 to v8.11.3 (security disabled)
Steps
List the Elasticsearch pods and open a shell in one of them.
Disable shard allocation: you can avoid racing the clock by disabling the allocation of replicas before shutting down data nodes. Stop non-essential indexing and flush: while you can continue indexing during the upgrade, shard recovery is much faster if you temporarily stop non-essential indexing and perform a flush. Run the below curl commands inside the Elasticsearch data pod.
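A sketch of those commands, using the same `<elasticsearch_url>` placeholder as elsewhere in this guide; the synced flush API is deprecated in v7.x, so a plain flush is used instead:

```shell
# Disable replica shard allocation before restarting data nodes
curl -X PUT "<elasticsearch_url>:9200/_cluster/settings?pretty" -H 'Content-Type: application/json' -d'
{
  "persistent": {
    "cluster.routing.allocation.enable": "primaries"
  }
}
'
# Synced flush is deprecated in v7.x; perform a normal flush instead
curl -X POST "<elasticsearch_url>:9200/_flush?pretty"
```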
Edit the StatefulSet of the Elasticsearch master: replace the Docker image, remove deprecated environment variables, and add compatible ones. Change the Elasticsearch image tag from 7.17.15 to 8.11.3. The code below provides the compatible environment variables; if you are following a rolling upgrade, there are no deprecated environment variables to remove between v7.17.15 and v8.11.3.
# Compatible environment variables
# Security is disabled for Elasticsearch here; in v8.x, security is enabled by default.
- env:
    - name: cluster.initial_master_nodes
      value: elasticsearch-master-v1-0,elasticsearch-master-v1-1,elasticsearch-master-v1-2
    - name: xpack.security.enabled
      value: "false"
    - name: discovery.seed_hosts
      value: elasticsearch-master-v1-headless
    - name: node.roles
      value: master
Edit the StatefulSet of the Elasticsearch data nodes: replace the Docker image, remove deprecated environment variables, and add compatible ones. Change the Elasticsearch image tag from 7.17.15 to 8.11.3.
# Compatible environment variables
# Security is disabled for Elasticsearch here; in v8.x, security is enabled by default.
- env:
    - name: cluster.initial_master_nodes
      value: elasticsearch-master-v1-0,elasticsearch-master-v1-1,elasticsearch-master-v1-2
    - name: discovery.seed_hosts
      value: elasticsearch-master-v1-headless
    - name: node.roles
      value: data,ingest
    - name: xpack.security.enabled
      value: "false"
After making the changes, scale the Elasticsearch data and master StatefulSets back up.
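As in the v6.6.2-to-v7.17.15 upgrade above, once all pods are back in the running state, re-enable shard allocation and check the cluster health:

```shell
# Re-enable shard allocation by resetting the setting to its default
curl -X PUT "<elasticsearch_url>:9200/_cluster/settings?pretty" -H 'Content-Type: application/json' -d'
{
  "persistent": {
    "cluster.routing.allocation.enable": null
  }
}
'
# Verify the cluster health returns to green
curl -X GET "<elasticsearch_url>:9200/_cat/health?v=true&pretty"
```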