FAQ
This page provides guidance on resolving common issues.
1. Pod in Init:Error State
Possible Causes
Git Sync Container Failure
Occurs when a service needs to sync configuration from Git (e.g., persister, indexer, or MDMS configuration).
Common reasons:
Incorrect Git SSH keys.
Extra spaces, invalid characters, or indentation issues in the SSH key.
DB Migration Container Failure
Occurs when Flyway migrations fail during initialization.
Common reasons:
Database connection issues.
Invalid or corrupted Flyway SQL migration files.
Resolution Steps
Check the pod description:
kubectl describe pod <pod_name> -n <namespace>
Review container logs for detailed error messages:
kubectl logs <pod_name> -n <namespace> -c git-sync
kubectl logs <pod_name> -n <namespace> -c db-migration
Fix the underlying issue (e.g., update SSH keys, correct SQL files).
Restart the pod once resolved.
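When the cause is a malformed SSH key, recreating the Secret that git-sync reads is the usual fix. The sketch below shows one way such a Secret could look; the Secret name, key name, and namespace are illustrative and must match whatever your deployment actually references:

```yaml
# Illustrative Secret carrying the git-sync SSH key. Using stringData with a
# block scalar helps avoid the extra spaces or indentation that break the key.
apiVersion: v1
kind: Secret
metadata:
  name: git-creds        # hypothetical name
  namespace: egov        # replace with your namespace
type: Opaque
stringData:
  ssh: |
    -----BEGIN OPENSSH PRIVATE KEY-----
    ...key material, with no trailing spaces or extra indentation...
    -----END OPENSSH PRIVATE KEY-----
```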
2. Pod in Init:ConfigMapKeyMissing State
Cause
This error occurs when a pod tries to read a configuration key that does not exist in the referenced ConfigMap or Secret.
Example:
- name: STATE_LEVEL_TENANT_ID
  valueFrom:
    configMapKeyRef:
      name: egov-config
      key: state-level-tenant-id
If the key state-level-tenant-id is not defined in the egov-config ConfigMap, the pod will fail with this state. Add the missing key to the ConfigMap and redeploy the service.
Resolution Steps
Describe the pod to identify the missing key:
kubectl describe pod <pod_name> -n <namespace>
Locate the missing key/secret in the output.
Update the ConfigMap/Secret with the required key-value pair:
kubectl edit configmap egov-config -n <namespace>
Restart the pod to apply the changes.
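For the example above, the repaired ConfigMap would carry the referenced key. The value shown is a placeholder for your actual state-level tenant ID, and the namespace is an assumption:

```yaml
# egov-config ConfigMap with the key the pod references added under data.
apiVersion: v1
kind: ConfigMap
metadata:
  name: egov-config
  namespace: egov              # replace with your namespace
data:
  state-level-tenant-id: "pg"  # placeholder tenant ID
```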
3. Pod in Init:CrashLoopBackOff State
Cause
This state indicates that one or more init containers are repeatedly crashing. Common failing containers include:
git-sync (Git configuration issues)
db-migration (database/Flyway issues)
Service Dependency Failure
A service pod may fail if it depends on another service that is not yet running.
Example:
The User service requires MDMS data to initialize.
If the MDMS service is not up, the User service will crash repeatedly.
Once the MDMS service is running, the dependent User service will start successfully.
Resolution Steps
Describe the pod to identify the failing container:
kubectl describe pod <pod_name> -n <namespace>
Check logs of the specific container:
kubectl logs <pod_name> -n <namespace> -c git-sync
kubectl logs <pod_name> -n <namespace> -c db-migration
kubectl logs <pod_name> -n <namespace> -f
Verify dependencies:
Ensure required services (e.g., MDMS) are deployed and running.
Check connectivity between services.
Fix the reported error (e.g., database connectivity, missing configuration, Git issues, or a service dependency failure).
Restart the pod once the issue is resolved.
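One common way to handle the MDMS dependency described above is an init container that blocks until the dependency responds. This is a sketch only: the image tag, service name, port, and health path are assumptions and must be adjusted to your cluster:

```yaml
# Sketch: delay the dependent service's startup until MDMS answers.
# "egov-mdms-service" and "/health" are hypothetical; verify against
# your actual Service name and health endpoint.
initContainers:
  - name: wait-for-mdms
    image: curlimages/curl:8.8.0
    command:
      - sh
      - -c
      - |
        until curl -sf http://egov-mdms-service:8080/health; do
          echo "waiting for MDMS..."
          sleep 5
        done
```

With this in place, the pod stays in Init until MDMS is reachable instead of crash-looping in the main container.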
✅ Tip: Always start by running kubectl describe pod to identify the exact cause before applying fixes.