FAQ

This page provides guidance on resolving common pod initialization issues on Kubernetes.

1. Pod in Init:Error State

Possible Causes

  • Git Sync Container Failure

    • Occurs when a service needs to sync configuration from Git (e.g., Persister, Indexer, or MDMS configs).

    • Common reasons:

      • Incorrect Git SSH keys.

      • Extra spaces, invalid characters, or indentation issues in the SSH key (see the verification sketch after this list).

  • DB Migration Container Failure

    • Occurs when Flyway migrations fail during initialization.

    • Common reasons:

      • Database connection issues.

      • Invalid or corrupted Flyway SQL migration files.
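
Note: Flyway migration files must follow the V<version>__<description>.sql naming convention (e.g., V1__init.sql), and editing a migration that has already been applied will fail checksum validation.

For the Git sync case, a quick way to verify the SSH key is to decode it from the secret and inspect it for stray whitespace or truncation. This is a minimal sketch; the secret name git-creds and data key ssh follow the common git-sync convention and may differ in your setup:

    # Decode the SSH key and check for extra spaces or missing lines
    kubectl get secret git-creds -n <namespace> -o jsonpath='{.data.ssh}' | base64 -d

    # Recreate the secret from a known-good key file if it is malformed
    kubectl create secret generic git-creds -n <namespace> \
      --from-file=ssh=/path/to/id_rsa \
      --dry-run=client -o yaml | kubectl apply -f -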

Resolution Steps

  1. Check the pod description:

    kubectl describe pod <pod_name> -n <namespace>
  2. Review container logs for detailed error messages:

    kubectl logs <pod_name> -n <namespace> -c git-sync
    kubectl logs <pod_name> -n <namespace> -c db-migration
  3. Fix the underlying issue (e.g., update SSH keys, correct SQL files).

  4. Restart the pod once resolved.
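
For step 4, either delete the pod (its Deployment or StatefulSet will recreate it) or restart the whole workload. A minimal sketch; the deployment name is a placeholder:

    kubectl delete pod <pod_name> -n <namespace>
    # or
    kubectl rollout restart deployment <deployment_name> -n <namespace>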


2. Pod in Init:ConfigMapKeyMissing State

Cause

This error occurs when a pod tries to read a configuration key that does not exist in the referenced ConfigMap or Secret.

Example:

- name: STATE_LEVEL_TENANT_ID
  valueFrom:
    configMapKeyRef:
      name: egov-config
      key: state-level-tenant-id

If the key state-level-tenant-id is not defined in the egov-config ConfigMap, the pod will fail with this state. Add the missing key to the ConfigMap and redeploy the service.
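
As a sketch, the key can also be added non-interactively with kubectl patch; the value <tenant_id> is a placeholder:

    kubectl patch configmap egov-config -n <namespace> --type merge \
      -p '{"data":{"state-level-tenant-id":"<tenant_id>"}}'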

Resolution Steps

  1. Describe the pod to identify the missing key:

    kubectl describe pod <pod_name> -n <namespace>
  2. Locate the missing key/secret in the output.

  3. Update the ConfigMap/Secret with the required key-value pair:

    kubectl edit configmap egov-config -n <namespace>
  4. Restart the pod to apply the changes.
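
To confirm the key is present before restarting, read it back with jsonpath:

    kubectl get configmap egov-config -n <namespace> \
      -o jsonpath='{.data.state-level-tenant-id}'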


3. Pod in Init:CrashLoopBackOff State

Cause

This state indicates that one or more init containers are repeatedly crashing. Common causes include:

  • git-sync (Git configuration issues)

  • db-migration (database/Flyway issues)

  • Service dependency failure

    • A service pod may fail if it depends on another service that is not yet running.

    • Example:

      • The User service requires MDMS data to initialize.

      • If the MDMS service is not up, the User service will crash repeatedly.

      • Once the MDMS service is running, the dependent User service will start successfully.
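
A quick way to confirm the dependency is up before touching the failing pod; the deployment name egov-mdms-service is an assumption, match it to your cluster:

    kubectl get deployment egov-mdms-service -n <namespace>
    kubectl get pods -n <namespace> | grep mdms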

Resolution Steps

  1. Describe the pod to identify the failing container:

    kubectl describe pod <pod_name> -n <namespace>
  2. Check logs of the specific container (add --previous to view logs from the last crashed attempt):

    kubectl logs <pod_name> -n <namespace> -c git-sync
    kubectl logs <pod_name> -n <namespace> -c db-migration
    kubectl logs <pod_name> -n <namespace> -f
  3. Verify dependencies:

    • Ensure required services (e.g., MDMS) are deployed and running.

    • Check connectivity between services (see the sketch after this list).

  4. Fix the reported error (e.g., database connectivity, missing configs, Git issues, service dependency failures).

  5. Restart the pod once the issue is resolved.
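
For the connectivity check in step 3, a throwaway pod can probe the dependent service from inside the cluster. A minimal sketch; the service name, port, and health path are placeholders:

    kubectl run tmp-probe --rm -it --restart=Never --image=busybox -n <namespace> -- \
      wget -qO- http://<service_name>:<port>/health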


Tip: Always start by running kubectl describe pod to identify the exact cause before applying fixes.
