
Kubernetes ImagePullBackOff: Causes and Fixes

Why your pods can't pull images and how to fix it. Covers registry authentication, image names, network issues, and private registries.

Paul Brissaud
4 min read
#troubleshooting #pods

Your pod is stuck. kubectl get pods shows ImagePullBackOff or ErrImagePull, and the container never starts. This means Kubernetes tried to pull the container image and failed. The good news: the error messages are usually very specific about what went wrong.

Quick Answer

Kubernetes can't download the container image. To find out why:

kubectl describe pod <pod-name>

Scroll to the Events section. You'll see something like:

Warning  Failed   kubelet  Failed to pull image "myapp:v2":
  rpc error: code = NotFound desc = failed to pull and unpack image:
  manifest for docker.io/library/myapp:v2 not found

The message tells you the exact problem — wrong image name, missing tag, authentication failure, or network issue.


ImagePullBackOff vs ErrImagePull

These two statuses are two phases of the same failure:

ErrImagePull means the most recent pull attempt just failed. ImagePullBackOff means the kubelet is backing off, i.e. waiting before it retries. ErrImagePull always comes first; after a few failed attempts, the status transitions to ImagePullBackOff, and the delay doubles with each retry: 10s, 20s, 40s, up to a cap of 5 minutes. The pod keeps retrying indefinitely, but the issue won't resolve itself unless you fix the root cause.
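That retry schedule can be sketched as a doubling delay with a cap (illustrative only, assuming the 10s starting delay and 5-minute cap described above; the kubelet's exact implementation may differ):

```shell
# Illustrative only: reproduce the retry delays described above
# (start at 10s, double each time, cap at 300s / 5 minutes).
delay=10
schedule=""
while [ "$delay" -lt 300 ]; do
  schedule="$schedule ${delay}s"
  delay=$((delay * 2))
done
schedule="$schedule 300s"
schedule="${schedule# }"   # trim leading space
echo "backoff: $schedule"
# → backoff: 10s 20s 40s 80s 160s 300s
```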


Common Causes

  • Wrong image name or tag: the referenced image simply doesn't exist
  • Private registry authentication: missing or invalid imagePullSecrets
  • Docker Hub rate limits: too many anonymous pulls
  • Network or DNS issues: the node can't reach the registry
  • Architecture mismatch: no image variant for the node's CPU architecture

Each of these is covered in detail below.


Step-by-Step Troubleshooting

Step 1: Read the Error Message

The most important step. Always start here:

kubectl describe pod <pod-name>

Look at the Events section. The Failed to pull image message tells you exactly what happened. Read the full message — the specific error code and description point to the cause.
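To show how mechanical this mapping is, here is a hypothetical helper (not part of kubectl; pure string matching on the event message) that sorts a pull error into the causes covered below:

```shell
# Hypothetical helper: map a "Failed to pull image" event message to a
# likely cause. Pure shell string matching; no cluster access needed.
classify_pull_error() {
  case "$1" in
    *"not found"*|*"manifest unknown"*|*"repository does not exist"*)
      echo "wrong image name or tag" ;;
    *unauthorized*|*"401"*|*"no basic auth credentials"*)
      echo "registry authentication" ;;
    *toomanyrequests*|*"429"*)
      echo "rate limit" ;;
    *"no such host"*|*timeout*|*"connection refused"*)
      echo "network or DNS" ;;
    *"no matching manifest"*)
      echo "architecture mismatch" ;;
    *) echo "unknown" ;;
  esac
}

classify_pull_error "manifest for docker.io/library/myapp:v2 not found"
# → wrong image name or tag
```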

Step 2: Verify the Image Exists

Test whether the image is actually pullable from your local machine or a node:

# For Docker Hub images
docker pull myapp:v2

# For private registries
docker pull registry.example.com/myapp:v2

# Check available tags on Docker Hub
curl -s https://hub.docker.com/v2/repositories/library/nginx/tags/?page_size=10 | jq '.results[].name'

If docker pull fails on your machine too, the image genuinely doesn't exist.

Step 3: Check the Image Reference

Verify exactly what the pod is trying to pull:

kubectl get pod <pod-name> -o jsonpath='{.spec.containers[0].image}'

Common issues:

  • Missing registry prefix: myapp:v1 pulls from Docker Hub (docker.io/library/myapp:v1), not your private registry
  • Wrong tag: latest might not exist if you only pushed versioned tags
  • Architecture mismatch: the image might exist but not for your node's architecture (amd64 vs arm64)
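To see why the registry prefix matters, here is a rough sketch of how a short image name gets expanded (simplified; real runtimes also handle registries with ports and multi-level paths, which this toy version does not):

```shell
# Simplified sketch of short-name expansion: no tag implies :latest, and no
# registry implies Docker Hub's docker.io/library/ namespace.
expand_image_ref() {
  ref="$1"
  case "$ref" in
    *:*) ;;                       # already has a tag
    *)   ref="$ref:latest" ;;     # default tag
  esac
  case "$ref" in
    */*) ;;                               # already has a registry or namespace
    *)   ref="docker.io/library/$ref" ;;  # Docker Hub default
  esac
  echo "$ref"
}

expand_image_ref myapp:v1
# → docker.io/library/myapp:v1
```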

Step 4: Check imagePullSecrets

    If you're using a private registry:

    # Check if the pod has an imagePullSecret
    kubectl get pod <pod-name> -o jsonpath='{.spec.imagePullSecrets}'
    
    # Check if the secret exists
    kubectl get secret <secret-name> -n <namespace>
    
    # Verify the secret content (decode and check)
    kubectl get secret <secret-name> -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d | jq
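
If the decoded secret looks wrong, it helps to know what a correct one contains. Here is a sketch of the payload that kubectl create secret docker-registry generates (example credentials; the auth field is just base64 of user:password):

```shell
# Rebuild a .dockerconfigjson payload by hand to compare against the one
# stored in the secret. Example values, not real credentials.
user="user"
password="password"
server="registry.example.com"
auth=$(printf '%s:%s' "$user" "$password" | base64)
printf '{"auths":{"%s":{"auth":"%s"}}}\n' "$server" "$auth"
```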

    Solutions by Cause

    Cause A: Image Doesn't Exist or Wrong Tag

    Symptoms: manifest unknown, not found, or repository does not exist.

    The image name or tag is simply wrong.

    Fix: Correct the image reference in your deployment:

    containers:
    - name: myapp
      image: myapp:v1  # Fix the tag to one that exists

    Always use explicit tags. Avoid latest in production — it's ambiguous and can be overwritten at any time.

    # Verify the tag exists before deploying
    docker manifest inspect myapp:v1

    Cause B: Private Registry Authentication

    Symptoms: unauthorized, 401 Unauthorized, or no basic auth credentials.

    The image is in a private registry and Kubernetes doesn't have credentials.

    Fix step 1 — Create an imagePullSecret:

    kubectl create secret docker-registry regcred \
      --docker-server=registry.example.com \
      --docker-username=user \
      --docker-password=password \
      -n <namespace>

    Fix step 2 — Reference it in the pod spec:

    spec:
      imagePullSecrets:
      - name: regcred
      containers:
      - name: myapp
        image: registry.example.com/myapp:v1

    Pro tip: To avoid adding imagePullSecrets to every deployment, attach the secret to the default ServiceAccount:

    kubectl patch serviceaccount default -n <namespace> \
      -p '{"imagePullSecrets": [{"name": "regcred"}]}'

    Every pod using the default ServiceAccount in that namespace will automatically use the credentials.
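For reference, the same result expressed declaratively (a sketch of the resulting ServiceAccount; the namespace name is a placeholder):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
  namespace: my-namespace   # placeholder
imagePullSecrets:
- name: regcred
```

The imperative patch above is typically the safer way to modify the auto-created default ServiceAccount, since it changes only the imagePullSecrets field.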

    Cause C: Docker Hub Rate Limits

    Symptoms: toomanyrequests, 429 Too Many Requests, or You have reached your pull rate limit.

    Docker Hub limits anonymous pulls to 100 per 6 hours, and authenticated free accounts to 200 per 6 hours.

    Fix option 1 — Authenticate to Docker Hub to get a higher limit:

    kubectl create secret docker-registry dockerhub-cred \
      --docker-server=https://index.docker.io/v1/ \
      --docker-username=<user> \
      --docker-password=<token>

Fix option 2 — Set up a pull-through cache (Harbor, Nexus, or a registry mirror) so your nodes don't hit Docker Hub directly. On Docker-based nodes, configure the mirror in /etc/docker/daemon.json:

    {
      "registry-mirrors": ["https://mirror.example.com"]
    }
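
On containerd-based nodes (the common case for modern clusters) the daemon.json approach doesn't apply; instead, a mirror can be declared per upstream registry, for example in /etc/containerd/certs.d/docker.io/hosts.toml (a sketch; check your containerd version's registry configuration docs for the exact layout it expects):

```toml
server = "https://registry-1.docker.io"

[host."https://mirror.example.com"]
  capabilities = ["pull", "resolve"]
```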

    Fix option 3 — Use imagePullPolicy: IfNotPresent so nodes reuse cached images instead of pulling every time:

    containers:
    - name: myapp
      image: myapp:v1
      imagePullPolicy: IfNotPresent

This is already the default for images with an explicit, non-latest tag, but it's worth being explicit.

    Cause D: Network or DNS Issues

    Symptoms: dial tcp: lookup registry.example.com: no such host, i/o timeout, or connection refused.

    The node can't reach the registry at the network level.

    Diagnose:

# Test DNS resolution from inside the cluster
kubectl run debug --rm -it --image=busybox -- nslookup registry.example.com

# Test connectivity
kubectl run debug --rm -it --image=busybox -- wget -qO- https://registry.example.com/v2/

Keep in mind this tests from a pod. Image pulls are performed by the kubelet on the node, which uses the node's DNS resolver rather than cluster DNS, so also verify resolution and connectivity from the node itself (e.g. over SSH).

Common network causes:

  • Egress to the registry blocked by a firewall, security group, or network policy
  • Node-level DNS misconfiguration (bad upstream in the node's resolv.conf)
  • A required HTTP(S) proxy that isn't configured for the container runtime
  • Air-gapped clusters that can only reach an internal registry

    Cause E: Architecture Mismatch

Symptoms: no matching manifest for linux/arm64, or the pull succeeds but the container exits immediately (often with exec format error in its logs).

    This happens when you're running ARM nodes (like AWS Graviton or Apple Silicon) but the image only has an amd64 build.

    Fix: Build and push multi-architecture images:

    docker buildx build --platform linux/amd64,linux/arm64 -t myapp:v1 --push .

Or, if you have mixed-architecture nodes, pin the workload to nodes whose architecture matches the image:

    spec:
      nodeSelector:
        kubernetes.io/arch: amd64

    Understanding imagePullPolicy

The imagePullPolicy field controls when Kubernetes pulls images:

  • Always: pull the image every time a container starts
  • IfNotPresent: pull only if the image isn't already cached on the node
  • Never: never pull; fail if the image isn't already on the node

If you omit imagePullPolicy, Kubernetes sets the default based on how you reference the image:

  • Tag :latest, or no tag at all: defaults to Always
  • Any other tag, or a digest: defaults to IfNotPresent

This is an important detail: if you forget to set a tag, Kubernetes defaults to :latest and sets imagePullPolicy: Always, which means every pod start hits the registry. Using explicit versioned tags or digests avoids this and gives you IfNotPresent by default — faster pod starts and less registry traffic.
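The defaulting rule is easy to mis-remember, so here it is as a tiny executable sketch (illustrative only, mirroring the rules above; not how the API server actually implements it):

```shell
# Mirror of the imagePullPolicy defaulting rules described above.
default_pull_policy() {
  case "$1" in
    *@sha256:*) echo "IfNotPresent" ;;  # digest reference
    *:latest)   echo "Always" ;;        # explicit :latest
    *:*)        echo "IfNotPresent" ;;  # any other explicit tag
    *)          echo "Always" ;;        # no tag at all -> :latest
  esac
}

default_pull_policy myapp:v1.2.3
# → IfNotPresent
```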

Note that imagePullPolicy is immutable on a running pod. To change it, update the pod template in your Deployment (or delete and recreate the pod) so that Kubernetes creates new pods with the new policy.


    Debugging Decision Tree

    ImagePullBackOff
    │
    ├─ kubectl describe pod → Read the error message
    │
    ├─ "manifest not found" / "repository does not exist"
    │  → Verify image name and tag exist
    │
    ├─ "unauthorized" / "401"
    │  → Create imagePullSecret → attach to pod or ServiceAccount
    │
    ├─ "toomanyrequests" / "429"
    │  → Authenticate to registry or set up a mirror
    │
    ├─ "no such host" / "timeout"
    │  → Check node DNS and network connectivity to registry
    │
    └─ "no matching manifest for linux/arch"
       → Build multi-arch image or pin nodeSelector to matching arch

    Prevention Tips

  • Use explicit image tags — Never deploy with :latest in production. Use versioned tags like myapp:v1.2.3 or SHA digests like myapp@sha256:abc...
  • Set up imagePullSecrets at the ServiceAccount level — Less error-prone than adding them to every deployment
  • Mirror public registries — A pull-through cache protects you from rate limits and registry outages
  • Validate images in CI — Run docker manifest inspect in your pipeline before deploying to catch missing images early
  • Use imagePullPolicy: IfNotPresent — Reduces pull failures and speeds up pod starts for immutable tags
  • Monitor for ImagePullBackOff — Alert on kube_pod_container_status_waiting_reason{reason="ImagePullBackOff"} in Prometheus
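For the last tip, a sketch of what such an alert could look like as a Prometheus rule file (the threshold and duration are arbitrary examples; the metric comes from kube-state-metrics):

```yaml
groups:
- name: image-pull
  rules:
  - alert: ImagePullBackOff
    expr: kube_pod_container_status_waiting_reason{reason="ImagePullBackOff"} > 0
    for: 10m
    labels:
      severity: warning
    annotations:
      summary: "Pod {{ $labels.namespace }}/{{ $labels.pod }} is stuck in ImagePullBackOff"
```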
Written by

    Paul Brissaud


    Paul Brissaud is a DevOps / Platform Engineer and the creator of Kubeasy. He believes Kubernetes education is often too theoretical and that real understanding comes from hands-on, failure-driven learning.
