
minikube pull k8s image error #17107

Closed
mahmut-Abi opened this issue Aug 22, 2023 · 13 comments

Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@mahmut-Abi
Contributor

### What Happened?

When I execute minikube start --image-mirror-country=cn --driver=podman --force and then run minikube kubectl -- get pod -A, I get:

NAMESPACE              NAME                                         READY   STATUS              RESTARTS        AGE
kube-system            coredns-65c54cc984-v9gf8                     0/1     ContainerCreating   0               9m33s
kube-system            etcd-minikube                                1/1     Running             0               9m48s
kube-system            kindnet-wfvpq                                0/1     ImagePullBackOff    0               9m33s
kube-system            kube-apiserver-minikube                      1/1     Running             0               9m41s
kube-system            kube-controller-manager-minikube             1/1     Running             0               9m41s
kube-system            kube-proxy-4ch46                             1/1     Running             0               9m33s
kube-system            kube-scheduler-minikube                      1/1     Running             0               9m48s
kube-system            storage-provisioner                          1/1     Running             1 (9m30s ago)   9m43s
kubernetes-dashboard   dashboard-metrics-scraper-7db978b848-757x5   0/1     ContainerCreating   0               5m56s
kubernetes-dashboard   kubernetes-dashboard-6f4c897964-nmpx6        0/1     ContainerCreating   0               5m56s

Describing the pod gives these events:

  Normal   Scheduled  6m42s                  default-scheduler  Successfully assigned kube-system/kindnet-wfvpq to minikube
  Warning  Failed     6m35s                  kubelet            Failed to pull image "registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:v20230511-dc714da8": rpc error: code = Unknown desc = failed to pull and unpack image "registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:v20230511-dc714da8": failed to resolve reference "registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:v20230511-dc714da8": failed to do request: Head "https://registry.cn-hangzhou.aliyuncs.com/v2/google_containers/kindnetd/manifests/v20230511-dc714da8": dial tcp: lookup registry.cn-hangzhou.aliyuncs.com on 192.168.49.1:53: read udp 192.168.49.2:42281->192.168.49.1:53: read: connection refused
  ....
  Warning  Failed     4m37s (x6 over 6m35s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    88s (x19 over 6m35s)   kubelet            Back-off pulling image "registry.cn-hangzhou.aliyuncs.com/google_containers/kindnetd:v20230511-dc714da8"

However, I can pull the image on my host. I am using Arch Linux.

$ uname -a
Linux mahmut-minukube 6.4.11-arch2-1 #1 SMP PREEMPT_DYNAMIC Sat, 19 Aug 2023 15:38:34 +0000 x86_64 GNU/Linux
$ podman version
Client:       Podman Engine
Version:      4.6.1
API Version:  4.6.1
Go Version:   go1.21.0
Git Commit:   f3069b3ff48e30373c33b3f5976f15abf8cfee20-dirty
Built:        Fri Aug 11 19:03:33 2023
OS/Arch:      linux/amd64
$ containerd --version
containerd github.com/containerd/containerd v1.7.2 0cae528dd6cb557f7201036e9f43420650207b58.m
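
A quick check (a sketch, assuming getent is available in the kicbase node image) to confirm that the failure is DNS resolution inside the node rather than registry access:

$ minikube ssh
docker@minikube:~$ cat /etc/resolv.conf                             # which resolver the node uses (192.168.49.1 here)
docker@minikube:~$ getent hosts registry.cn-hangzhou.aliyuncs.com   # fails if the 192.168.49.1:53 resolver refuses queries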

### Attach the log file

[log.txt](https://github.com/kubernetes/minikube/files/12405043/log.txt)


### Operating System

Other

### Driver

Podman
@ghost

ghost commented Aug 22, 2023

I am having the same issue, also on Arch Linux with rootless Podman.

Starting up:

❯ minikube start --container-runtime=containerd
😄  minikube v1.31.2 on Arch
    ▪ MINIKUBE_ROOTLESS=true
✨  Using the podman driver based on user configuration
📌  Using rootless Podman driver
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
E0822 08:25:24.735516   61038 cache.go:190] Error downloading kic artifacts:  not yet implemented, see issue #8426
🔥  Creating podman container (CPUs=2, Memory=7900MB) ...
❗  This container is having trouble accessing https://registry.k8s.io
💡  To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
📦  Preparing Kubernetes v1.27.4 on containerd 1.6.21 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔗  Configuring CNI (Container Networking Interface) ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

My /etc/resolv.conf in minikube is:

❯ minikube ssh
docker@minikube:~$ cat /etc/resolv.conf
search dns.podman
nameserver 192.168.49.1

Pulling an image from Docker with podman works on my host, but I get DNS issues within minikube.

@ghost

ghost commented Aug 23, 2023

The workaround for me was to add nameserver 8.8.8.8 to resolv.conf; one way to apply the edit is sketched after the file contents below.

My /etc/resolv.conf now looks like:

search dns.podman
nameserver 192.168.49.1
nameserver 8.8.8.8
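
For reference, one way to apply that edit without opening the file by hand (a minimal sketch, assuming the node's default user has passwordless sudo, as in the standard kicbase image):

❯ minikube ssh
docker@minikube:~$ echo "nameserver 8.8.8.8" | sudo tee -a /etc/resolv.conf   # append a public resolver
docker@minikube:~$ getent hosts registry.k8s.io                               # should now resolve

The edit lives inside the node container, so it has to be reapplied if the node is recreated (for example after minikube delete).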

@Mydayyy

Mydayyy commented Aug 23, 2023

While the workaround works, it needs to be done manually for each pod. I wonder what the core issue is here; I am running into the same issue on my Arch Linux VM. (A cluster-wide alternative is sketched at the end of this comment.)

Logfiles from my coredns pod:

[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
.:53
[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
CoreDNS-1.10.1
linux/amd64, go1.20, 055b2c3
[INFO] 127.0.0.1:53216 - 37880 "HINFO IN 7410154324629941460.9218326288209623127. udp 57 false 512" - - 0 6.001936314s
[ERROR] plugin/errors: 2 7410154324629941460.9218326288209623127. HINFO: read udp 10.244.0.254:44184->192.168.49.1:53: i/o timeout
[INFO] 127.0.0.1:33777 - 43111 "HINFO IN 7410154324629941460.9218326288209623127. udp 57 false 512" - - 0 6.002279128s
[ERROR] plugin/errors: 2 7410154324629941460.9218326288209623127. HINFO: read udp 10.244.0.254:52259->192.168.49.1:53: i/o timeout
[INFO] 127.0.0.1:43476 - 14365 "HINFO IN 7410154324629941460.9218326288209623127. udp 57 false 512" - - 0 4.001374028s
[ERROR] plugin/errors: 2 7410154324629941460.9218326288209623127. HINFO: read udp 10.244.0.254:58843->192.168.49.1:53: i/o timeout
[INFO] 127.0.0.1:39056 - 54588 "HINFO IN 7410154324629941460.9218326288209623127. udp 57 false 512" - - 0 2.000788218s
[ERROR] plugin/errors: 2 7410154324629941460.9218326288209623127. HINFO: read udp 10.244.0.254:47323->192.168.49.1:53: i/o timeout
[INFO] 127.0.0.1:47064 - 50650 "HINFO IN 7410154324629941460.9218326288209623127. udp 57 false 512" - - 0 2.000766372s
[ERROR] plugin/errors: 2 7410154324629941460.9218326288209623127. HINFO: read udp 10.244.0.254:37498->192.168.49.1:53: i/o timeout
[INFO] 127.0.0.1:39195 - 10301 "HINFO IN 7410154324629941460.9218326288209623127. udp 57 false 512" - - 0 2.000893297s
[ERROR] plugin/errors: 2 7410154324629941460.9218326288209623127. HINFO: read udp 10.244.0.254:52590->192.168.49.1:53: i/o timeout
[INFO] 127.0.0.1:47124 - 47563 "HINFO IN 7410154324629941460.9218326288209623127. udp 57 false 512" - - 0 2.000573703s
[ERROR] plugin/errors: 2 7410154324629941460.9218326288209623127. HINFO: read udp 10.244.0.254:47213->192.168.49.1:53: i/o timeout
[INFO] 127.0.0.1:50005 - 16131 "HINFO IN 7410154324629941460.9218326288209623127. udp 57 false 512" - - 0 2.000101276s
[ERROR] plugin/errors: 2 7410154324629941460.9218326288209623127. HINFO: read udp 10.244.0.254:33975->192.168.49.1:53: i/o timeout
[INFO] 127.0.0.1:40283 - 12111 "HINFO IN 7410154324629941460.9218326288209623127. udp 57 false 512" - - 0 2.000421104s
[ERROR] plugin/errors: 2 7410154324629941460.9218326288209623127. HINFO: read udp 10.244.0.254:51605->192.168.49.1:53: i/o timeout
[INFO] 127.0.0.1:51359 - 32503 "HINFO IN 7410154324629941460.9218326288209623127. udp 57 false 512" - - 0 2.00044546s
[ERROR] plugin/errors: 2 7410154324629941460.9218326288209623127. HINFO: read udp 10.244.0.254:50569->192.168.49.1:53: i/o timeout

Past discussions:
#8949
#1674
#10629
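
A cluster-wide alternative, not mentioned above, is to point CoreDNS at a public resolver instead of the node's /etc/resolv.conf. This is only a sketch and assumes the default Corefile still contains a forward . /etc/resolv.conf directive:

kubectl -n kube-system edit configmap coredns
# in the Corefile, change
#   forward . /etc/resolv.conf
# to, for example
#   forward . 8.8.8.8
kubectl -n kube-system rollout restart deployment coredns

This avoids per-pod edits, but it only helps in-cluster lookups; image pulls are performed by the container runtime on the node, which still uses the node's own resolver.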

@ghost

ghost commented Aug 23, 2023

I had an old project that I was working on; it had been about 3 months since I last touched it. I started messing with it again and started seeing these issues. I updated to the latest minikube/kubectl, but the issue remained.

The only other change on my end could be updating podman or Go before digging into the project again.

So I don't think this has to do with a recent release of minikube; if I had to guess, it is probably one of the pods that gets pulled to run the cluster.

@juparog

juparog commented Nov 15, 2023

The following worked for me:

  • start minikube normally: minikube start
  • enter the container where minikube is running with minikube ssh, or use docker exec -it minikube /bin/bash
  • elevate privileges with sudo su
  • then cat your /etc/resolv.conf file; you will see output like this:
    root@minikube:~$ cat /etc/resolv.conf
    nameserver 192.168.49.1
    options ndots:0
  • add nameserver 8.8.8.8 to your /etc/resolv.conf file, for example:
    echo "nameserver 8.8.8.8" >> /etc/resolv.conf
  • restart minikube, e.g. with minikube start

Note: I use Docker as the minikube driver, but the process also applies if you use Podman. A condensed, non-interactive version is sketched below.
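
A condensed version of the same steps, run from the host (a sketch, assuming minikube ssh passes a quoted command through to the node, as current releases do):

minikube ssh "echo 'nameserver 8.8.8.8' | sudo tee -a /etc/resolv.conf"
minikube start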

@uos-ljtian
Contributor

/pony
/honk

@k8s-ci-robot
Contributor

@uos-ljtian:
goose image

In response to this:

/pony
/honk

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot
Contributor

@uos-ljtian: pony image

In response to this:

/pony
/honk

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@haodeon

haodeon commented Feb 8, 2024

As mentioned in my other issue #16962, I discovered an easier fix.

Enable net.ipv4.ip_forward on the host running rootless podman.

sudo sysctl -w net.ipv4.ip_forward=1

It somehow enables routing of CoreDNS queries to 192.168.49.1; a sketch for making the setting persistent follows.
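
To make that setting survive a reboot (a sketch, assuming a host that loads /etc/sysctl.d at boot, as Arch Linux and other systemd distributions do):

echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-ip-forward.conf
sudo sysctl --system   # reload settings from all sysctl configuration files now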

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 8, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jun 7, 2024
@mahmut-Abi
Contributor Author

/close

@k8s-ci-robot
Contributor

@mahmut-Abi: Closing this issue.

In response to this:

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
