
start failure in qemu, need k8s.gcr.io/pause:3.6 not 3.7 #14641

Closed
whg517 opened this issue Jul 26, 2022 · 6 comments · Fixed by #14703
Labels:
  co/runtime/docker: Issues specific to a docker runtime
  kind/bug: Categorizes issue or PR as related to a bug.
  lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.
  priority/backlog: Higher priority than priority/awaiting-more-evidence.

Comments


whg517 commented Jul 26, 2022

What Happened?

The full start log is too long to paste here.

Log file:

minikube-start.log

Attach the log file

Unfortunately, I could not collect the detailed logs:

kevin@mac  /tmp  minikube logs --file=log.txt
E0726 23:10:09.004017   77057 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:

stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"

❗  unable to fetch logs for: describe node

So I exported the kubelet service logs with journalctl -u kubelet > logs.txt; the file is here:
log.txt

The key part of the problem is:

"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed pulling image \"k8s.gcr.io/pause:3.6\": Error response from daemon: Get \"https://k8s.gcr.io/v2/\": dial tcp: lookup k8s.gcr.io on 10.0.2.3:53: read udp 10.0.2.15:47398->10.0.2.3:53: i/o timeout"

Look at the images in Docker:

$ docker images
REPOSITORY                                TAG       IMAGE ID       CREATED         SIZE
k8s.gcr.io/kube-apiserver                 v1.24.1   7c5896a75862   2 months ago    126MB
k8s.gcr.io/kube-proxy                     v1.24.1   fcbd620bbac0   2 months ago    106MB
k8s.gcr.io/kube-controller-manager        v1.24.1   f61bbe9259d7   2 months ago    116MB
k8s.gcr.io/kube-scheduler                 v1.24.1   000c19baf6bb   2 months ago    50MB
k8s.gcr.io/etcd                           3.5.3-0   a9a710bb96df   3 months ago    178MB
k8s.gcr.io/pause                          3.7       e5a475a03805   4 months ago    514kB
k8s.gcr.io/coredns/coredns                v1.8.6    edaa71f2aee8   9 months ago    46.8MB
gcr.io/k8s-minikube/storage-provisioner   v5        ba04bb24b957   16 months ago   29MB

k8s.gcr.io/pause:3.7 is preloaded in the VM, but 3.6 is what is needed when minikube starts.
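
As a workaround sketch (not the real fix, and assuming the Docker runtime), the 3.6 tag can be made available inside the VM either by pulling it (if registry access works) or by retagging the preloaded 3.7 image:

# Option A: pull the missing tag inside the VM (needs working access to k8s.gcr.io)
$ minikube ssh -- docker pull k8s.gcr.io/pause:3.6
# Option B: reuse the preloaded 3.7 image under the 3.6 tag
# (assumes the sandbox image only needs to resolve by tag locally)
$ minikube ssh -- docker tag k8s.gcr.io/pause:3.7 k8s.gcr.io/pause:3.6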

=============

Let's pass the proxy environment variables through to the Docker environment, so that the image we need can be pulled through the proxy.
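
For reference, a minimal sketch of how the proxy is passed (the proxy address is the one from my environment; minikube picks these variables up from the shell and forwards them to the Docker engine in the VM):

$ export HTTP_PROXY=socks5://192.168.31.246:1080
$ export HTTPS_PROXY=socks5://192.168.31.246:1080
$ export NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.59.0/24,192.168.49.0/24,192.168.39.0/24
$ minikube delete && minikube start   # optional: delete first to start from a clean cluster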

 kevin@mac  ~  minikube start                  
😄  minikube v1.26.0 on Darwin 12.4 (arm64)
✨  Using the qemu2 (experimental) driver based on user configuration
👍  Starting control plane node minikube in cluster minikube
🔥  Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
🌐  Found network options:
    ▪ HTTP_PROXY=socks5://192.168.31.246:1080
    ▪ HTTPS_PROXY=socks5://192.168.31.246:1080
    ▪ NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.59.0/24,192.168.49.0/24,192.168.39.0/24
❗  This VM is having trouble accessing https://k8s.gcr.io
💡  To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
🐳  Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
    ▪ env HTTP_PROXY=socks5://192.168.31.246:1080
    ▪ env HTTPS_PROXY=socks5://192.168.31.246:1080
    ▪ env NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.59.0/24,192.168.49.0/24,192.168.39.0/24
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: default-storageclass, storage-provisioner
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

Nice! minikube started.
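
As an optional sanity check, the kube-system pods can also be listed from the host:

$ kubectl get pods -n kube-system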

Look at the images in Docker:

 ✘ kevin@mac  ~  minikube ssh
                         _             _            
            _         _ ( )           ( )           
  ___ ___  (_)  ___  (_)| |/')  _   _ | |_      __  
/' _ ` _ `\| |/' _ `\| || , <  ( ) ( )| '_`\  /'__`\
| ( ) ( ) || || ( ) || || |\`\ | (_) || |_) )(  ___/
(_) (_) (_)(_)(_) (_)(_)(_) (_)`\___/'(_,__/'`\____)

$ docker images
REPOSITORY                                TAG       IMAGE ID       CREATED         SIZE
k8s.gcr.io/kube-apiserver                 v1.24.1   7c5896a75862   2 months ago    126MB
k8s.gcr.io/kube-proxy                     v1.24.1   fcbd620bbac0   2 months ago    106MB
k8s.gcr.io/kube-controller-manager        v1.24.1   f61bbe9259d7   2 months ago    116MB
k8s.gcr.io/kube-scheduler                 v1.24.1   000c19baf6bb   2 months ago    50MB
k8s.gcr.io/etcd                           3.5.3-0   a9a710bb96df   3 months ago    178MB
k8s.gcr.io/pause                          3.7       e5a475a03805   4 months ago    514kB
k8s.gcr.io/coredns/coredns                v1.8.6    edaa71f2aee8   9 months ago    46.8MB
k8s.gcr.io/pause                          3.6       7d46a07936af   11 months ago   484kB
gcr.io/k8s-minikube/storage-provisioner   v5        ba04bb24b957   16 months ago   29MB

k8s.gcr.io/pause:3.6 is in the VM now, and 3.7 is still there as well.

$ docker ps -a
CONTAINER ID   IMAGE                  COMMAND                  CREATED         STATUS                       PORTS     NAMES
64fd64e3438c   ba04bb24b957           "/storage-provisioner"   5 minutes ago   Up 5 minutes                           k8s_storage-provisioner_storage-provisioner_kube-system_7afafa09-dc49-4a5a-95df-461073334a68_0
409d9df82285   k8s.gcr.io/pause:3.6   "/pause"                 5 minutes ago   Up 5 minutes                           k8s_POD_storage-provisioner_kube-system_7afafa09-dc49-4a5a-95df-461073334a68_0
1b6652627027   fcbd620bbac0           "/usr/local/bin/kube…"   5 minutes ago   Up 5 minutes                           k8s_kube-proxy_kube-proxy-llhhb_kube-system_b6750bcc-e29d-46f8-b6a1-9a3359f8c118_0
a12c2cd246b4   k8s.gcr.io/pause:3.6   "/pause"                 5 minutes ago   Up 5 minutes                           k8s_POD_kube-proxy-llhhb_kube-system_b6750bcc-e29d-46f8-b6a1-9a3359f8c118_0
f27fbf0b5886   edaa71f2aee8           "/coredns -conf /etc…"   5 minutes ago   Up 5 minutes                           k8s_coredns_coredns-6d4b75cb6d-ttj8v_kube-system_cea7b0fd-4aa0-4612-bd25-bce20bc454a7_0
827a439993a1   k8s.gcr.io/pause:3.6   "/pause"                 5 minutes ago   Up 5 minutes                           k8s_POD_coredns-6d4b75cb6d-ttj8v_kube-system_cea7b0fd-4aa0-4612-bd25-bce20bc454a7_0
7968387da659   f61bbe9259d7           "kube-controller-man…"   6 minutes ago   Up 6 minutes                           k8s_kube-controller-manager_kube-controller-manager-minikube_kube-system_852f03e6fe9ac86ddd174fb038c47d74_3
1220daed7fd0   7c5896a75862           "kube-apiserver --ad…"   6 minutes ago   Up 6 minutes                           k8s_kube-apiserver_kube-apiserver-minikube_kube-system_f07632e20f7056814ae6fbe2c8d57582_3
0507dff6794c   f61bbe9259d7           "kube-controller-man…"   6 minutes ago   Exited (255) 6 minutes ago             k8s_kube-controller-manager_kube-controller-manager-minikube_kube-system_852f03e6fe9ac86ddd174fb038c47d74_2
822d2742ab28   a9a710bb96df           "etcd --advertise-cl…"   6 minutes ago   Up 6 minutes                           k8s_etcd_etcd-minikube_kube-system_364a89aff196c53d1965e59d404fa706_0
3e3974c0c7a6   k8s.gcr.io/pause:3.6   "/pause"                 6 minutes ago   Up 6 minutes                           k8s_POD_etcd-minikube_kube-system_364a89aff196c53d1965e59d404fa706_0
2068498e6e53   000c19baf6bb           "kube-scheduler --au…"   7 minutes ago   Up 7 minutes                           k8s_kube-scheduler_kube-scheduler-minikube_kube-system_bab0508344d11c6fdb45b1f91c440ff5_0
8f196057feed   k8s.gcr.io/pause:3.6   "/pause"                 7 minutes ago   Up 7 minutes                           k8s_POD_kube-scheduler-minikube_kube-system_bab0508344d11c6fdb45b1f91c440ff5_0
acbed9cbd782   7c5896a75862           "kube-apiserver --ad…"   7 minutes ago   Exited (1) 6 minutes ago               k8s_kube-apiserver_kube-apiserver-minikube_kube-system_f07632e20f7056814ae6fbe2c8d57582_2
8b76859cc6dc   k8s.gcr.io/pause:3.6   "/pause"                 7 minutes ago   Up 7 minutes                           k8s_POD_kube-controller-manager-minikube_kube-system_852f03e6fe9ac86ddd174fb038c47d74_0
46371732561c   k8s.gcr.io/pause:3.6   "/pause"                 8 minutes ago   Up 8 minutes                           k8s_POD_kube-apiserver-minikube_kube-system_f07632e20f7056814ae6fbe2c8d57582_0

And k8s.gcr.io/pause:3.6 is the image actually used for the pod sandboxes.

So, can we fix this?

Operating System

macOS (M1)

Driver

qemu

afbjorklund (Collaborator) commented Jul 26, 2022

Currently there is no way to specify the sandbox image in cri-dockerd, so it will need to have both.

Before 1.24, it was possible to select which version to use with --pod-infra-container-image:

Now it is decoupled (from the kubelet), so it needs to be updated manually in the CRI configuration.

      --pod-infra-container-image string        The image whose network/ipc namespaces containers in each pod will use (default "k8s.gcr.io/pause:3.6")
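For example (a sketch, assuming a pre-1.24 Kubernetes version where the kubelet still accepts this flag), minikube could pass it along via --extra-config:

      minikube start --kubernetes-version=v1.23.8 --extra-config=kubelet.pod-infra-container-image=k8s.gcr.io/pause:3.6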

The same setting also needs to be reflected in the containerd and cri-o configurations:

    sandbox_image = "k8s.gcr.io/pause:3.6"
# pause_image = "k8s.gcr.io/pause:3.6"
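
For reference, a sketch of where these settings typically live inside the VM (assuming the default containerd and CRI-O config locations; the exact files may differ):

$ minikube ssh
$ sudo grep sandbox_image /etc/containerd/config.toml
$ sudo grep pause_image /etc/crio/crio.conf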

Missing from https://github.com/kubernetes/minikube/blob/master/pkg/minikube/cruntime/docker.go

afbjorklund added the kind/bug, priority/backlog, and co/runtime/docker labels Jul 26, 2022
afbjorklund linked a pull request Aug 2, 2022 that will close this issue
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-ci-robot added the lifecycle/stale label Oct 31, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Nov 30, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot (Contributor)

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to the triage robot's /close not-planned comment above.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot closed this as not planned (won't fix, can't repro, duplicate, stale) Dec 30, 2022