docker inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1 #8163
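For context, the command in the title appears to be what minikube's Docker driver uses to look up the host port forwarded to the container's SSH port (22/tcp); the exit status 1 suggests the container or its port mapping is missing. A minimal sketch of checking this by hand, assuming the default profile/container name "minikube":

```bash
# List the port mappings of the minikube container, if it exists
docker ps -a --filter name=minikube

# The same inspect call from the error, run manually; on a healthy container
# this prints the host port that is forwarded to 22/tcp inside the container
docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' minikube
```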
Comments
Thank you for sharing your experience! If you don't mind, could you please provide:
This will help us isolate the problem further. Thank you!
I too am getting the same issue.
If someone runs into this, can they please share the output of:
Thank you.
However, the debian logs for the <exited container> show that:
So, it works now! What did I do? The only thing I remember (after several docker rm attempts and retrying minikube start) is running minikube delete and then minikube start again!
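For reference, a minimal sketch of that recovery cycle, assuming the docker driver and the default profile name:

```bash
# Tear down the existing profile, including any stuck "minikube" container and its saved state
minikube delete

# Then re-create the cluster from scratch
minikube start
```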
I was facing the same issue after a fresh minikube install. Only a
I performed a cleanup using "minikube delete" and then started using
If anyone runs into this, could they please provide the output of:
Still trying to reproduce this issue. I tried to see whether #8203 was related by forcing docker to start in systemd without waiting for containerd, but I still wasn't able to reproduce the error.
I have a feeling this might be related to #8179, since the logs from the failed container provided in this comment (#8163 (comment)) are the same.
This is an issue that the Cloud Code team has faced too; we should provide a better message before exiting, or provide better logs.
I believe this happens when minikube tries to create a container but Docker fails; then, on a second start, there is a stuck container that minikube cannot create on top of. Currently, if users specify "--delete-on-failure" as in PR #8628, that will fix the problem. However, we could detect that this is not a recoverable state and just delete it for them, even if they don't specify this flag. The current workaround: restart Docker and ensure it is running.
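A sketch of that workaround (and of the flag mentioned above), assuming Docker is managed by systemd:

```bash
# Restart the Docker daemon and verify it is actually running
sudo systemctl restart docker
sudo systemctl is-active docker

# --delete-on-failure tells minikube to delete and recreate the cluster if the start fails
minikube start --delete-on-failure
```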
This was the solution for me!! Thanks a lot!
Hey @dadav, glad it's working for you now -- could you please provide the output of
dupe of var race condition
Getting error stderr: ✋ Stopping node "minikube" ...
/reopen I'm hitting this error whenever I restart my computer with Minikube running. Some additional context:
@jverce: You can't reopen an issue/PR unless you authored it or you are a collaborator. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
I was able to start the cluster without tearing it down. These are the steps that I followed:
@medyagh this issue is not causing
In my case, removing the minikube container
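A minimal sketch of that fix, removing the stale container by hand (assumes the default container name "minikube"):

```bash
# Find any leftover container from a previous failed start
docker ps -a --filter name=minikube

# Force-remove it, then let minikube create a fresh one
docker rm -f minikube
minikube start
```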
I have been able to reproduce this consistently in the following scenario:
The workaround mentioned in this comment works for us: |
Steps to reproduce the issue:
Full output of failed command:
Full output of minikube start command used, if not already included:
Optional: Full output of minikube logs command: