none: Fix 'minikube delete' issues when the apiserver is down #8664
Conversation
/ok-to-test

[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: tstromberg. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Travis tests have failed. Hey @tstromberg, 1st Build: make test

TravisBuddy Request Identifier: 01290fb0-c0a0-11ea-8f0c-c508e01afe59
/ok-to-test

/retest-this-please
kvm2 Driver Times for Minikube (PR 8664): 58.06, 61.81, 60.69 (average: 60.19)

docker Driver Times for Minikube (PR 8664): 24.76, 25.61, 39.30 (average: 29.89)
While testing the `none` driver, I noticed that APIServerStatus could get into an infinite retry loop, which prevented `minikube delete` from working properly.

This PR fixes 3 issues which prevented `minikube delete` from working properly with the `none` driver:

1. `retry.Local` did not respect `maxTime`!
2. `apiServerHealthz` propagated the error upward for non-running clusters.
3. `machine/delete.go` interpreted this error as an inability to get state, which allowed it to assume that Kubernetes had already been deleted.

Here is an example of the 3 changes working together to allow `minikube delete` to work well for the `none` driver when the apiserver isn't available: