wip: docker/podman: make soft start 20 seconds faster #7435
Conversation
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: medyagh. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
Codecov Report
@@ Coverage Diff @@
## master #7435 +/- ##
=======================================
Coverage 36.69% 36.69%
=======================================
Files 146 146
Lines 8988 8988
=======================================
Hits 3298 3298
Misses 5297 5297
Partials 393 393
/ok-to-test
Error: running mkcmp: exit status 1
All Times minikube: [ 66.459805 67.008314 66.613270] Average minikube: 66.693796 Averages Time Per Log
Travis tests have failed. Hey @medyagh, TravisBuddy Request Identifier: 71c8c370-77b5-11ea-9e3c-5ffc71d44b96
All Times minikube: [ 70.190356 67.535855 66.461302] Average minikube: 68.062505 Averages Time Per Log
This PR reveals that the Soft Start is broken in KVM for containerd and crio. I created an issue for it:
/retest-this-please
All Times minikube: [ 66.663227 69.733462 65.473618] Average minikube: 67.290102 Averages Time Per Log
Times for minikube: [69.921619517 70.802998468 65.065768428] Times for Minikube (PR 7435): [64.694791715 67.896371299 65.849620819] Averages Time Per Log
Times for minikube: [66.229810192 66.352560128 68.75730623] Times for Minikube (PR 7435): [68.90433619 65.660107295 65.990217964] Averages Time Per Log
Times for minikube: [64.342460773 65.64893781500001 64.928516556] Times for Minikube (PR 7435): [66.144200548 62.949807789 64.973940156] Averages Time Per Log
Times for minikube: [70.462831163 66.38545171299998 66.378763811] Times for Minikube (PR 7435): [66.298196277 65.30612192 67.18407289299999] Averages Time Per Log
Times for minikube: [68.79002064399998 64.40251401200001 67.631535483] Times for Minikube (PR 7435): [66.23886524799998 65.86580966400001 65.053932536] Averages Time Per Log
Times for minikube: [65.3999139 67.955303371 68.521027936] Times for Minikube (PR 7435): [67.052642825 67.34167194200002 69.70474562] Averages Time Per Log
Times for minikube: [66.007845527 65.217532629 65.39691944] Times for Minikube (PR 7435): [69.06536100699999 66.08673480300001 69.36333913099999] Averages Time Per Log
Times for minikube: [64.800147229 65.303331192 65.617344792] Times for Minikube (PR 7435): [69.826131578 67.280785475 71.43001914] Averages Time Per Log
@@ -443,15 +443,15 @@ func (k *Bootstrapper) needsReset(conf string, hostname string, port int, client
 	}
 
 	if err := kverify.ExpectAppsRunning(client, kverify.AppsRunningList); err != nil {
-		glog.Infof("needs reset: %v", err)
+		glog.Infof("needs reset because expected components to be running : %v", err)
This seems too difficult to read. Is the error returned by ExpectAppsRunning not meaningful already?
If not, just use: needs reset: expected apps: %v
 		return true
 	}
 
 	if err := kverify.APIServerVersionMatch(client, version); err != nil {
-		glog.Infof("needs reset: %v", err)
+		glog.Infof("needs reset because API server version match : %v", err)
I think this can be reverted. The error made sense already. If not:
needs reset: api version: %v
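For reference, a hedged sketch of how both checks would read with the shorter wording suggested in these comments, assuming the surrounding needsReset code shown in the diff above:

if err := kverify.ExpectAppsRunning(client, kverify.AppsRunningList); err != nil {
	// shorter wording suggested in review
	glog.Infof("needs reset: expected apps: %v", err)
	return true
}

if err := kverify.APIServerVersionMatch(client, version); err != nil {
	// shorter wording suggested in review
	glog.Infof("needs reset: api version: %v", err)
	return true
}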
@@ -213,6 +213,10 @@ func configureRuntimes(runner cruntime.CommandRunner, cc config.ClusterConfig, k
 	if driver.BareMetal(cc.Driver) {
 		disableOthers = false
 	}
+	if driver.IsKIC(cc.Driver) && cc.KubernetesConfig.ContainerRuntime == "docker" {
+		// in kic driver, the docker service binds to containerd service https://github.com/kubernetes/minikube/issues/7433
+		disableOthers = false
but what about crio?
I think this fix should be done in cruntime.
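One hedged reading of the suggestion to move this into cruntime: a small helper there could own the decision instead of configureRuntimes. This is only a sketch; the ShouldDisableOthers name and signature are assumptions, not existing minikube API:

package cruntime

// ShouldDisableOthers is a hypothetical sketch: for KIC-based drivers
// running the docker runtime, other runtimes must stay enabled because
// the docker service binds to the containerd service (issue #7433).
func ShouldDisableOthers(isKIC bool, containerRuntime string) bool {
	if isKIC && containerRuntime == "docker" {
		return false
	}
	return true
}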
 	if !strings.Contains(tc.name, "cni") {
 		testPodScheduling(ctx, t, profile)
 	}
 
 	rr, err = Run(t, exec.CommandContext(ctx, Target(), "stop", "-p", profile, "--alsologtostderr", "-v=3"))
 	if err != nil {
-		t.Errorf("failed stopping minikube - first stop-. args %q : %v", rr.Command(), err)
+		t.Errorf("failed stopping minikube - post soft start -. args %q : %v", rr.Command(), err)
This text does not make sense to me. How about:
failed to stop minikube after initial soft start
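A minimal sketch of that error line with the suggested wording, reusing the rr and err variables from the test diff above:

rr, err = Run(t, exec.CommandContext(ctx, Target(), "stop", "-p", profile, "--alsologtostderr", "-v=3"))
if err != nil {
	// wording suggested in review
	t.Errorf("failed to stop minikube after initial soft start. args %q: %v", rr.Command(), err)
}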
Times for minikube: [66.98513241500001 69.52975477399998 67.681193387] Times for Minikube (PR 7435): [66.446504379 65.657865541 63.84105628799999] Averages Time Per Log
Times for minikube: [72.16700866100001 68.992651448 69.330313644] Times for Minikube (PR 7435): [68.951413713 70.700527562 69.96889310899999] Averages Time Per Log
Times for minikube: [67.52800797800002 67.78468253 64.83584458500002] Times for Minikube (PR 7435): [69.51682120699999 65.96690784100001 69.657864764] Averages Time Per Log
Times for minikube: [64.12432385 66.790222747 67.16601872] Times for Minikube (PR 7435): [65.39803821900001 67.23607107 69.88821709300001] Averages Time Per Log
Before this PR:
Soft Start:
After this PR:
Soft Start:
This PR: because on Ubuntu the docker service needs the containerd service; the containerd service is already running on the docker driver at docker run time #7433 (comment)
(unlike our VM installation that uses runc)
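For background, a hedged sketch (not part of this PR) of how the docker service's dependency on containerd could be inspected from Go on the Ubuntu-based KIC image; the program below is hypothetical, only the systemctl invocation is standard:

package main

import (
	"fmt"
	"os/exec"
)

// main prints the systemd units that docker.service requires; on the
// Ubuntu-based KIC image this list is expected to include
// containerd.service, which is why disableOthers is set to false there.
func main() {
	out, err := exec.Command("systemctl", "show", "docker", "--property=Requires").Output()
	if err != nil {
		fmt.Println("systemctl failed:", err)
		return
	}
	fmt.Print(string(out)) // e.g. Requires=containerd.service ...
}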
closes #7432