
Fix timeout flooding issue after containerd restart #2749

Merged
merged 1 commit on Dec 9, 2020

Conversation

hanlins
Contributor

@hanlins hanlins commented Dec 3, 2020

Originally the gRPC backoff multiplier was not set, so it defaulted to zero. Once a connection timeout happens, gRPC exponential backoff is triggered, which sets the backoff duration to zero and causes a flood of retries. Setting the gRPC multiplier to 1.6, the default value from the gRPC package, solves the issue.

Closes #2748
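
For illustration, a minimal sketch of the kind of dial options involved, assuming the google.golang.org/grpc and google.golang.org/grpc/backoff packages; the values other than the 1.6 multiplier are illustrative defaults, not the exact cAdvisor settings. Leaving backoff.Config zero-valued gives a Multiplier of 0, so every computed retry delay collapses to zero and the client retries in a tight loop.

```go
package client

import (
	"context"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/backoff"
)

// dialContainerd dials the containerd socket with an explicit backoff
// configuration. The key point is Multiplier: 1.6 (gRPC's own default);
// a zero multiplier zeroes out every retry delay after the first attempt.
func dialContainerd(ctx context.Context, address string) (*grpc.ClientConn, error) {
	connParams := grpc.ConnectParams{
		Backoff: backoff.Config{
			BaseDelay:  time.Second,
			Multiplier: 1.6, // gRPC default; 0 collapses the backoff to zero
			Jitter:     0.2,
			MaxDelay:   120 * time.Second,
		},
	}
	return grpc.DialContext(ctx, address,
		grpc.WithInsecure(),
		grpc.WithBlock(),
		grpc.WithConnectParams(connParams),
	)
}
```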

@google-cla google-cla bot added the cla: yes label Dec 3, 2020
@k8s-ci-robot
Collaborator

Hi @hanlins. Thanks for your PR.

I'm waiting for a google member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@hanlins
Contributor Author

hanlins commented Dec 3, 2020

The fix has been manually verified. You can use this branch to build a binary for verification.

@dims
Collaborator

dims commented Dec 3, 2020

Cc @bobbypage

@iwankgb
Collaborator

iwankgb commented Dec 3, 2020

/ok-to-test

@bobbypage
Collaborator

bobbypage commented Dec 4, 2020

Thanks for the fix and the investigation here! I'm curious; it looks like this dialer has been used in cAdvisor for a long time now.

The last change was 4cf2f01#diff-ad4070b5c5ba5108a8790984457dd485f6363be34312edf452967359bb1a195dR75, which switched from using dialer.Dialer to dialer.ContextDialer.

Looking at the containerd dialer (https://github.com/containerd/containerd/blob/master/pkg/dialer/dialer.go), dialer.ContextDialer looks to be a wrapper around dialer.Dialer, so I think those should behave the same?

Also interesting -- the official containerd client uses dialer.ContextDialer: https://github.com/containerd/containerd/blob/c8b14ae4c01e620dc84704dd4b6a080eed0dc62e/client.go#L123

Just trying to understand the issue: is #2748 tied to a specific cAdvisor / k8s release? I'm curious whether you think this might be a problem with containerd's dialer.ContextDialer being too aggressive after restarts, and whether that perhaps needs to be fixed in containerd?
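
A rough sketch of the wrapper relationship described in this comment, paraphrased for illustration only (not the actual containerd source; see the linked pkg/dialer/dialer.go for the real implementation): the context-based variant derives a timeout from ctx.Deadline() and then defers to the same timeout-based dial path as the older Dialer.

```go
package dialer

import (
	"context"
	"net"
	"time"
)

// Dialer dials the address with an explicit timeout (the older entry point).
func Dialer(address string, timeout time.Duration) (net.Conn, error) {
	return net.DialTimeout("unix", address, timeout)
}

// ContextDialer converts the context's deadline into a timeout and then
// reuses the timeout-based path, which is why the two look equivalent.
func ContextDialer(ctx context.Context, address string) (net.Conn, error) {
	if deadline, ok := ctx.Deadline(); ok {
		return Dialer(address, time.Until(deadline))
	}
	return Dialer(address, 0)
}
```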

@hanlins
Contributor Author

hanlins commented Dec 4, 2020

Hi @bobbypage, I think the issue becomes obvious when k8s talks to containerd directly instead of through the docker shim (I tried just restarting docker and there's no such issue; here are my findings). Also, there's a recent CVE for containerd, so people have had to restart containerd, which is why more people are hitting the issue just recently. The issue was reported for v1.19.3, but I can reproduce it on the latest master.

Just trying to understand the issue, is #2748 tied to a specific cAdvisor / k8s release?

No, it seems I can reproduce it on multiple k8s versions.

I'm curious if you think this might be a problem with containerd's dialer.ContextDailer which is too aggressive with restarts and perhaps that needs to be fixed in containerd?

I'm not confident about that. I lean toward there being a context misuse in containerd's ContextDialer, in which the deadline is calculated only once, so the retry loop just floods timeouts (because the deadline has already passed and is never updated). Also, the containerd client is a singleton in the cAdvisor repo, and this combination caused a goroutine leak that floods errors into that singleton's connection channel.

I would say it's not realistic to change the containerd client from a singleton to a per-connection client, and there may not be a convincing reason to make the change in the containerd repo, since they're fine with the deadline assumptions made there. Considering the necessity and the release cycle, I think it's better to just make the change in cAdvisor without touching containerd. What do you think? @bobbypage
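
A combined, hypothetical sketch of the failure mode described above (illustrative names and code, not the actual cAdvisor or containerd source): the deadline is read from the context once, so after it passes every attempt dials with an already-expired timeout and fails immediately, and with a zero backoff multiplier there is no pause between attempts, so the loop spins and floods errors into the shared client's connection channel.

```go
package client

import (
	"context"
	"net"
	"time"
)

// dialWithStaleDeadline illustrates the two ingredients of the flood:
// a deadline computed once and never refreshed, and a backoff delay
// that collapses to zero because the multiplier is 0.
func dialWithStaleDeadline(ctx context.Context, address string) (net.Conn, error) {
	deadline, _ := ctx.Deadline() // computed once, never refreshed
	backoffDelay := time.Second   // BaseDelay
	const multiplier = 0          // the unset multiplier; gRPC's default is 1.6
	for {
		// once deadline has passed, time.Until is negative and every
		// dial attempt fails immediately
		conn, err := net.DialTimeout("unix", address, time.Until(deadline))
		if err == nil {
			return conn, nil
		}
		if ctx.Err() != nil {
			return nil, ctx.Err()
		}
		time.Sleep(backoffDelay)
		// with multiplier == 0, every subsequent delay is 0, so the
		// loop never pauses again and saturates a CPU core
		backoffDelay = time.Duration(float64(backoffDelay) * multiplier)
	}
}
```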

@bobbypage
Collaborator

Thanks @hanlins for all your hard work on this!

For context, we chatted offline on slack to debug this further. Definitely a fun and interesting bug, requiring some deep diving into gRPC backoff logic :)

@hanlins can you please update the PR description with the new findings we discussed offline and the updated fix? If you can squash the outdated commits, it would be great as well. Thanks again for all your efforts on hunting down this issue!

Originally the multiplier defaulted to 0, which causes flooding if
backoff happens. Now setting the multiplier to 1.6, which is the default
multiplier value in grpc.

Signed-off-by: Hanlin Shi <shihanlin9@gmail.com>
@hanlins
Contributor Author

hanlins commented Dec 9, 2020

@bobbypage thanks! It's great that we root-caused this "gotcha"; it's really worth sharing with other folks. Thanks for your help!

@bobbypage
Collaborator

bobbypage commented Dec 9, 2020

I've cut new cAdvisor releases (v0.37.3 and v0.38.6) with this fix cherry-picked.

We should get this fix back into kubernetes. We'll need 3 PRs to k/k with the following updates:

  1. k/k master to update cAdvisor to v0.38.6
  2. k/k release-1.20 to update cAdvisor to v0.38.6
  3. k/k release-1.19 to update cAdvisor to v0.37.3

@bobbypage
Collaborator

/cc @SergeyKanzhelev @qiutongs

Successfully merging this pull request may close these issues.

cpu saturated after containerd.service restart