Add annotation sidecar.istio.io/disableIPEarlyDemux #2895
base: master
Conversation
😊 Welcome @luksa! This is either your first contribution to the Istio api repo, or it's been a while since your last contribution. You can learn more about the Istio working groups, code of conduct, and contributing guidelines in the contributing documentation. Thanks for contributing! Courtesy of your friendly welcome wagon.
Hi @luksa. Thanks for your PR. I'm waiting for an istio member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
What is the user experience we want here? A per-pod annotation doesn't seem like the right approach. When should a user set it or not set it?
It seems like either:
- There are pros and cons to setting it => we need to understand what those are to decide where this should live, and explain it in the user documentation
- It's universally better => we don't need controls; we can add a (temporary?) env var opt-out in case we hit unknown unknowns and a user needs to opt out
The reason I took the per-pod approach is that I'm not sure how the change affects performance. The early demux feature was supposed to be an optimization, but in some cases it leads to a reduction in throughput (see https://patchwork.ozlabs.org/project/netdev/patch/20120621235011.29846.29715.stgit@gitlad.jf.intel.com/), which is why the sysctl option was introduced in the first place. I expect users will want to set the disableIPEarlyDemux annotation only on pods that expose more than one port. I can't get a definitive answer on whether making this change globally would be okay, hence the per-pod option, just to be safe. IMHO, this option should only be a (temporary) workaround, as I still think this is a kernel bug that should be fixed (optimizations shouldn't break things).
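For illustration, opting a multi-port pod in with the proposed annotation would look roughly like the sketch below. The `"true"` value format and the pod details are assumptions for the sketch, not taken from this PR:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: multi-port-app            # hypothetical example pod
  annotations:
    # Proposed annotation from this PR; the "true" value format is an assumption.
    sidecar.istio.io/disableIPEarlyDemux: "true"
spec:
  containers:
    - name: app
      image: example.com/multi-port-app:latest   # placeholder image
      ports:
        - containerPort: 8080
        - containerPort: 9090
```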
Reading the linked issues - yeah - mostly end up echoing @howardjohn on this:
- How do we plan to surface this setting to people and advise them to use it?
- Given that TCP behavior is effectively broken if we don't disable it - should we just flat-out disable it? I think we should.
I don't think this makes much sense as a pod annotation; I think this should be a global config for the CNI - effectively, just take istio/istio#46457 and not make a pod-level API surface change. Or at least we shouldn't offer pod granularity yet, when there's no indication people need it at that level, and describing to people how they should configure it at that level seems messy. It can/should just be a simple toggle for the CNI daemonset itself.
We should do a perf test to find out whether the impact on performance is negligible, and if it is, just turn it on so we get consistent behavior by default.
Even if the perf test says the difference is not negligible, I think it should probably still be a CNI daemonset-level flag.
(maybe default-off for now, until we have a better idea of how it ends up affecting perf for real traffic with users who know what this setting is for and when they need it)
One other question I had - there are also tcp_ and udp_ variants. Does setting only the TCP one work for us? FWIW I sent a message to the Google kernel networking team to get advice, but haven't heard back yet.
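For anyone who wants to experiment with these before anything lands: a rough sketch of exercising all three early-demux sysctls per pod via the standard `securityContext.sysctls` field. This assumes the kubelet has been started with these names under `--allowed-unsafe-sysctls` (they are not on the Kubernetes safe list) and that the kernel in use namespaces them; it's just a test-pod sketch, not what the CNI change itself does:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: early-demux-test          # hypothetical test pod
spec:
  securityContext:
    sysctls:
      # Netns-scoped on recent kernels, but not on the Kubernetes "safe" list,
      # so the kubelet must permit them via --allowed-unsafe-sysctls.
      - name: net.ipv4.ip_early_demux
        value: "0"
      - name: net.ipv4.tcp_early_demux
        value: "0"
      - name: net.ipv4.udp_early_demux
        value: "0"
  containers:
    - name: app
      image: example.com/test-app:latest        # placeholder image
```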
Any movement on this? It does sound like per-pod is reasonable given the reasoning @luksa outlined - this is only needed for pods that expose more than one port.
description: Specifies whether the Kernel option net.ipv4.ip_early_demux
  should be set to 0 in order to prevent TCP connection issues when a pod
  exposes multiple ports and receives multiple concurrent connections from
  the same client IP and port.
would be good to explain when this should be considered?
/ok-to-test
@luksa: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
PR needs rebase. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Ref: istio/istio#38982