Proxy protocol transport socket intermittently injects wrong addresses #13659
Comments
mattklein123 added the area/connection and investigate (Potential bug that needs verification) labels and removed the bug and triage (Issue requires triage) labels on Oct 24, 2020
This seems bad. cc @alyssawilk
It seems that since TCP connection pooling was introduced, there have been a number of workarounds for use cases that need 1:1 semantics, so one idea that seems attractive would be to make non-pooled TCP proxy functionality available as an option.
mattklein123 added the help wanted (Needs help!) label and removed the investigate (Potential bug that needs verification) label on Oct 27, 2020
lizan pushed a commit that referenced this issue on Nov 4, 2020:
Fix a bug where the transport socket options for the first downstream got reused for subsequent upstream connections.
Risk Level: low
Testing: new integration test
Docs Changes: n/a
Release Notes:
Platform Specific Features:
Fixes #13659
Signed-off-by: Alyssa Wilk <alyssar@chromium.org>
cpakulski pushed a commit to cpakulski/envoy that referenced this issue on Nov 10, 2020:
Fix a bug where the transport socket options for the first downstream got reused for subsequent upstream connections.
Risk Level: low
Testing: new integration test
Docs Changes: n/a
Release Notes:
Platform Specific Features:
Fixes envoyproxy#13659
Signed-off-by: Alyssa Wilk <alyssar@chromium.org>
Signed-off-by: Christoph Pakulski <christoph@tetrate.io>
lizan pushed a commit that referenced this issue on Nov 11, 2020:
Fix a bug where the transport socket options for the first downstream got reused for subsequent upstream connections.
Risk Level: low
Testing: new integration test
Docs Changes: n/a
Release Notes:
Platform Specific Features:
Fixes #13659
Signed-off-by: Alyssa Wilk <alyssar@chromium.org>
Signed-off-by: Christoph Pakulski <christoph@tetrate.io>
guyco-redis added a commit to RedisLabs/envoy that referenced this issue on Nov 11, 2020:
* backport: Prevent SEGFAULT when disabling listener (envoyproxy#13515) (envoyproxy#13882)
  This prevents the stop_listening overload action from causing segmentation faults that can occur if the action is enabled after the listener has already shut down.
  Signed-off-by: Alex Konradi <akonradi@google.com>
  Signed-off-by: Christoph Pakulski <christoph@tetrate.io>
* backport to rel-1.16: proxy_proto - fixing hashing bug envoyproxy#13768 (envoyproxy#13966)
  Fix a bug where the transport socket options for the first downstream got reused for subsequent upstream connections.
  Risk Level: low
  Testing: new integration test
  Docs Changes: n/a
  Release Notes:
  Platform Specific Features:
  Fixes envoyproxy#13659
  Signed-off-by: Alyssa Wilk <alyssar@chromium.org>
  Signed-off-by: Christoph Pakulski <christoph@tetrate.io>
Co-authored-by: Christoph Pakulski <christoph@tetrate.io>
Title: Proxy protocol header intermittently uses wrong addresses
Description:
Proxy protocol injection (envoy.transport_sockets.upstream_proxy_protocol) works as expected to start out, but once you start connecting from different IPs, the injected proxy protocol header ends up randomly having addresses from previous connections.
The log shows the right IPs, but the proxy protocol header intermittently sends the IPs not from the current connection but from one of the previous ones.
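For reference, the header the upstream_proxy_protocol transport socket injects (in its v1 form) is a single human-readable line. A minimal sketch of constructing one, following the PROXY protocol v1 format rather than Envoy's actual C++ implementation (function name and addresses are illustrative):

```python
def build_proxy_v1_header(src_ip: str, dst_ip: str, src_port: int, dst_port: int) -> bytes:
    """Build a PROXY protocol v1 header line for TCP over IPv4.

    The bug described above means the src_ip/src_port that get injected
    can come from an earlier downstream connection, not the current one.
    """
    return f"PROXY TCP4 {src_ip} {dst_ip} {src_port} {dst_port}\r\n".encode("ascii")

# A header for a hypothetical downstream client 10.0.0.1:51000 reaching 127.0.0.1:9000
header = build_proxy_v1_header("10.0.0.1", "127.0.0.1", 51000, 9000)
print(header)  # b'PROXY TCP4 10.0.0.1 127.0.0.1 51000 9000\r\n'
```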
I thought maybe setting connection_pool_per_downstream_connection: true would help, but it did not.
Repro steps:
Have Envoy listen on 127.0.0.1:9000 and TCP proxy to a cluster that listens for connections, reads the proxy protocol header, and then promptly closes the connection.
The expectation here, and what the logs show, is 20 connections each from the 3 separate addresses above. What the proxy protocol actually sends ends up being a random mix.
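The backend side of this repro can be sketched as a small Python stand-in for the cluster endpoint: accept a connection, read the PROXY v1 line, record it, and close immediately. This is a hypothetical helper (names and port are illustrative), not part of the original report:

```python
import socket

def read_proxy_v1_line(conn: socket.socket) -> str:
    """Read bytes until CRLF and return the PROXY protocol v1 line."""
    data = b""
    while not data.endswith(b"\r\n"):
        chunk = conn.recv(1)
        if not chunk:
            break
        data += chunk
    return data.decode("ascii").strip()

def parse_proxy_v1(line: str) -> dict:
    """Parse 'PROXY TCP4 <src> <dst> <sport> <dport>' into a dict."""
    parts = line.split(" ")
    return {"proto": parts[1], "src": parts[2], "dst": parts[3],
            "sport": int(parts[4]), "dport": int(parts[5])}

def serve_once(host: str = "127.0.0.1", port: int = 9001) -> dict:
    """Accept one connection, read its PROXY header, then close promptly."""
    with socket.create_server((host, port)) as srv:
        conn, _ = srv.accept()
        with conn:
            return parse_proxy_v1(read_proxy_v1_line(conn))
```

Repeatedly connecting through Envoy from several source addresses and comparing each recorded header's src field against the actual downstream source would surface the mismatch described above.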
Suspected cause:
I'm thinking this has something to do with connection pooling, as the following comment suggests it's not exactly a 1:1 setup currently: #4684 (comment)
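The suspected failure mode can be illustrated with a toy connection pool (pure illustration, not Envoy code): if pooled upstream connections are keyed without the per-downstream transport socket options, a reused connection carries the source address captured from whichever downstream created it:

```python
class UpstreamConn:
    """Toy upstream connection that freezes its transport socket
    options (here, just the downstream source address) at creation."""
    def __init__(self, downstream_addr: str):
        self.proxy_proto_src = downstream_addr

pool: dict[str, UpstreamConn] = {}

def get_conn(cluster: str, downstream_addr: str) -> UpstreamConn:
    # Bug illustrated: the pool key ignores the per-downstream options,
    # so a cached connection keeps a stale source address.
    if cluster not in pool:
        pool[cluster] = UpstreamConn(downstream_addr)
    return pool[cluster]

first = get_conn("backend", "10.0.0.1")
second = get_conn("backend", "10.0.0.2")
print(second.proxy_proto_src)  # prints 10.0.0.1 — the previous client's address
```

The eventual fix referenced above addresses exactly this shape of problem: the transport socket options from the first downstream were being reused for subsequent upstream connections.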
Potentially related: #2800 #11694