Support for ExternalTrafficPolicy #451
Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
/remove-lifecycle stale
Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten
/lifecycle frozen
While grooming we saw that this one has been open for a long time without anyone with a strong use case to champion it. We're going to close this as we don't expect anyone is ready to drive it forward, but if you still want this feature and have a strong use case, we will be happy to reconsider and re-open.
Please re-open this. Not acting on this forces us to patch the Service after every deployment, which is an anti-pattern, as documented here: https://istio.io/latest/docs/tasks/security/authorization/authz-ingress/#network. The Kubernetes model is to be declarative; hack-patching imperatively after every apply is definitely not. Even using an admission controller to do it is WTF hacking. I don't see why this change is so objectionable. #196 mentions this issue as being unsatisfactory, but was closed without solving it.
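For context, the imperative workaround being described is roughly the following sketch (the Istio ingress gateway Service and namespace from the linked doc are used for illustration; your Service name and namespace may differ):

```shell
# The imperative "hack-patch": re-applied after every deployment
# to restore the desired traffic policy on the ingress Service.
kubectl patch svc istio-ingressgateway -n istio-system \
  -p '{"spec":{"externalTrafficPolicy":"Local"}}'
```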
@jorhett, this issue is super old and we're not handling these concepts in the same way at all any more. Because of that, this issue won't be reopened as it stands. However, if you would like us to handle a use case, I suggest opening an issue describing the outcome you want, and we can talk more there.
Can you explain what you're looking for? Because "allow me to set externalTrafficPolicy: local", so that the original client IP is preserved, is exactly the outcome we need. In fact, every word in this issue description is valid AFAICT, because he's describing the exact same problem. I don't see how your having changed whatever you changed makes this request invalid, as it's still not possible now.
Okay, your requirement is not externalTrafficPolicy: local then, it's "I want to be able to get the original client IP", presumably for layer 4 connections, since it's completely possible for HTTP connections using X-Forwarded-For or similar mechanisms. I agree that is not currently possible, but trying to reopen a three-year-old issue that proposes a solution, not a problem, is not a productive way of engaging with the community.

The API is no longer called the "Services API" for a reason: we decided to make the constructs more general than only Kubernetes Services. It's called the Gateway API now because we intended there to be implementations that don't use a Service for describing how Layer 4 traffic gets into a cluster. That's what Gateways and Listeners are for.

Now, is there space for a GEP (a Gateway Enhancement Proposal) that describes how a Gateway or Listener can request a behavior similar to externalTrafficPolicy: Local? Quite possibly. But the way that the spec will be defined won't be what's listed in the initial comment here, because the constructs have totally changed in the three years since this issue was logged. As I said earlier, if you really need this feature, logging a new issue is the way to go.
What would you like to be added:
It would be nice if the Service API spec supported some version of the externalTrafficPolicy field from the existing services spec.
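For reference, this is how the field is used on a core Service today (a minimal sketch; the Service name and selector are illustrative):

```shell
# Illustrative only: a plain Service using the existing externalTrafficPolicy field.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: example
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local   # route only to node-local endpoints, preserving client source IP
  selector:
    app: example
  ports:
  - port: 80
    targetPort: 80
EOF
```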
Why is this needed:
Because the existing services resource supports it, and it makes sense to implement this in Service APIs as well. The presence of a health check port (#97) would allow a back-end load balancer platform to determine whether or not an endpoint is healthy, i.e. whether a pod is running on a given node in the Kubernetes cluster.

To understand how the load balancer would interact with this health check, here is an example:
Create a Kind configuration file that specifies a cluster with one control plane node and two worker nodes:
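A minimal sketch of such a Kind configuration (the file name kind-config.yaml is an assumption):

```shell
cat <<EOF > kind-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
EOF
```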
Create a new Kind cluster:
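For example (the cluster name is an assumption):

```shell
kind create cluster --name svc-api-demo --config kind-config.yaml
```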
Persist the IP address of each node in the cluster to a file:
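One way to do this, assuming a node-ips.txt file name, is to read each node's InternalIP from the API:

```shell
kubectl get nodes \
  -o jsonpath='{range .items[*]}{.status.addresses[?(@.type=="InternalIP")].address}{"\n"}{end}' \
  > node-ips.txt
```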
Deploy Nginx to the new cluster and provide a service for the application:
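A sketch of the deployment and Service (kubectl expose has no flag for the traffic policy, so it is patched afterwards; names are illustrative):

```shell
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=LoadBalancer
# externalTrafficPolicy: Local is what causes spec.healthCheckNodePort to be allocated.
kubectl patch service nginx -p '{"spec":{"externalTrafficPolicy":"Local"}}'
```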
Get the port used to perform a health check on the service:
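Assuming the Service above, the allocated health check node port can be read from the Service spec:

```shell
HEALTH_PORT=$(kubectl get service nginx -o jsonpath='{.spec.healthCheckNodePort}')
echo "${HEALTH_PORT}"
```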
Query the service's health port to determine the node on which the pod is running:
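A sketch of that query, reusing node-ips.txt and HEALTH_PORT from the previous steps; kube-proxy serves this health check for externalTrafficPolicy: Local services, and /healthz is the path cloud load balancers typically probe:

```shell
while read -r ip; do
  echo "--- ${ip} ---"
  curl -si "http://${ip}:${HEALTH_PORT}/healthz"
done < node-ips.txt
```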
This will result in output similar to the following:
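Illustrative output only (the exact response body depends on the kube-proxy version): the node running the pod answers 200 with a non-zero localEndpoints count, while the other nodes answer 503.

```
--- 172.17.0.2 ---
HTTP/1.1 200 OK
{"service":{"namespace":"default","name":"nginx"},"localEndpoints":1}
--- 172.17.0.3 ---
HTTP/1.1 503 Service Unavailable
{"service":{"namespace":"default","name":"nginx"},"localEndpoints":0}
--- 172.17.0.4 ---
HTTP/1.1 503 Service Unavailable
{"service":{"namespace":"default","name":"nginx"},"localEndpoints":0}
```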
In the above example, the load balancer platform would be able to set up an L7 health check and determine that only node 172.17.0.2 is valid, as that is where the pod is currently running.

There are a few alternatives to supporting this:
- Dynamically update route endpoints: this is not desired because there may be a delay between a pod moving between nodes and a controller updating the routes associated with a gateways resource's listener, not to mention updating the backend load balancer platform to point to the new endpoints for a listener.
- Add all cluster node IP addresses: this would work, but it's not nearly as efficient as supporting a distinct health port, since it would take time for the load balancer to determine whether or not an endpoint is healthy from dropped traffic, whereas a proper health check detects it right away.
Related to #97