service-upstream sending traffic to unhealthy/terminating pods #8973
@evandam: This issue is currently awaiting triage. If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label. The triage/accepted label can be added by org members by writing /triage accepted in a comment.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/remove-kind bug
But let's wait for other comments, as I am not an expert.
The issue is that switching to using endpoints creates other issues, like being out of sync and needing the hooks.
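(For context, the hooks referred to here are typically preStop delays that keep a terminating pod serving until its removal from the Endpoints object propagates. A minimal sketch, assuming the container image ships a sleep binary and reusing the repro's namespace/deployment names, which are assumptions:)

# Hypothetical illustration: add a preStop sleep so a terminating pod
# keeps serving while endpoint removal propagates to consumers.
kubectl -n test patch deployment podinfo --type=json -p='[
  {"op": "add",
   "path": "/spec/template/spec/containers/0/lifecycle",
   "value": {"preStop": {"exec": {"command": ["sleep", "15"]}}}}
]'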
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
Can someone remove the stale tag and reapply the bug one, please?
@longwuyuan: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
It's a shame; a lot of information and reproducible steps were provided. Thanks for the feedback either way, cheers!
@scalp42 sorry you feel that way, but where do we go from here? The docs clearly indicate that the annotation helps in the zero-downtime-deployments use case: https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#service-upstream. The reference discussion is here: #257. So it looks like this issue description
While this does not seem like a K8S KEP Ingress API spec, it's possible it can be done. But it requires developer time. The first step to that is triaging the issue to completion. The second step is for an expert to opine on the available options for action. Since there is no action item being tracked here, adding it to the tally of 480 open issues can be avoided by closing it. In case the triage data and the related expert discussion get posted here, a relevant tag like bug or feature can be set by the creator of the issue when they re-open it.
What happened:
Ingress-nginx sends traffic to pods with failing readiness probes or in a terminating state when using nginx.ingress.kubernetes.io/service-upstream: "true". Trying to access the same service directly from another pod (e.g. curl podinfo.test.svc.cluster.local) results in the expected behavior of not routing traffic to an unhealthy pod.

What you expected to happen:
When a pod is failing readiness probes or enters the terminating state, requests should not be sent to that pod.
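One quick sanity check (namespace and service name are taken from the repro below and may differ in your setup) is to confirm that the unready or terminating pod has been dropped from the Service's Endpoints, which is exactly why direct service access behaves correctly:

# The failing/terminating pod should no longer be listed here, so
# traffic to the Service's ClusterIP is not routed to it.
kubectl -n test get endpoints podinfo -o wide
# Cross-check pod readiness:
kubectl -n test get pods -o wide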
NGINX Ingress controller version (exec into the pod and run nginx-ingress-controller --version):
Kubernetes version (use kubectl version):

Environment:
- Kernel (uname -a): Linux ip-10-22-21-148.us-west-2.compute.internal 5.4.196-108.356.amzn2.x86_64 #1 SMP Thu May 26 12:49:47 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
4.2.3
Basic cluster related info:
- kubectl version
- kubectl get nodes -o wide

How was the ingress-nginx-controller installed:
- helm ls -A | grep -i ingress
- helm -n <ingresscontrollernamespace> get values <helmreleasename>
- If helm was not used, then copy/paste the complete precise command used to install the controller, along with the flags and options used.
- If you have more than one instance of the ingress-nginx-controller installed in the same cluster, please provide details for all the instances.
Current State of the controller:
- kubectl describe ingressclasses
- kubectl -n <ingresscontrollernamespace> get all -A -o wide
- kubectl -n <ingresscontrollernamespace> describe po <ingresscontrollerpodname>: See above
- kubectl -n <ingresscontrollernamespace> describe svc <ingresscontrollerservicename>: See above

Current state of ingress object, if applicable:
- kubectl -n <appnamespace> get all,ing -o wide
- kubectl -n <appnamespace> describe ing <ingressname>
- kubectl describe ... of any custom configmap(s) created and in use

How to reproduce this issue:
1. Install minikube/kind.
2. Install the ingress controller with the Helm values provided above.
3. Install an application that will act as the default backend (it's just an echo app):
kubectl apply -f https://gist.githubusercontent.com/evandam/05d15d72645827f18ef627096e129ea4/raw/fc40b527cf9d09f6af7b87eabcc5d8aa21b2b9e9/podinfo.yaml
4. Create an ingress (please add any additional annotation required); a minimal sketch follows.
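A minimal sketch of such an ingress; the host is taken from the request below, while the namespace, service name, and port 9898 are assumptions about the gist's podinfo manifest:

# Hypothetical manifest: the only piece required for this repro is the
# service-upstream annotation; host/service/port are assumptions.
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: podinfo
  namespace: test
  annotations:
    nginx.ingress.kubernetes.io/service-upstream: "true"
spec:
  ingressClassName: nginx
  rules:
    - host: cloudops-podinfo.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: podinfo
                port:
                  number: 9898
EOF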
5. Make a request:
POD_NAME=$(kubectl get pods -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx -o name)
kubectl exec -it -n ingress-nginx $POD_NAME -- curl -H 'Host: cloudops-podinfo.example.com/' localhost
6. Cause the readiness probe to start failing (one possible way is sketched below):
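One way to do this, assuming the gist deploys stefanprodan's podinfo (which exposes a runtime toggle for its readiness endpoint); any other change that makes the probe fail works just as well:

# Assumption: the app is stefanprodan/podinfo, whose /readyz/disable
# endpoint makes its readiness probe fail from then on.
kubectl -n test port-forward deploy/podinfo 9898:9898 &
curl -s -X POST localhost:9898/readyz/disable
# Watch the pod flip to NotReady (0/1 in the READY column):
kubectl -n test get pods -w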
7. Note that ingress-nginx continues to send traffic to the unhealthy pod, while accessing the service directly works as expected.
Anything else we need to know:
Our environment has an AWS ALB ingress in front of ingress-nginx, so configs may be slightly different, but the issue is the same.