delete upstream healthcheck annotation #3207
Conversation
/assign @aledbf
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: aledbf, ElvinEfendi

The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
So I'm running into a situation where we need to set max_fails, and because of this "little value" removal we no longer can. Is there a workaround to still set max_fails?
@AndrewFarley
Per the link, this is a situation in which I am not using this so much as a health check mechanism but as a retry and anti-flapping mechanism between upstreams. The only solution I can find is to use max_fails, and since you've removed it, I am left with the option of reverse diff-ing this patch and maintaining our own fork for this purpose. Thoughts, @antoineco?
I'm sure I don't have all the context or reasoning behind this PR, but I generally don't understand why you would remove a feature, even one of "little value", that had no effect unless someone opted to use it, in favor of relying on k8s health checks.
This was removed due to the Lua load balancer change in the ingress controller, which avoids reloads when a pod is created or destroyed. Right now there is no easy way to implement an equivalent to the max_fails feature.
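For context, here is a rough sketch of the kind of annotation usage this PR removes. The exact annotation names are recalled from older ingress-nginx docs and should be treated as assumptions; after this change they have no effect:

```yaml
# Sketch only: annotations of the removed upstream healthcheck feature
# (names assumed from pre-Lua-balancer ingress-nginx docs); they used to map
# to nginx's max_fails / fail_timeout on the upstream servers.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example
  annotations:
    nginx.ingress.kubernetes.io/upstream-max-fails: "3"
    nginx.ingress.kubernetes.io/upstream-fail-timeout: "10"
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: example-svc
          servicePort: 80
```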
@aledbf proxy_next_upstream might work for this situation; I'll try to fiddle with it. Thanks for the info and the history behind this change.
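For anyone landing here later, a minimal sketch of the retry-oriented annotations that ingress-nginx does expose (availability depends on the controller version, so treat this as an assumption). They tune nginx's proxy_next_upstream behaviour per location; unlike max_fails, the peer is not marked as failed for subsequent requests:

```yaml
# Sketch, not a drop-in replacement for max_fails: retries apply per request only.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example
  annotations:
    # retry the next endpoint on connection errors, timeouts and selected 5xx codes
    nginx.ingress.kubernetes.io/proxy-next-upstream: "error timeout http_502 http_503 http_504"
    # limit how many endpoints a single request may try
    nginx.ingress.kubernetes.io/proxy-next-upstream-tries: "3"
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: example-svc
          servicePort: 80
```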
Hi @AndrewFarley @aledbf, details below:
(a) We cannot always rely on the k8s health checking: this kind of 40+ second failover duration is a problem that is unacceptable for many critical businesses.
(b) To reproduce:
(1) set up two pods for a deployment, running on two nodes.
(3) loop curl -H "Host:xxx" ingressIp:port (yes, it's
I found a post mentioning it, but I have no idea why it's not working... The original post:
@panpan0000 I'm afraid you misinterpreted my comment there. See https://github.com/openresty/lua-resty-core/blob/master/lib/ngx/balancer.md#set_more_tries
When using ingress for a custom endpoint which doesn't belong to any Pod, an upstream healthcheck is necessary.
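For readers unfamiliar with that setup, a minimal sketch (hypothetical names and addresses) of such a backend: a Service with no selector, backed by a manually managed Endpoints object. Kubernetes never probes that address, so k8s-based healthchecking cannot cover it:

```yaml
# Hypothetical example: Service without a selector pointing at an address
# outside the cluster; no Pod, hence no readiness/liveness probe to rely on.
apiVersion: v1
kind: Service
metadata:
  name: legacy-backend
spec:
  ports:
  - port: 80
---
apiVersion: v1
kind: Endpoints
metadata:
  name: legacy-backend   # must match the Service name
subsets:
- addresses:
  - ip: 10.0.0.42        # external VM or appliance, not a Pod
  ports:
  - port: 80
```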
What this PR does / why we need it:
These annotations have little value in ingress-nginx; ingress-nginx relies on k8s for healthchecking.
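As an illustration of what relying on k8s for healthchecking looks like in practice, a minimal sketch with hypothetical names; the kubelet runs these probes and only Ready endpoints stay in the upstream list:

```yaml
# Hypothetical deployment snippet: readiness controls traffic, liveness restarts.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: app
        image: example/app:1.0      # placeholder image
        ports:
        - containerPort: 8080
        readinessProbe:             # failing => endpoint removed from the Service
          httpGet:
            path: /healthz
            port: 8080
          periodSeconds: 5
          failureThreshold: 2
        livenessProbe:              # repeated failures => container restarted
          httpGet:
            path: /healthz
            port: 8080
          periodSeconds: 10
```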
Which issue this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close that issue when PR gets merged): fixes #

Special notes for your reviewer: