GCE: ingress only shows the first backend's healthiness in backends annotation #35
Comments
From @MrHohn on September 20, 2017 20:57 @yastij Nope, though I'm not quite sure how we should present the backends' healthiness in the annotation --- for a huge cluster, we might have too many backends (nodes). It also seems unwise to append all of them via annotation... cc @nicksardo
From @yastij on September 20, 2017 21:13 Maybe state healthy when all the backends are healthy (checking all backends), and unhealthy when some aren't (specifying which ones aren't healthy)
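For concreteness, a minimal sketch of that kind of aggregation; the function name, the backend names, and the HEALTHY/UNHEALTHY strings below are illustrative assumptions, not the controller's actual code or annotation format:

```go
// Minimal sketch (not the controller's actual code) of the aggregation
// described above: report HEALTHY only when every backend is healthy,
// otherwise report UNHEALTHY and name the offending backends.
package main

import (
	"fmt"
	"sort"
	"strings"
)

// summarize collapses a map of backend name -> health state into a single
// annotation-sized string instead of appending every backend.
func summarize(states map[string]string) string {
	var unhealthy []string
	for name, state := range states {
		if state != "HEALTHY" {
			unhealthy = append(unhealthy, name)
		}
	}
	if len(unhealthy) == 0 {
		return "HEALTHY"
	}
	sort.Strings(unhealthy) // deterministic output
	return fmt.Sprintf("UNHEALTHY (%s)", strings.Join(unhealthy, ", "))
}

func main() {
	fmt.Println(summarize(map[string]string{
		"k8s-be-30001": "HEALTHY",
		"k8s-be-30002": "UNHEALTHY",
	}))
	// UNHEALTHY (k8s-be-30002)
}
```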
From @MrHohn on September 20, 2017 21:18
Yeah that sort of makes sense, though for the
From @nicksardo on September 25, 2017 23:29
I do not know if people use externalTrafficPolicy=Local with ingress (I've never tried it) and it's not something we document with ingress. It may technically work, but I don't know how well it works in production with rolling updates and other edge cases. If we wanted to support that case, another option is to correlate the instance status with the pods' location (node).
I agree that this annotation is not accurate. Even if it shows a correct status, the annotation is only refreshed on every sync (which may be 10 minutes or longer if there are a lot of ingress objects).
My question is whether this annotation is worth keeping. Wouldn't users be better off looking at the GCP Console for backend status? Do users have daemons which poll this annotation and perform alerts? If the only case we're concerned about is a bad health check configuration breaking all backends, couldn't we create an alert saying "All backends are unhealthy - please investigate"?
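On the alerting idea: a daemon would only need the annotation value itself. A rough sketch, assuming (purely for illustration) that the value is a JSON object mapping backend name to a health-state string:

```go
// Rough sketch of an "all backends are unhealthy" check over the annotation
// value; the JSON shape and the state strings are assumptions for illustration.
package main

import (
	"encoding/json"
	"fmt"
)

// allUnhealthy reports whether every backend listed in the annotation value
// is in a non-healthy state.
func allUnhealthy(annotationValue string) (bool, error) {
	var states map[string]string
	if err := json.Unmarshal([]byte(annotationValue), &states); err != nil {
		return false, err
	}
	if len(states) == 0 {
		return false, nil // nothing to alert on
	}
	for _, state := range states {
		if state == "HEALTHY" {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	fire, _ := allUnhealthy(`{"k8s-be-30001":"UNHEALTHY","k8s-be-30002":"UNHEALTHY"}`)
	fmt.Println("alert:", fire) // alert: true
}
```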
Issues go stale after 90d of inactivity. Prevent issues from auto-closing with an /lifecycle frozen comment. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@bowei @MrHohn @nicksardo - is this still open?
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle rotten
thanks -- let's keep this one open
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle frozen
Any update on this? In the kube-proxy world, showing the first backend's health status is probably OK, as traffic can hop between nodes when accessed via a node port, so each backend group will report health regardless of whether there is a backend in that zone at all. However, with NEG backends, it's possible to have zones with no corresponding backends at all. In that case, the health status for those backend groups will be constantly unknown, and the console looks broken even though the ingress works fine. For this reason, I think this should be fixed.
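A sketch of what fixing this could imply, assuming the controller knows each backend group's endpoint count; the types and field names below are made up for illustration and are not ingress-gce's actual ones. Empty groups are simply skipped when computing the aggregate:

```go
// Illustrative only: ignore backend groups (e.g. NEGs) that have no
// endpoints, so zones without backends cannot drag the aggregate status
// down to "Unknown" even though the ingress itself works fine.
package main

import "fmt"

type backendGroup struct {
	Zone          string
	EndpointCount int
	Health        string // "HEALTHY", "UNHEALTHY", or "Unknown"
}

func aggregate(groups []backendGroup) string {
	result := "Unknown"
	for _, g := range groups {
		if g.EndpointCount == 0 {
			continue // empty zone: says nothing about real backends
		}
		switch g.Health {
		case "UNHEALTHY":
			return "UNHEALTHY" // any real unhealthy backend wins
		case "HEALTHY":
			result = "HEALTHY"
		}
	}
	return result
}

func main() {
	fmt.Println(aggregate([]backendGroup{
		{Zone: "us-central1-a", EndpointCount: 3, Health: "HEALTHY"},
		{Zone: "us-central1-b", EndpointCount: 0, Health: "Unknown"},
	}))
	// HEALTHY
}
```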
@freehan -- can we put this in the backlog? It looks like a self-contained item
FTR: the GKE console issue is tracked at https://issuetracker.google.com/issues/130748827.
This has been fixed with #936.
From @MrHohn on September 20, 2017 1:43
From kubernetes/enhancements#27 (comment).
We attach the backends annotation to the ingress object after LB creation. And from the implementation:
https://github.com/kubernetes/ingress/blob/937cde666e533e4f70087207910d6135c672340a/controllers/gce/backends/backends.go#L437-L452
Using only the first backend's healthiness to represent the healthiness for all backends seems incorrect.
cc @freehan
Copied from original issue: kubernetes/ingress-nginx#1395