Zonelister logic prevents instance removal from instance group & instance group GC #50
Comments
From @nikhiljindal on July 25, 2017 1:39: @nicksardo pointed out that the logic was removed in kubernetes/ingress-nginx@c7c2a56#diff-9141c651905f3492033cf255f8e12fd7L176. @aledbf Was that intentional?
From @nicksardo on July 25, 2017 18:18: Quick FYI: the instance groups are eventually synced when the resync period occurs for any ingress. This may explain why nobody has noticed this in practice.
From @nicksardo on July 31, 2017 23:47: This still occurs despite having the fix. I'm guessing that the handler is called when the node is added; however, its NotReady state prevents the instance group from being updated. I don't know why the handler isn't called again when the node becomes Ready.
From @aledbf on August 1, 2017 0:00: @nicksardo maybe because the handler does not have an UpdateFunc.
From @nicksardo on August 1, 2017 0:32: Ah, that would certainly do it. According to the comment, we don't want to have an UpdateFunc.
From @nikhiljindal on August 1, 2017 17:08: If there were a way to find out what changed in the update, we could add an UpdateFunc that enqueues the node only if there has been a "relevant change" in the node. A node's heartbeat being updated, for example, is not a "relevant change".
From @nicksardo on August 1, 2017 17:14: Right, not a quick change though.
From @nicksardo on August 3, 2017 0:54: Looks like removing nodes from instance groups has never worked.
From @nicksardo on September 14, 2017 20:42: Note: We also need to address node syncing with regard to large clusters. https://storage.googleapis.com/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-scale-correctness/12/artifacts/gce-scale-cluster-master/glbc.log
What are the workarounds for this in practice? Does everybody follow up the Kubernetes delete-node API call with a second call to the GCP API? (That's what I do; it's annoying.)
/help-wanted
@rramkumar1: Please ensure the request meets the requirements listed here. If this request no longer meets these requirements, the label can be removed.
/kind bug
/lifecycle frozen
The node can't be found because GetZoneFromNode filters out nodes with a NotReady status. I opened a PR to fix this particular issue. I'm working on the Add/Update/Delete functions, but it might take a lot of time. I've found a way to connect the node object from the nodeInformer with ingresses: it looks ugly right now, but it's working. However, I haven't found a way to track only relevant node updates.
@bowei anything that can be done here?
There have been various fixes in this area to make the ingress controllers react to zone changes. If the issues still exist, please open a new issue with details.
From @nikhiljindal on July 25, 2017 1:38
Looking at the code (https://github.com/kubernetes/ingress/blob/a58b80017170eecbe8b2d6573b66192cafe0d32a/controllers/gce/controller/controller.go#L177), it looks like we are not doing anything when nodes are added and deleted. We should be updating the instance group when that happens.
Am I missing something?
Copied from original issue: kubernetes/ingress-nginx#1012