Properly set IP Mode of Load Balancer Addresses to fix internal PROXY traffic #727
Would be fantastic if this could be implemented! I have been banging my head against the wall for quite some time after running into this issue. It results in rather strange/unexpected behaviour in the cluster. It would be a BIG help if this status field could be added, to prevent others from wasting time when trying to trace this issue. Related article with someone else running into this:
…ured to use proxy protocol (#727) (#783)

[KEP-1860](https://github.com/kubernetes/enhancements/tree/master/keps/sig-network/1860-kube-proxy-IP-node-binding) introduced a new field `service.status.loadBalancer.ingress[].ipMode: VIP | Proxy` to indicate the behavior of the Load Balancer. Currently, users on IPVS-based networking setups cannot access the Load Balancer IPs when they enable the PROXY protocol.

Expected behavior: users should always be able to access their services from inside the cluster, even if they use IPVS and the PROXY protocol. IIUC we should set the IP mode to `Proxy` for the IPs we return in the status.

Co-authored-by: simonhammes <simonhammes@users.noreply.github.com>
This issue has been marked as stale because it has not had recent activity. The bot will close the issue if no further action occurs.
Added in #783
<!-- section-start changelog -->
### Feature Highlights & Upgrade Notes

#### Load Balancer IPs set to Private IPs

If networking support is enabled, the load balancer IPs are now populated with the private IPs, unless the `load-balancer.hetzner.cloud/disable-private-ingress` annotation is set to `true`. Please make sure that you have configured the annotation according to your needs, for example if you are using `external-dns`.

#### Provided-By Label

We introduced the label `instance.hetzner.cloud/provided-by`, which is automatically added to all **new** nodes. This label can have the values `cloud` or `robot` to distinguish between our products. We use this label in the csi-driver to ensure the daemonset only runs on cloud nodes. We recommend adding this label to your existing nodes with the appropriate value:

- `kubectl label node $CLOUD_NODE_NAME instance.hetzner.cloud/provided-by=cloud`
- `kubectl label node $ROBOT_NODE_NAME instance.hetzner.cloud/provided-by=robot`

#### Load Balancer IPMode Proxy

Kubernetes KEP-1860 added a new field to the Load Balancer Service status that allows us to mark whether the IP address we add should be considered a Proxy (always send traffic here) or a VIP (allow optimization by keeping the traffic in the cluster). Previously, Kubernetes considered all IPs as VIP, which caused issues when the PROXY protocol was in use. We have previously recommended using the annotation `load-balancer.hetzner.cloud/hostname` to work around this problem. We now set the new field to `Proxy` if the PROXY protocol is active, so the issue should no longer appear. If you only added the `load-balancer.hetzner.cloud/hostname` annotation for this problem, you can remove it after upgrading.

Further information:

- kubernetes/enhancements#1860
- #160 (comment)

### Features

- **service**: Specify private ip for loadbalancer (#724)
- add support & tests for Kubernetes 1.31 (#747)
- **helm**: allow setting extra pod volumes via chart values (#744)
- **instance**: add label to distinguish servers from Cloud and Robot (#764)
- emit event when robot server name and node name mismatch (#773)
- **load-balancer**: Set IPMode to "Proxy" if load balancer is configured to use proxy protocol (#727) (#783)
- **routes**: emit warning if cluster cidr is misconfigured (#793)
- **load-balancer**: ignore nodes that don't use known provider IDs (#780)
- drop tests for kubernetes v1.27 and v1.28

### Bug Fixes

- populate ingress private ip when disable-private-ingress is false (#715)
- wrong version logged on startup (#729)
- invalid characters in label instance-type of robot servers (#770)
- no events are emitted as broadcaster has no sink configured (#774)

### Kubernetes Support

This version was tested with Kubernetes 1.29 - 1.31. Furthermore, we dropped v1.27 and v1.28 support.
<!-- section-end changelog -->
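As a sketch of how the annotations mentioned above fit together, here is a minimal Service manifest. The annotation keys appear in these release notes, but the service name, selector, ports, and values are illustrative assumptions, not taken from the project.

```yaml
# Illustrative example; adjust names and values to your setup.
apiVersion: v1
kind: Service
metadata:
  name: example-service   # hypothetical name
  annotations:
    # Enable the PROXY protocol on the Hetzner Load Balancer;
    # with this release the controller then reports ipMode: Proxy.
    load-balancer.hetzner.cloud/uses-proxyprotocol: "true"
    # Set to "true" to NOT publish the private IP in the ingress status.
    load-balancer.hetzner.cloud/disable-private-ingress: "true"
spec:
  type: LoadBalancer
  selector:
    app: example
  ports:
    - port: 443
      targetPort: 8443
```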
TL;DR

KEP-1860 introduced a new field `service.status.loadBalancer.ingress[].ipMode: VIP | Proxy` to indicate the behavior of the Load Balancer. Currently, users on IPVS-based networking setups cannot access the Load Balancer IPs when they enable the PROXY protocol.

Expected behavior

Users should always be able to access their services from inside the cluster, even if they use IPVS and the PROXY protocol. IIUC we should set the IP mode to `Proxy` for the IPs we return in the status.