---
layout: blog
title: "Kubernetes 1.29: New (alpha) feature LoadBalancerIPMode for service"
date: 2023-11-13
slug: kubernetes-1-29-feature-loadbalancer-ip-mode-alpha
---

**Author:** Aohan Yang

This blog post introduces `LoadBalancerIPMode`, a new alpha feature in Kubernetes 1.29.
It provides a configurable way to define how Service implementations,
exemplified in this post by kube-proxy,
handle traffic from pods to `.status.loadBalancer.ingress.ip` within the cluster.

## Background

As of today, the different kube-proxy modes (including ipvs and iptables) bind the external IP
of a Service whose type is set to `LoadBalancer` on each node.
In iptables mode this is achieved by redirecting packets destined for that IP directly to the Service,
and in ipvs mode by binding the IP to an interface on the node. This behavior exists for the following reasons:

1. **Traffic Path Optimization:** Pod traffic sent to the load balancer IP is efficiently redirected
directly to the backend Service, bypassing the load balancer.

2. **Handling Load Balancer Packets:** Some load balancers send packets with the destination IP set to
the load balancer's IP. As a result, these packets need to be routed directly to the correct backend service
on the node to avoid loops.


## Problems

However, there are several problems with the aforementioned behavior:

1. **Source IP Issue:** Some cloud providers use the load balancer's IP as the source IP when
transmitting packets to the node. In the ipvs mode of kube-proxy,
health checks from the load balancer never return: because the IP is bound to an interface on the node,
the reply is routed to the locally bound IP and never leaves the node.

2. **[Feature Loss at Load Balancer Level](https://github.com/kubernetes/kubernetes/issues/66607):** Certain
cloud providers offer features (such as TLS termination, the PROXY protocol, etc.) at the load balancer level.
Bypassing the load balancer means these features are lost when the packet reaches the Service (leading to protocol errors).


Currently, there is a workaround: setting the `hostname` of `.status.loadBalancer.ingress`
to bypass the kube-proxy binding ([as AWS and DigitalOcean do](https://github.com/kubernetes/kubernetes/issues/66607#issuecomment-474513060)).
However, this is just a makeshift solution.

## Solution

In summary, providing an option for cloud providers to disable the current behavior would be highly beneficial.

To address this, we propose a new field, `ipMode`, in `.status.loadBalancer.ingress`.
This field specifies how the load balancer IP behaves, and it can be set only when
the `.status.loadBalancer.ingress.ip` field is also specified.

Two values are possible for `.status.loadBalancer.ingress.ipMode`: "VIP" and "Proxy".
The default value is "VIP", meaning that traffic delivered to the node
with the destination set to the load balancer's IP and port will be redirected to the backend service by kube-proxy.
This preserves the existing behavior of kube-proxy.
The "Proxy" value is intended to prevent kube-proxy from binding the Load Balancer IP to the node in both ipvs and iptables modes.
Consequently, traffic is sent directly to the load balancer and then forwarded to the destination node.
The destination setting for forwarded packets varies depending on how the cloud provider's load balancer delivers traffic:

- If the traffic is delivered to the node then DNATed to the pod, the destination would be set to the node's IP and node port;
- If the traffic is delivered directly to the pod, the destination would be set to the pod's IP and port.
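For example, a Service whose provider forwards traffic without preserving the destination VIP might report a status like the following (the IP address here is a placeholder):

```yaml
status:
  loadBalancer:
    ingress:
    - ip: 192.0.2.10
      ipMode: Proxy
```

With `ipMode: Proxy`, kube-proxy on each node skips binding `192.0.2.10`, so in-cluster traffic to that IP actually traverses the cloud load balancer.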

Given that `EnsureLoadBalancer` returns a `LoadBalancerStatus`,
the `ipMode` field can be set by the cloud-controller-manager before returning the status.
It is more appropriate to delegate this decision to cloud providers through the cloud-controller-manager
rather than relying on end users, who may not be familiar with these technical details.

## Usage

Here are the necessary steps to enable this feature:

- Download the [latest Kubernetes release](/releases/download/) (version `v1.29.0` or later).
- Enable the feature gate with the command line flag `--feature-gates=LoadBalancerIPMode=true`
on kube-proxy, kube-apiserver, and cloud-controller-manager.
- Set `ipMode` to the appropriate value for Services whose type is `LoadBalancer`.
This step is likely handled by the cloud-controller-manager during the `EnsureLoadBalancer` process.
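As an illustration, you create a `LoadBalancer` Service as usual (the name, selector, and ports below are placeholders); once the cloud-controller-manager provisions the load balancer, `ipMode` appears alongside the assigned IP in the Service status:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-lb
spec:
  type: LoadBalancer
  selector:
    app: example
  ports:
  - port: 80
    targetPort: 8080
```

You can then inspect the resulting mode with
`kubectl get service example-lb -o jsonpath='{.status.loadBalancer.ingress[0].ipMode}'`.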

## More information

- Read [Specifying IPMode of load balancer status](/docs/concepts/services-networking/service/#load-balancer-ip-mode)
- Read [KEP-1860 Make Kubernetes aware of the LoadBalancer behaviour](https://github.com/kubernetes/enhancements/tree/master/keps/sig-network/1860-kube-proxy-IP-node-binding#kep-1860-make-kubernetes-aware-of-the-loadbalancer-behaviour)

## Getting involved

Reach us on [Slack](https://slack.k8s.io/) in [#sig-network](https://kubernetes.slack.com/messages/sig-network),
or through the [mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-network).

## Acknowledgments

Huge thanks to [Sh4d1](https://github.com/Sh4d1) for the original KEP and the initial implementation code.
I took over midway and completed the work. Similarly, immense gratitude to the other contributors
who assisted in the design, implementation, and review of this feature (alphabetical order):

- [aojea](https://github.com/aojea)
- [danwinship](https://github.com/danwinship)
- [sftim](https://github.com/sftim)
- [tengqm](https://github.com/tengqm)
- [thockin](https://github.com/thockin)
- [wojtek-t](https://github.com/wojtek-t)
