Internal Load balancer created for public cluster #696
Comments
@CecileRobertMichon creating this from our conversation a while back. @jackfrancis maybe you have some insight here?
@jackfrancis does AKS Engine currently provision an internal LB for non-private clusters? @justaugustus @awesomenix do you have any context on why the internal LB was originally added in capz?
@CecileRobertMichon it does if there is more than one VM backing the control plane. If there's only one VM, then no.
@jackfrancis even since Azure/aks-engine#2953?
That PR didn't change the Load Balancer implementation. I'll confirm.
That PR was for control plane-originating requests to self-route. Other cluster traffic (e.g., from nodes) needs an LB to ensure a response when a single control plane VM goes offline.
Confirmed:
From a cluster built w/ this config: [config snippet not captured]
Thanks Jack! @jsturtevant does that answer the question?
Originally the internal LB was created for worker nodes to communicate with the control plane (so that traffic doesn't go through the public load balancer <- save money :)), not sure if that's recently changed. Just as Jack mentioned.
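For illustration, a minimal sketch of what routing workers through an internal LB would look like in a kubeadm JoinConfiguration; the endpoint IP, token, and config version below are placeholders/assumptions, not what CAPI actually generated:

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: JoinConfiguration
discovery:
  bootstrapToken:
    # Hypothetical internal LB frontend IP: workers would reach the API
    # server over the VNet instead of hairpinning through the public LB.
    apiServerEndpoint: "10.0.0.100:6443"
    token: "abcdef.0123456789abcdef"  # placeholder bootstrap token
    # For brevity only; a real join config should pin caCertHashes instead.
    unsafeSkipCAVerification: true
```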
Went through the current code and also checked a running cluster: it seems the internal LB is created but not used; all kubelets (control plane and worker) still talk to the public LB. The kubeadm init/join config is generated by CAPI, and right now the v1alpha3 cluster.Spec has only one APIEndpoint, which should be the public LB, so it isn't possible to generate a kubeadm join config with a separate internal APIEndpoint. v1alpha2 had APIEndpoint as an array under cluster.Status, so it was possible to do this; maybe that's where this design came from.
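A minimal sketch of the v1alpha3 shape described above, with placeholder names and addresses; the point is that the spec carries exactly one endpoint:

```yaml
apiVersion: cluster.x-k8s.io/v1alpha3
kind: Cluster
metadata:
  name: my-cluster
spec:
  # v1alpha3 exposes a single endpoint here, so only one address can flow
  # into the generated kubeadm configs. v1alpha2 kept a list under
  # cluster.Status.APIEndpoints, which could also have carried an internal one.
  controlPlaneEndpoint:
    host: my-cluster-abc123.eastus.cloudapp.azure.com  # public LB FQDN (placeholder)
    port: 6443
```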
/kind bug |
/assign |
/close |
@CecileRobertMichon: Closing this issue.
/kind feature
Describe the solution you'd like
I created a cluster using `make create-workload-cluster`, which uses the default flavor for workload clusters. The workload cluster was created with both an internal and a public load balancer for the control plane. If I create a public cluster using aks-engine, I do not get an internal load balancer. It makes sense to have an internal load balancer for private clusters. Is there a reason for an internal load balancer on public clusters? Is there a reason to keep it, or should it only be created when private clusters are enabled (#486)?
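If creation were gated on private clusters, the intent might be expressed with something like the sketch below; note that `apiServerLB` and its `type` field are hypothetical illustration here, not fields of the v1alpha3 AzureCluster API at the time of this issue:

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: AzureCluster
metadata:
  name: my-cluster
spec:
  networkSpec:
    # Hypothetical knob: provision the internal LB only when the API server
    # is private; public clusters would get just the public LB.
    apiServerLB:
      type: Internal  # vs. Public
```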
Anything else you would like to add:
Environment:
- Kubernetes version (use `kubectl version`): any
- OS (e.g. from `/etc/os-release`):