
CCM needs agent/worker nodes to create the automatic loadbalancer for a service/ingress #107

Closed
ikatergaris opened this issue Mar 24, 2022 · 4 comments · Fixed by #108

Comments

@ikatergaris

I suppose the easiest way to reproduce would be to create a single-node cluster with K3s and make it a master. Apply the generated ccm-linode.yaml following the how-to and then try to LoadBalance a service. It will fail. Add an agent node to the cluster and it will succeed.

If I'm correct about this, it is limiting our options.
For example, I have set up an HA K3s cluster for Rancher and plan to use it to create more clusters. The three control-plane nodes should be able to do the whole job; there should be no need to add agent nodes, because you would also need at least two of them.

I thought I'd reach out and see whether this is something I'm doing wrong and there is a label or an annotation I'm missing that might solve my problem.

@ikatergaris
Author

This recently closed issue contains the log message that appears when no agents have been added to the cluster. Hopefully it can help:
#68 (comment)

@sibucan
Contributor

sibucan commented Mar 24, 2022

Hi @ikatergaris! Thank you for bringing this up.

I think this was an issue in the upstream service controller, not in our code. It seems that, by design, a LoadBalancer service would not include nodes that are marked as control plane (master) nodes; see: kubernetes/kubernetes#65618

For thoroughness, I checked the upstream service controller for our k8s version in go.mod (v1.19.2, yikes!) and yes -- it does exclude nodes if they are master nodes: https://github.com/kubernetes/kubernetes/blob/f5743093fd1c663cb0cbc89748f730662345d44d/staging/src/k8s.io/cloud-provider/controllers/service/controller.go#L641
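For illustration, the old exclusion check boils down to something like the following. This is a simplified Go sketch of that behavior, not the actual upstream code; the `Node` type and the function name `includeNodeForLoadBalancer` are stand-ins invented for this example.

```go
package main

import "fmt"

// LabelNodeRoleMaster is the label the pre-v1.21 upstream service
// controller checked when filtering LoadBalancer backend nodes.
const LabelNodeRoleMaster = "node-role.kubernetes.io/master"

// Node is a minimal stand-in for the Kubernetes Node object,
// carrying only the labels relevant to this sketch.
type Node struct {
	Name   string
	Labels map[string]string
}

// includeNodeForLoadBalancer mirrors the old predicate: any node
// carrying the master role label is skipped entirely, regardless
// of its readiness or capacity.
func includeNodeForLoadBalancer(n Node) bool {
	if _, isMaster := n.Labels[LabelNodeRoleMaster]; isMaster {
		return false
	}
	return true
}

func main() {
	nodes := []Node{
		{Name: "cp-1", Labels: map[string]string{LabelNodeRoleMaster: "true"}},
		{Name: "worker-1", Labels: map[string]string{}},
	}
	for _, n := range nodes {
		fmt.Printf("%s included: %v\n", n.Name, includeNodeForLoadBalancer(n))
	}
	// Prints:
	// cp-1 included: false
	// worker-1 included: true
}
```

In a single-node or control-plane-only cluster, every node fails this predicate, so the NodeBalancer ends up with no backends, which matches the behavior described above.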

This code seems to have been removed entirely starting in v1.21.0, so I believe the first thing we can do on our end is to update our k8s package to at least v1.21.0. On your end, you could try deleting the node-role.kubernetes.io/master label from your node so the CCM will observe it for the LoadBalancer. Let me know if that workaround helps you for now while we get a new CCM release out the door.

@ikatergaris
Author

Your suggestion worked. I'll be eagerly waiting for the next release :-)
Thanks for the quick reply and the workaround.

@sibucan
Contributor

sibucan commented Apr 18, 2022

@ikatergaris The changes have been merged; let me know if your LoadBalancer services work on your cluster without needing to delete that special label. 😄
