Why does GCP deploy an ingress controller on the master node rather than a worker node? #685
@toyota790 I'm sure there are many reasons, but my guess is that it's easier from an authentication/authorization standpoint. Ingress-GCE needs permissions to edit networking resources in the project, and for that, certain credentials need to be plumbed in via a config file. On GCE & GKE, this config file lives on the master. Alternatively, if we deployed on one of the worker nodes, this config file would need to be spread across every node in the cluster, because we don't know which node the controller will be scheduled on.
@rramkumar1 Thank you for the quick reply. But in that case, we could use a ConfigMap to store the config settings; we wouldn't need to spread the config file across each node. So I don't think that's the main reason. Do you have any other ideas on this? Thank you! :)
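A minimal sketch of the idea, assuming the gce.conf-style config that Ingress-GCE reads (the key names below mirror the sample config in deploy/glbc, and every value is a placeholder):

```yaml
# Hypothetical ConfigMap holding the controller's GCE config, so the file
# can be mounted on whichever node the pod lands on instead of having to
# pre-exist on each node.
apiVersion: v1
kind: ConfigMap
metadata:
  name: gce-config
  namespace: kube-system
data:
  gce.conf: |
    [global]
    token-url = nil
    project-id = my-project
    network-name = my-network
    node-tags = my-node-tag
    local-zone = us-central1-a
```

Mounting this as a volume would make the file available wherever the pod is scheduled, which is the commenter's point.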
@toyota790 I'm not sure what to say then. That's just the way it was done.
Why do you say so? Pods running on the master should not affect the scheduling of pods on worker nodes. If you are on GCE, you can always run the controller off-master by deleting the existing pod and running your own, though you will have to handle authn/authz yourself. On GKE, you can follow the instructions in https://github.com/kubernetes/ingress-gce/tree/master/deploy/glbc.
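For the GCE case, a rough sketch of what "running your own" could look like, mounting the ConfigMap from the sketch above; the image reference and the flag name are illustrative assumptions, and the real manifest lives in deploy/glbc:

```yaml
# Hypothetical Deployment running the controller on a worker node rather
# than the master. Image and args are placeholders, not the actual values.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: l7-lb-controller
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: l7-lb-controller
  template:
    metadata:
      labels:
        app: l7-lb-controller
    spec:
      containers:
        - name: l7-lb-controller
          image: gcr.io/example/glbc:v1.0.0         # placeholder image
          args:
            - --config-file-path=/etc/gce/gce.conf  # assumed flag name
          volumeMounts:
            - name: gce-config
              mountPath: /etc/gce
              readOnly: true
      volumes:
        - name: gce-config
          configMap:
            name: gce-config
```

Once the pod is off the master, it no longer inherits the master's credentials, which is exactly the authn/authz caveat mentioned above.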
@rramkumar1 Thank you for the information.
I ask because in most of the use cases I have seen, whether in blog posts or in discussions on the Kubernetes Slack channel, people tend to deploy the ingress controller on worker nodes rather than on the master. GCP did the opposite, and that aroused my curiosity about the design principle or motivation behind this choice.
Since pods running on the master node share that node's resources (CPU and memory), I thought this might affect the performance of kube-scheduler, kube-apiserver, or related components. Please correct me if I am wrong. Thank you so much!
On GCE and GKE we ensure that each component has sufficient resources, so that is not a problem. I'm going to go ahead and close this, but feel free to respond if you have more questions.
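For context, the standard Kubernetes mechanism behind "sufficient resources" is the requests/limits stanza on a container spec; the numbers below are purely illustrative, not what GCE or GKE actually reserves:

```yaml
# Illustrative container-spec fragment: requests guarantee schedulable
# capacity for the pod, while limits cap its usage so it cannot starve
# co-located components such as kube-apiserver or kube-scheduler.
resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    cpu: 200m
    memory: 256Mi
```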
I found this GitHub page, which states that the ingress controller is deployed on the master node.
I am just curious why GCP would deploy an ingress controller on the master node rather than a worker node. I think it may impact the performance of scheduling or other things if it is deployed on the master. In general, we would probably not deploy it on the master, considering the performance impact. Is there any specific reason or other consideration behind this?
Thank you so much!