
Why does GCP deploy an ingress controller on the master node rather than a worker node? #685

Closed
toyota790 opened this issue Mar 13, 2019 · 5 comments

Comments

@toyota790

I found that this GitHub page states:

On GCP (either GCE or GKE), every Kubernetes cluster has an Ingress controller running on the master, no deployment necessary.

I am just curious why GCP would deploy an ingress controller on the master node rather than a worker node. I think it may impact the performance of scheduling or other things if it is deployed on the master. In general, we would probably avoid deploying it on the master because of that performance impact. Is there any specific reason or other consideration behind this?

Thank you so much!

@rramkumar1
Contributor

@toyota790 I'm sure there are many reasons, but my guess is that it's easier from an authentication / authorization standpoint.

Ingress-GCE needs permissions to edit networking resources in the project, and for that, certain credentials need to be plumbed in via a config file. On GCE & GKE, this config file lives on the master. If we instead deployed on one of the worker nodes, the config file would need to be distributed to every node in the cluster, because we don't know which node the controller will be scheduled on.
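To make that constraint concrete, here is a rough sketch of a pod that consumes a node-local config file via a hostPath volume; a pod like this only works on a node that actually has the file on disk. The node name, image, and path are placeholders, not the actual master manifest:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// A hostPath volume only resolves on the node the pod runs on, so a
	// node-local credentials file either pins the controller to that node
	// or forces the file to be copied to every node it might land on.
	// The node name, path, and image below are placeholders.
	pod := corev1.Pod{
		Spec: corev1.PodSpec{
			NodeName: "example-master", // static placement, bypassing the scheduler
			Volumes: []corev1.Volume{{
				Name: "gce-config",
				VolumeSource: corev1.VolumeSource{
					HostPath: &corev1.HostPathVolumeSource{Path: "/etc/gce.conf"},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "ingress-controller",
				Image: "registry.example.com/ingress-controller:latest",
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "gce-config",
					MountPath: "/etc/gce.conf",
				}},
			}},
		},
	}
	fmt.Println("config expected at:", pod.Spec.Volumes[0].HostPath.Path)
}
```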

@toyota790
Author

toyota790 commented Mar 13, 2019

@rramkumar1 Thank you for the quick reply. But in that case, we could use a ConfigMap to store the config settings; we wouldn't need to distribute the config file to every node. So I don't think that's the main reason.
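For example, here is a minimal sketch of what I mean, assuming a hypothetical ConfigMap named gce-config with a gce.conf key in kube-system (not how ingress-gce is actually wired up). The controller can read its configuration through the API server from whichever node it lands on, e.g. with a recent client-go:

```go
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// In-cluster config works from whichever node the controller pod is
	// scheduled on, so no file needs to be copied to every node.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatalf("building in-cluster config: %v", err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatalf("building clientset: %v", err)
	}

	// "gce-config" and the "gce.conf" key are hypothetical names for illustration.
	cm, err := client.CoreV1().ConfigMaps("kube-system").Get(context.TODO(), "gce-config", metav1.GetOptions{})
	if err != nil {
		log.Fatalf("reading ConfigMap: %v", err)
	}
	fmt.Println(cm.Data["gce.conf"])
}
```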

Do you have any other idea on this? Thank you! :)

@rramkumar1
Contributor

@toyota790 I'm not sure what to say then. That's just the way it was done.

I think it may impact the performance of scheduling or other things if it is deployed on the master.

Why do you say so? Pods running on the master should not affect the scheduling of pods on worker nodes.

If you are on GCE, you can always run the controller off-master by deleting the existing pod and running your own. You will have to handle authz/authn on your own though. On GKE, you can follow the instructions in https://github.com/kubernetes/ingress-gce/tree/master/deploy/glbc.
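To sketch the authz/authn part: if you run the controller off-master yourself, the pod needs its own GCP credentials, typically Application Default Credentials (a service-account key mounted into the pod and pointed to by GOOGLE_APPLICATION_CREDENTIALS, or the node's instance scopes). A minimal, hedged example using the Compute API client; "my-project" is a placeholder project ID, and this is not the actual controller code path:

```go
package main

import (
	"context"
	"fmt"
	"log"

	compute "google.golang.org/api/compute/v1"
)

func main() {
	ctx := context.Background()

	// NewService picks up Application Default Credentials: off-master this is
	// typically a service-account key mounted into the pod and pointed to by
	// GOOGLE_APPLICATION_CREDENTIALS, or the GCE instance's OAuth scopes.
	svc, err := compute.NewService(ctx)
	if err != nil {
		log.Fatalf("building compute client: %v", err)
	}

	// "my-project" is a placeholder project ID; listing forwarding rules is
	// just a sanity check that the credentials can reach networking resources.
	rules, err := svc.GlobalForwardingRules.List("my-project").Do()
	if err != nil {
		log.Fatalf("listing forwarding rules: %v", err)
	}
	for _, r := range rules.Items {
		fmt.Println(r.Name, r.IPAddress)
	}
}
```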

@toyota790
Author

@rramkumar1 Thank you for the information.

I'm not sure what to say then. That's just the way it was done.

Because in most of the use cases I have seen, whether in blog posts or in discussions on the Kubernetes Slack channel, people don't tend to deploy an ingress controller on the master node. However, GCP does, and that aroused my curiosity about the design principle or motivation behind this choice.

Why do you say so? Pods running on the master should not affect the scheduling of pods on worker nodes.

Since pods running on the master node share the node's resources (CPU and memory), I think it may affect the performance of kube-scheduler, kube-apiserver, or related components. Please correct me if I am wrong. Thank you so much!

@rramkumar1
Contributor

rramkumar1 commented Mar 14, 2019

@toyota790

Since pods running on the master node share the node's resources (CPU and memory), I think it may affect the performance of kube-scheduler, kube-apiserver, or related components. Please correct me if I am wrong.

On GCE and GKE we ensure that each component has sufficient resources, so that is not a problem.
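To illustrate, the standard mechanism is resource requests/limits on the container, which reserve CPU and memory for a component on its node. A sketch with made-up numbers, not the actual values configured for GCE/GKE master components:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// Requests reserve CPU/memory for the container on its node; limits cap
	// its usage. The numbers and image are placeholders for illustration.
	container := corev1.Container{
		Name:  "example-controller",
		Image: "registry.example.com/example-controller:latest",
		Resources: corev1.ResourceRequirements{
			Requests: corev1.ResourceList{
				corev1.ResourceCPU:    resource.MustParse("100m"),
				corev1.ResourceMemory: resource.MustParse("128Mi"),
			},
			Limits: corev1.ResourceList{
				corev1.ResourceCPU:    resource.MustParse("500m"),
				corev1.ResourceMemory: resource.MustParse("256Mi"),
			},
		},
	}

	cpu := container.Resources.Requests[corev1.ResourceCPU]
	fmt.Println("cpu request:", cpu.String())
}
```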

I'm going to go ahead and close this but feel free to respond if you have more questions.
