[JENKINS-67679] add maintenance mode to pause provisioning nodes #1134
base: master
Conversation
Some minor style items to fix, but the feature and logic look fine.
src/main/java/org/csanchez/jenkins/plugins/kubernetes/KubernetesCloud.java (review thread resolved)
src/test/java/org/csanchez/jenkins/plugins/kubernetes/KubernetesCloudTest.java (review threads resolved)
57b41c3 to 5543680
Is this really necessary? Can you not get a similar effect by putting some sort of taint on the cluster’s nodes, or by making an admission webhook reject new pods? Or would builds then fail with an error after timing out while trying to schedule a pod? If this really needs to be done on the Jenkins side, maybe it would be better for core to have the ability to take a …
This looks like a good idea actually.
The permissions that a Jenkins admin is given are not always the same as the set of permissions on the Kubernetes cluster (the admin may not even be permitted to access the cluster API). This PR is really legitimate in terms of usage.
That is really an excellent idea: this change would benefit far more users. Where should it be implemented (a particular plugin, or Jenkins core)?
Maintenance usually includes cordon and drain: cordon puts a taint on the node, and drain evicts its pods onto other nodes (drain is necessary for kernel upgrades, docker/containerd upgrades, etc.).
A Jenkins job pod will be deleted from the drained node and the job will fail. Even if we enable retry at the job level, the plugin may launch a pod on another node that will also be drained. You may ask why not cordon all nodes; that would be overkill, since Kubernetes services are not affected by cordon and drain, and Kubernetes honors the PodDisruptionBudget during a drain operation. Jenkins jobs are the only victim, and we cannot freeze the whole cluster for Jenkins if the cluster is shared. I tried to work around the issue by using a system Groovy job to put the Kubernetes plugin on hold.
But if Jenkins itself is running in the Kubernetes cluster and gets restarted, the containerCap setting will be lost due to a check (it is a valid check),
so I think it is better to add a self-explanatory flag.
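The pause flag described above can be sketched as a small standalone model. This is illustrative only: the class and method names (`MaintenanceAwareCloud`, `setProvisioningPaused`, `provision`) are hypothetical and do not reflect the plugin's actual API, which extends Jenkins' `Cloud` class. The point is simply that provisioning consults the flag before planning any new agent pods:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Hypothetical sketch of a cloud that refuses to plan new agents
// while maintenance mode is on. Names are illustrative, not the
// kubernetes-plugin's real API.
class MaintenanceAwareCloud {
    private boolean provisioningPaused;

    public void setProvisioningPaused(boolean paused) {
        this.provisioningPaused = paused;
    }

    public boolean isProvisioningPaused() {
        return provisioningPaused;
    }

    // Plan one agent per unit of excess workload, or nothing at all
    // while the maintenance flag is set.
    public List<String> provision(int excessWorkload) {
        if (provisioningPaused) {
            // Paused: plan no new pods, so a cluster drain cannot
            // keep killing freshly scheduled Jenkins agent pods.
            return Collections.emptyList();
        }
        List<String> planned = new ArrayList<>();
        for (int i = 0; i < excessWorkload; i++) {
            planned.add("agent-" + i);
        }
        return planned;
    }
}
```

Unlike the `containerCap`-zeroing workaround mentioned above, a persisted boolean like this would survive a Jenkins restart, which is the motivation for making it an explicit configuration flag rather than a scripted mutation.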
FYI: #1083
Hi, I'm looking for a similar feature to put a cloud in maintenance mode. During my search I found the … Regards,
Taking another look at this request: I think the …
fix JENKINS-67679 add maintenance mode to pause provisioning nodes