[JENKINS-60481] Add throttleJobProperty @Symbol to ThrottleJobProperty #68
Conversation
Thanks! It should definitely help users
Thanks for your contribution, @bawjensen! I just merged your PR branch with master and applied some trivial formatting fixes. This should be a part of the forthcoming Throttle Concurrent Builds 2.0.2 release.
🙇♂️ Thanks for the quick responses on this, @oleg-nenashev / @basil!
When will 2.0.2 be released? And @bawjensen, does this support …? If yes, I can't wait for this feature to be released.
Due to this issue, it still doesn't work with …
I have just released Throttle Concurrent Builds 2.0.2.
```
pipeline {
  parameters {
    …
  }
  stages {
    …
  }
}
```
I am using the above configuration and I have 2 slave nodes with the label `slave`.
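The configuration in the comment above is truncated in the thread. As a point of reference, a minimal sketch of the kind of declarative pipeline being described, assuming the plugin's `throttle` option with a hypothetical category `my-category` (defined in the global plugin configuration) and agents labelled `slave`, might look like this:

```groovy
// Sketch only: not the commenter's exact Jenkinsfile.
// 'my-category' is a hypothetical throttle category that would have to be
// defined under Manage Jenkins » Configure System » Throttle Concurrent Builds.
pipeline {
    agent { label 'slave' }
    parameters {
        // Hypothetical parameter, stands in for whatever the original job defined.
        string(name: 'ENVIRONMENT', defaultValue: 'dev', description: 'Target environment')
    }
    options {
        // Throttle this job via the shared category.
        throttle(['my-category'])
    }
    stages {
        stage('Test') {
            steps {
                echo "Running on ${env.NODE_NAME} for ${params.ENVIRONMENT}"
            }
        }
    }
}
```

As the follow-up comments and JENKINS-49173 indicate, this form did not reliably limit concurrent executions per node at the time.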
Same as the above unanswered question. I have 2 nodes with the label 'release-smoke-tests', but the builds are only running on 1; it says in the queue: …
Hello,
Hello, as explained by @pandorasbox963 above, I'm also unable to limit the number of concurrent executions per node using the Throttle Concurrent Builds plugin (ver. 2.2). The pipeline behaves as if the throttling were not applied at all. Is this plugin still maintained?

Update: I just came across this defect... it looks like it describes the troublesome behaviour we're highlighting here.
Hi @mandrije, we had the same issue with the above plugin; since no resolution was found, we switched to https://plugins.jenkins.io/build-blocker-plugin/ instead and were able to apply the same restrictions.
Thank you very much @edgarkz! In the meantime, I realized that the remaining, working options (same param values and max total restrictions) will meet my needs, but hopefully other people reading your suggestion above will benefit from it.
Otherwise they appear to not have any effect:

https://issues.jenkins.io/browse/JENKINS-49173
jenkinsci/throttle-concurrent-builds-plugin#68

Signed-off-by: Jakub Sokołowski <jakub@status.im>
A fix for a bug triggered by recent `Jenkinsfile` refactoring done in #3827, which, due to a bug in the Jenkins Throttling plugin, caused jobs to start running in parallel on the same host despite global configuration that is supposed to block this:

https://issues.jenkins.io/browse/JENKINS-49173
jenkinsci/throttle-concurrent-builds-plugin#68

An attempt to fix this was made in PR #3913, but it was ineffective due to bugs in the Throttle plugin. As a result, semi-random testnet launches would fail with errors like this:

```
./scripts/launch_local_testnet.sh: line 1026: 58977 Killed: 9 ${BEACON_NODE_COMMAND} ...
```

The culprit was the old process cleanup in `scripts/launch_local_testnet.sh`:

```
+ make local-testnet-mainnet
Found old process listening on port 7001, with PID 58977. Killing it.
Found old process listening on port 7002, with PID 59024. Killing it.
Found old process listening on port 7003, with PID 59027. Killing it.
Found old process listening on port 7004, with PID 59030. Killing it.
```

This was triggered by the use of immediate assignment for `EXECUTOR_NUMBER`:

```
EXECUTOR_NUMBER := 0
```

which caused the `EXECUTOR_NUMBER` value set by Jenkins to be ignored. For more details see:

https://www.gnu.org/software/make/manual/html_node/Flavors.html#Flavors

Signed-off-by: Jakub Sokołowski <jakub@status.im>
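The commit above pins the root cause to GNU Make's immediate assignment. A minimal sketch of the presumed fix, assuming it amounts to switching to conditional assignment (`?=`), which only assigns when the variable is not already set, e.g. by the environment Jenkins exports:

```make
# Sketch of the presumed fix (an assumption, not quoted from the actual commit).
# ':=' assigns unconditionally, clobbering the EXECUTOR_NUMBER that Jenkins
# exports to the build environment; '?=' assigns only when the variable is
# not already defined, so the Jenkins-provided value is preserved.
EXECUTOR_NUMBER ?= 0
```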
I pulled inspiration for this change from jenkinsci/branch-api-plugin#127, so I'm hoping it's a fairly straightforward change. In my local testing (using `mvn hpi:run` and then manually installing Pipeline: Declarative) I can confirm that before this change, the dropdown suggestions in the Declarative Directive Generator for `options` included only `throttle`, and after this change it included both `throttle` and `throttleJobProperty`. When `throttleJobProperty` was selected, the UI that came up was as expected for the job property exposed by this plugin.

Also tested by creating a pipeline job and setting this as its pipeline script:
Running 5 builds within a minute displayed the expected waiting message, and inspecting the job's configuration showed that it had been persisted as desired there.