Concurrency Cancel Pending #5435
-
Any chance we can make this configurable? The expectation for concurrency is that jobs run in order, not concurrently. Cancelling jobs seems to defeat the purpose of ensuring order. I would envision a https://docs.github.com/en/actions/reference/workflow-syntax-for-github-actions#concurrency …
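For reference, the syntax documented at that link exposes only two keys; a minimal sketch (the group name here is just illustrative):

```yaml
# The documented `concurrency` block. With cancel-in-progress left false (the
# default), a new run does not kill the one already executing, but GitHub still
# keeps at most one *pending* run per group and cancels older pending runs,
# which is the behaviour this thread is asking to make configurable.
concurrency:
  group: deploy-${{ github.ref }}   # any expression; runs in the same group never execute in parallel
  cancel-in-progress: false         # true would also cancel the currently executing run
```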
-
We have concurrency set for a label/unlabel event. If 4 labels are added at once, two out of the four runs are cancelled.
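Presumably the workflow uses a group keyed on the pull request, roughly like the sketch below (a guess at the setup, not the poster's actual file); since only one run per group may wait in the queue, four near-simultaneous label events leave one running, one pending, and the other two cancelled:

```yaml
# Hypothetical reconstruction of a labelling workflow's trigger and concurrency group.
on:
  pull_request:
    types: [labeled, unlabeled]

concurrency:
  group: labels-${{ github.event.pull_request.number }}
  cancel-in-progress: false
```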
-
It seems that updating labels on an action that uses …
-
Please add …
-
I'm running into similar issues at the job concurrency level when trying to control how many runners handle some platform-specific integration tests, so as not to rate-limit the API they're talking to. A way to avoid cancelling other …
-
Yeah, just cancelling concurrent jobs screws up the idea of a PR check. Pending and waiting should be the default, not to mention an option to configure it. EDIT: never mind, I see the intended purpose. I hope I can achieve what I want with an `if: cancelled()` check.
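For anyone attempting the same workaround: `cancelled()` is one of the documented status-check functions, so a step can be gated to run only when the run is being cancelled; a small sketch (the step commands are placeholders):

```yaml
steps:
  - name: Run PR checks
    run: ./run-checks.sh              # placeholder for the real check command
  - name: React to cancellation
    if: ${{ cancelled() }}            # documented status-check function
    run: echo "This run was cancelled, likely by a newer run in the same concurrency group"
```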
-
Thank you for all of the feedback. The current feature was geared for two specific scenarios. The first is the default behavior, where you have a deployment to, say, production, you only want one deployment happening at any given time, and you always want the latest code deployed. The second scenario, where you set `cancel-in-progress: true`, is for making sure that PR checks always run for the latest commit in the PR; as PRs are updated, you want in-progress jobs killed so you don't waste resources. We have had various discussions on the team around the other scenarios mentioned in the thread and we do hope to address them at some point in the future. However, at this time I do not have any concrete dates.
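The second scenario described in the answer corresponds to the widely used PR-check pattern, roughly as below (the workflow/ref based group name is the conventional choice, not a requirement):

```yaml
# "Only the latest commit's checks matter" setup for pull requests.
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true   # a newer push cancels the in-progress run for the same branch/PR
```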
-
It would make sense to include some sort of concurrency option to queue a workflow/job/step (without cancelling it). We used something similar in Jenkins: we could lock a resource, like a function in our scripted pipeline, so that only one job could make use of that bit of code at a time. It worked great for potential race conditions.
-
I needed this functionality, so I made a quick-and-dirty spin-lock mutex. It's working pretty well for me so far: https://github.com/ben-z/gh-action-mutex. A more involved explanation is here: https://github.community/t/avoiding-conflicts-when-two-workflows-push-to-the-same-branch/233774/9
-
Yeah, this would be a great addition; we could use this to coordinate the deployments of our services based on merges to the repo. But as the behavior now is to cancel pending jobs, we can no longer rely on the files modified to know which services need to be triggered, forcing us to deploy all the services every time. While a … Come on GitHub, it has been 2 years since this was reported!
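One partial mitigation, sketched below under the assumption of a service matrix (the service names and deploy script are invented), is to scope the concurrency group per service so one service's runs cannot evict another's from the queue; each group still keeps only a single pending run, though:

```yaml
jobs:
  deploy:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        service: [api, web, worker]          # invented service names
    concurrency:
      group: deploy-${{ matrix.service }}    # one queue per service
      cancel-in-progress: false
    steps:
      - run: ./deploy.sh "${{ matrix.service }}"   # placeholder deploy command
```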
-
Any updates on this? We also need this for an auto-merge job that has concurrency issues when several jobs try to merge at the same time. Having a lock that allows only one job at a time without cancelling pending jobs would solve this.
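The closest thing available today is a job-level concurrency block with a fixed group name, sketched below; it serialises the merge job, but it still queues at most one pending run, so a burst of merges can still see runs cancelled, which is exactly the gap being requested:

```yaml
jobs:
  auto-merge:
    runs-on: ubuntu-latest
    concurrency:
      group: auto-merge-lock     # fixed key acts as a repo-wide mutex for this job
      cancel-in-progress: false  # never kill the merge that is already running
    steps:
      - run: echo "merge logic goes here"   # placeholder
```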
-
Lack of …
-
Another vote for …
-
Almost the 2-year anniversary. We need this feature too. Can someone please prioritize this? Something tells me it wouldn't take that much time to complete. Either …
-
Perhaps we need to open a new ticket for this, since this one seems to be marked as answered - maybe no one at GitHub is paying attention to answered tickets?
-
Would be great if someone could prioritize this.
-
Hi guys, our use case is that we need a passive queue mechanism: two levels of concurrency instead of just one. Our workflow for status checks should cancel pending tasks from the same group 1 (pull request), but keep pending tasks from a different group 2 (repository). Essentially, the workflow should never run in parallel, but should always be queued if it comes from a different pull request. Not being able to specify more groups would be fine as long as we can at least disable cancelling pending tasks altogether. This is what we had in Jenkins CI. I find it hard to believe that such a use case is so rare that it has gone unsolved for years.
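What is being asked for might look something like the sketch below. To be clear, this multi-group form is purely hypothetical and not supported by GitHub Actions; today there is only a single group plus cancel-in-progress:

```yaml
# Hypothetical syntax (NOT valid today) for the two-level behaviour described above:
# cancel pending runs from the same pull request, but queue behind runs from other PRs.
concurrency:
  - group: pr-${{ github.event.pull_request.number }}
    cancel-pending: true       # hypothetical key
  - group: status-checks-repo-wide
    cancel-pending: false      # hypothetical key: queue, never cancel
```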
-
Hi, …
-
Yet another 👍 from me, very similar problem: we need proper mutex behaviour in order to ensure sequential runs. It's important so that we ensure consistency of concurrent merges over time, but also so that we don't lose traceability of which merge caused which action to change status.
-
Unfortunately I stumbled across this issue as well, so I created a simple GitHub Action to rerun cancelled jobs: https://github.com/marketplace/actions/rerun-workflows. Obviously there are plenty of enhancements that could be made to it 😄 but it was simple enough to fix our problem. Feel free to open up a PR if there are enhancements you'd like to make.
-
Hey GitHub team, please, could you help us? We really need this feature.
-
GitHub is nothing but pain, but nothing can be done to stop execs from buying it. Someday I will work at a place that uses GitLab and I will be at peace.
-
I also recently got surprised by the limit of pending jobs being set to 1. I tested my solution with only 1 extra job running in parallel and did not expect that with 3+ I would get into trouble. There is no easy workaround for this limitation. An additional setting would really help: …
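One way such a setting could look, purely as a hypothetical illustration (no such key exists today):

```yaml
concurrency:
  group: integration-tests
  cancel-in-progress: false
  max-pending: 10   # hypothetical key: keep up to 10 queued runs instead of the current single pending run
```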