fix: Add retry on locks #4997
base: main
Conversation
I don't see that this change adds much value or helps with any of the issues you have linked. The only scenario where it will help is the example you have given, i.e. running two …

There are a lot of people using Terragrunt, Atmos and other tools that can run many projects/plans at once, so I can see how this helps those users.

I use Terragrunt extensively with Atlantis running parallel plans/applies in a PR, and I don't have any locking issues that this change would affect.
I truly believe we still have much to do to make sure the locking issue is dealt with. I'm not so sure this code is applicable only to simultaneous commands; I've only used that scenario to showcase the issue since it exercises the same mechanism. But I may argue that the UX will be even better for most cases. The current behavior is:

This PR will change this to:
As I stated here, most users are suffering from seeing a message that just says "try again later". I'm automating this step so users only see that message if we are more or less sure there is a real issue, e.g. the plan taking too long (which can be configured via the timeout setting).
I used to work at a company that had a pretty big Atlantis install; unfortunately I don't anymore, so I can't really test this at scale. I invite anyone who can test this PR to give it a try. Two areas of improvement I can already see:
what
I'm opening this as a draft to receive feedback early. I don't expect this to break anything, but I believe it could be hidden behind a flag and shipped with better default values for the timeout and retry count (maybe exponential backoff?).

This adds retry logic to the lock mechanism to mitigate the issue described here and in this ADR.

The locking issue itself is more complex and requires much more work; this is just a small step so users don't have to see the error anymore, effectively making the code wait instead of asking the user to retry.
why
Currently the user has to rerun any operation that fails because a certain workspace path is locked; this change simply automates that retry.
tests
Will add if this approach receives support.
references
Did my best to try to understand which issues this would affect.
Relates to #3345
Relates to #2921
Relates to #2882
Relates to #4489
Relates to #305
Relates to #4829
Relates to #1847
Relates to #4566
Closes #1618
Closes #2200
Closes #3785
Closes #4489
Closes #4368