draft: add delete batches #11
base: develop
Conversation
Compare: f24d19d to c83ceb1
Kudos, SonarCloud Quality Gate passed! 0 Bugs. No coverage information.
Not really clear what this PR is trying to solve/improve; adding a description would be nice!
Also, a few ' were replaced by ". If we want to change the style, we should commit a mass change for that separately to avoid noise.
Indeed. I believe the purpose of this PR was to optimize entry deletions by adding batches, reducing the number of locks and (potentially) reducing the load of removing thousands of entries in big models. I don't know if this fixes it, but I believe that's the context 😅 As for the changes of
Ok, long locks make sense as a problem statement, thank you! Yet, this does not look like a good solution. For one, it is introducing aggregates on the query. It would be nice to see timing tests showing that it actually improves something: something simple that creates many records and times their cleanup, with and without this change. Second, that aggregate condition is on id, which has no enforced relationship with any given customizable date-time field, so it doesn't feel like a good condition for splitting batches. But I might be seeing it wrong…
But I think that if this is not a sure fix, it shouldn't be merged. We shouldn't be merging stuff "to see if it works" (like done in some scanners). At least releasing an RC/dev version from the PR and using it for some time would provide some degree of confidence, if proper testing is not done…
IMO, limiting the queryset to BATCH records before calling .delete() on it should be enough (looping until no more records are deleted). It will still lock the entire table, but for a shorter period. The code should be cleaner (add a limit and loop; no aggregates or filters on IDs) and more likely to perform as expected.
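The limit-and-loop pattern described above can be sketched in plain SQL; this is a minimal illustration, not the project's actual code, and the `entries` table, batch size, and helper name are all hypothetical (sqlite3 stands in for whatever ORM/backend the project uses):

```python
import sqlite3

BATCH = 1000  # hypothetical batch size

def delete_in_batches(conn, table, batch=BATCH):
    """Delete all rows from `table` in batches of at most `batch`,
    committing between batches so each lock is held only briefly."""
    total = 0
    while True:
        # Limit the candidate set to `batch` rows, then delete exactly those.
        cur = conn.execute(
            f"DELETE FROM {table} WHERE rowid IN "
            f"(SELECT rowid FROM {table} LIMIT ?)",
            (batch,),
        )
        conn.commit()  # release the lock between batches
        if cur.rowcount == 0:
            break
        total += cur.rowcount
    return total

# Usage: populate a throwaway table and clear it in batches.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE entries (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO entries (payload) VALUES (?)", [("x",)] * 2500)
deleted = delete_in_batches(conn, "entries", batch=1000)
```

Note that no aggregate or id-range condition is needed: each pass just takes whatever rows are left, up to the batch size, until a pass deletes nothing.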
No description provided.