uniquejobs:digests sorted set seems to grow forever #821
Comments
Looking at the details in #637, as it seems very similar.
@JeremiahChurch I believe this has improved with cddcc08, and those changes should be on the main branch. I have also tweaked the reaper a bit.
Hey everybody, we currently observe the same behavior of a growing digests set in one of our environments. All environments use
The interesting thing is that we run the same application in three environments, but we can only observe the behavior in one of them; it's working fine in the other two.
@JeremiahChurch have you been able to identify/fix the issue? I am wondering whether the reaper is actually working but just can't keep up with the number of locks to be removed. Does anybody have a hint on debugging this?
Describe the bug
Our prod uniquejobs:digests sorted set in Redis grew to 3 GB in about 3 weeks (~5 million jobs/day; fewer than 1,000 total jobs in the queues and dead job queue at screenshot time).
Our lock TTLs are at most 6 hours; the vast majority are 5 minutes.
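For reference, a lock TTL like this is typically declared per worker via `sidekiq_options`. A minimal sketch, assuming sidekiq-unique-jobs is loaded; the class name, queue, and argument are hypothetical:

```ruby
# Hypothetical worker illustrating how a lock type and a 5-minute
# lock TTL are declared; the class name and queue are invented.
class SyncAccountJob
  include Sidekiq::Worker

  sidekiq_options queue: :default,
                  lock: :until_and_while_executing,
                  lock_ttl: 5 * 60 # digest expires after 5 minutes

  def perform(account_id)
    # ... work ...
  end
end
```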
Expected behavior
My understanding is that digests should be cleaned up as conditions occur (mostly when our jobs exit successfully) or, at worst, when the reaper runs.
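When digests are not removed on normal job exit, the orphan reaper is the fallback. A sketch of the relevant configuration block, with illustrative values rather than recommendations:

```ruby
# Illustrative reaper settings for sidekiq-unique-jobs;
# the numbers are examples, not recommendations.
SidekiqUniqueJobs.configure do |config|
  config.reaper          = :ruby # or :lua; nil disables reaping
  config.reaper_count    = 1_000 # max digests deleted per pass
  config.reaper_interval = 600   # seconds between passes
  config.reaper_timeout  = 150   # seconds before a pass is aborted
end
```

If the reaper is enabled but `reaper_count` times the pass frequency is below the rate at which orphans are created, the set can still grow, which matches the "can't keep up" hypothesis above.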
Current behavior
The uniquejobs:digests sorted set grows until we run out of Redis RAM.
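To make the failure mode concrete, here is a minimal plain-Ruby sketch (no Redis; all names and numbers are invented) of a score-keyed set that grows while nothing reaps it and shrinks once entries older than a TTL window are dropped, mimicking sorted-set scores used as timestamps:

```ruby
# Stand-in for the ZSET: digest => score (epoch seconds).
digests = {}

now = 1_700_000_000
ttl = 6 * 60 * 60 # 6 hours, the max lock TTL mentioned above

# Enqueue 10 jobs a second apart; a crash before unlock
# would leave each digest behind as an orphan.
10.times { |i| digests["uniquejobs:job#{i}"] = now + i }

# Without reaping, every orphan stays: the set only ever grows.
puts digests.size # => 10

# A reaper-style pass drops entries whose score falls outside
# the TTL window, measured from the current time.
later = now + ttl + 5 # pretend 6 hours and 5 seconds have passed
digests.reject! { |_digest, score| score < later - ttl }

puts digests.size # => 5 (scores now..now+4 fell outside the window)
```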
Worker class
47 different jobs have a lock on them. The only locks we use are until_and_while_executing, until_executing, and until_executed; 95% of them are until_and_while_executing.
Additional context
We're generally running the top of main from a version perspective: currently sidekiq-unique-jobs 8.0.6, Sidekiq 7.1.6, Rails 7.0.8.
This is the second or third time we've seen this issue crop up; I'm not sure whether it was introduced recently or has always been there and we just hadn't noticed until recently.
Failures, or jobs exiting because of an exception or other "non-normal" exit, account for less than 0.1% of all jobs run.
I've been through the reaper-related issues and found some similar reports, but seemingly nothing exact.
As always, huge love for the gem <3