sidekiq_unique record in Redis is not cleaned when foreman process is killed #112
The trick is that since the job is gone, due to crashing/stopping/killing Sidekiq in the middle of processing, there isn't any good way for me to remove the lock. Possibly a page with locks and arguments could be added as a tab in the Sidekiq web UI. If the job is still scheduled, it should be possible to remove the lock as well when deleting the job in the web client. Basically …
Hi @marclennox, thanks for answering, and I get your point about locks. Actually, I tried … Moreover … One more point: I wonder what happens in production mode when the application is deployed with Docker and the container is stopped/removed while some worker is in progress. I suppose that the unique record won't be deleted in this case and we will have the same issue. And of course we cannot flush Redis in production.
I think you might have meant to mention @mhenrixon and not @marclennox :) Depending on the queue emptying, it might actually be …
Yeah, sorry guys, I haven't used unique-jobs in a while, so I probably can't be of much help here.
@mhenrixon, sure, I wanted to mention you :) In my opinion, a background job that checks for missing … Meanwhile, I thought about a different solution. What if we added a cleanup callback on Sidekiq startup? Such a callback could look the following way:

```ruby
Sidekiq.configure_server do |config|
  config.on(:startup) { clear_old_unique_jobs }
end
```

Then this code would be executed only once on each startup. What do you think?
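The `clear_old_unique_jobs` helper above is left undefined in the comment; here is a minimal sketch of what it might look like. Assumptions: lock keys match a `uniquejobs:*` pattern (the exact prefix varies by gem version; the `redis-cli` output later in this thread suggests keys starting with `unique`), and no other Sidekiq process is mid-job when this server boots, otherwise live locks would be deleted too.

```ruby
# Hypothetical helper, not part of the gem: on boot, delete any leftover
# unique-lock keys. SCAN iterates incrementally instead of blocking Redis
# the way KEYS would.
def clear_old_unique_jobs
  Sidekiq.redis do |conn|
    conn.scan_each(match: "uniquejobs:*") do |key|
      conn.del(key)
    end
  end
end

Sidekiq.configure_server do |config|
  config.on(:startup) { clear_old_unique_jobs }
end
```

Note this is only safe when exactly one Sidekiq process exists; with multiple processes, a booting one would wipe locks held by its siblings.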
We need a workaround. This bug in sidekiq-unique-jobs occasionally prevents jobs from being added to Sidekiq in our production, which is very serious.
Check version 4.0.0 and report back if it is still an issue.
Hi all, I've tried to verify this issue on version 4.0.0 and I have noticed a critical bug. Here is a snippet of my Gemfile:

```ruby
gem 'rails', '4.2.4'
gem 'sidekiq'
gem 'sidekiq-unique-jobs', '4.0.0'
```

Here is my HardWorker:

```ruby
class HardWorker
  include Sidekiq::Worker
  sidekiq_options :backtrace => 5, :unique => true

  def perform(name, count, salt)
    raise name if name == 'crash'
    logger.info Time.now
    sleep count
  end
end
```

I call the following in the rails console (it returns `"e673a0b0b3be7bb61b147f11"`):

```ruby
HardWorker.perform_async("asdf", 15)
```

I see that my worker fails. Please let me know what is wrong. The log is given below. Thanks,

```
TID-ovkrdiju8 WARN: {"class"=>"HardWorker", "args"=>["asdf", 15], "retry"=>true, "queue"=>"default", "backtrace"=>5, "unique"=>true, "jid"=>"e673a0b0b3be7bb61b147f11", "created_at"=>1444154512.150679, "enqueued_at"=>1444154512.151179, "error_message"=>"uninitialized constant SidekiqUniqueJobs::RunLockFailed", "error_class"=>"NameError", "failed_at"=>1444154512.161897, "retry_count"=>0, "error_backtrace"=>["/Users/lena/.rvm/gems/ruby-2.2.1@sidekiq2/gems/sidekiq-unique-jobs-4.0.0/lib/sidekiq_unique_jobs/server/middleware.rb:49:in
```
Sorry about that, it should be fixed in 4.0.2.
4.0.{0,1,2} is no longer working for me. Jobs are being duplicated all over the place, repeatedly.
Already wrote you about that in the other issue. Check the readme as suggested :)
Hi all, I have tried to verify the issue one more time on 4.0.2. I ran my HardWorker with sleep = 15 seconds and pressed Ctrl+C while the worker was executing. I did this twice: one time it worked fine and the other time it did not. Here is the log of the successful shutdown (the unique record was not present in Redis afterwards):

```
07:22:40 sidekiq.1 | 2015-10-08T04:22:40.809Z 48011 TID-ouufr2xjk HardWorker JID-3dcbd45eaba841cd6f9c9bf4 INFO: start
```

And here is the log of the unsuccessful shutdown (the unique record was still present in Redis afterwards):

```
07:23:39 sidekiq.1 | 2015-10-08T04:23:39.279Z 48014 TID-ouvuwhjes HardWorker JID-07c1f5c362e070cf634fec2c INFO: start
```

Here is also my Redis output for the second run:

```
127.0.0.1:6379> keys unique*
```

Based on this, I suppose the fix for unique jobs shipped in 4.0.2 is not very stable. Could you please take a look? Thanks,
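For anyone who wants to clean up the leaked keys by hand, a sketch using `redis-cli`; the `unique*` pattern is taken from the output above, so double-check it matches only sidekiq-unique-jobs keys in your database before deleting anything.

```shell
# List leftover unique-lock keys; --scan iterates incrementally instead
# of blocking the server the way KEYS can on a large dataset.
redis-cli --scan --pattern 'unique*'

# Once you've confirmed the keys are stale, delete them one by one.
# (With GNU xargs, add -r so nothing runs when no keys match.)
redis-cli --scan --pattern 'unique*' | xargs -n 1 redis-cli del
```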
Not sure there is a whole lot I can do about the kill signal. What happens …
If Foreman only gives 5 seconds to exit, you can't use a shutdown timeout of 8 seconds. Use …
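The gist of the (truncated) advice above is to make Sidekiq's shutdown timeout fit inside the supervisor's kill window, so Sidekiq's orderly shutdown (which is what cleans up locks) finishes before SIGKILL arrives. A sketch of both knobs; the specific seconds values are placeholders:

```shell
# In the Procfile, give Sidekiq a shutdown timeout (-t / --timeout)
# shorter than the grace period Foreman allows before sending SIGKILL:
#
#   worker: bundle exec sidekiq -t 4

# Or widen Foreman's grace period instead; foreman start accepts a
# timeout option (check `foreman help start` for the exact flag):
foreman start -t 15
```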
Won't the lock expire after a while? At least it's auto-healing?
@phuongnd08, yes, the lock is auto-healing by default (at least it was in version 3). @mhenrixon, could you please remind us what the default expiration time of the lock is?
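To make the auto-healing idea concrete, here is a self-contained toy model, not the gem's actual implementation, of a SETNX-style lock with a TTL. It shows that a lock leaked by a crashed worker blocks re-enqueueing only until the TTL elapses. (If memory serves, the v3 series exposed this TTL per worker via a `unique_job_expiration` sidekiq option, but check the README for your version.)

```ruby
# Toy in-memory lock store with SETNX-like acquire semantics plus an
# expiration time, illustrating why a TTL makes leaked locks self-heal.
class ToyLockStore
  Entry = Struct.new(:value, :expires_at)

  def initialize
    @store = {}
  end

  # Returns true if the lock was acquired, false if it is still held.
  def acquire(key, ttl:, now: Time.now)
    entry = @store[key]
    return false if entry && entry.expires_at > now # live lock blocks us
    @store[key] = Entry.new("locked", now + ttl)
    true
  end

  def release(key)
    @store.delete(key)
  end
end

store = ToyLockStore.new
t0 = Time.now

# Worker A takes the lock, then "crashes" without ever calling release.
raise "unexpected" unless store.acquire("uniquejobs:hardworker", ttl: 30, now: t0)

# Re-enqueueing right away is rejected: the stale lock still blocks it.
blocked = store.acquire("uniquejobs:hardworker", ttl: 30, now: t0 + 5)

# Once the TTL elapses, the lock expires and the job can run again.
healed = store.acquire("uniquejobs:hardworker", ttl: 30, now: t0 + 31)

puts blocked # false
puts healed  # true
```

The trade-off the thread is circling around: a short TTL heals leaks quickly but risks expiring the lock while a slow job is still legitimately running.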
Hi all,
I have noticed the following issue in development mode. When I kill foreman with Ctrl-C while some worker is being processed, the "unique" record related to that worker is not deleted from Redis.
So that worker is considered not unique the next time I start foreman. I could not find anything provided in your gem to solve this problem, so I added appropriate cleaning to my Sidekiq initializer.
Did I miss anything? Does the gem provide a way to clean the sidekiq_unique record in Redis?
Thanks,
Elena.