
perform_at and perform_async do not dedupe if perform_at is in the future #91

Closed
HParker opened this issue Jul 2, 2015 · 2 comments

HParker commented Jul 2, 2015

When you queue the same job twice with the same arguments, once using perform_async and once using perform_at, the two are not counted as duplicates if the time provided to perform_at is in the future.

Example:

pry(main)> MyUniqueWorker.perform_at(Time.now + 1.minute, 1)
=> "81f090cda6659b529ff91e5a"
pry(main)> MyUniqueWorker.perform_async(1)
=> "d1469ddf0c6f9fada9875db2"

There are now two jobs that have the same arguments and should have been de-duped.

My unique worker is equivalent to:

class MyUniqueWorker
  include Sidekiq::Worker
  sidekiq_options queue: :default,
                  retry: true,
                  unique: true,
                  unique_job_expiration: 2.hours,
                  retry_count: 10

  def perform(id)
    # do some work...
  end
end
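
My guess at why this happens (an illustrative sketch only, not the gem's actual code): perform_at adds an 'at' key to the job payload, so if the uniqueness digest were computed over the whole item hash, the two payloads would never compare equal:

require 'digest'
require 'json'

# Hypothetical digest over the full item hash -- NOT the gem's real implementation.
digest = ->(item) { Digest::MD5.hexdigest(JSON.dump(item)) }

async_item = { 'class' => 'MyUniqueWorker', 'args' => [1] }
at_item    = { 'class' => 'MyUniqueWorker', 'args' => [1], 'at' => (Time.now + 60).to_f }

digest.call(async_item) == digest.call(at_item) # => false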

Workaround

The workaround I am using now is to override perform_in and perform_async so that the 'at' key is always present in the job payload.

  # Override perform_async to just schedule for "now",
  # due to the unique jobs bug described above.
  def self.perform_async(*args)
    perform_in(Time.now, *args)
  end

  # Override perform_in so the 'at' key is never removed from the payload.
  def self.perform_in(interval, *args)
    int = interval.to_f
    now = Time.now
    ts = (int < 1_000_000_000 ? (now + interval).to_f : int)

    # The optimization from lib/sidekiq/worker.rb is intentionally left out:
    # item.delete('at'.freeze) if ts <= now.to_f
    item = { 'class' => self, 'args' => args, 'at' => ts }
    client_push(item)
  end
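
With these overrides defined inside the worker class (where client_push is available), both call paths build a payload that contains an 'at' key, so the uniqueness check compares them the same way:

# Both of these now push an item that includes 'at'.
MyUniqueWorker.perform_at(Time.now + 1.minute, 1)
MyUniqueWorker.perform_async(1)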

Long Term Solution

Any ideas for a long-term solution to this?
Looking through the code, it seems like something around *_unique_for? might be causing this issue.
Any suggestions for a fix would be appreciated.
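
One direction that might work (just a sketch of the idea, not the gem's current code) would be to build the uniqueness digest only from the fields that identify the work, ignoring scheduling metadata such as 'at':

require 'digest'
require 'json'

# Hypothetical helper: digest from identifying fields only, so payloads
# with and without an 'at' key produce the same digest.
def unique_payload_digest(item)
  Digest::MD5.hexdigest(JSON.dump(item.values_at('class', 'queue', 'args')))
end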


HParker commented Jul 2, 2015

Found a PR that seems related: #85

@mhenrixon
Owner

Since the tests below pass fine using real Redis, I am closing this issue. The code you provided "works on my machine"™.

class MyUniqueWorker
  include Sidekiq::Worker
  sidekiq_options queue: :customqueue, retry: true, unique: true,
                  unique_job_expiration: 7200, retry_count: 10
  def perform(_)
  end
end

describe 'when a job is already scheduled' do
  before { MyUniqueWorker.perform_in(3600, 1) }
  it 'rejects new jobs with the same argument' do
    expect(MyUniqueWorker.perform_async(1)).to eq(nil)
  end
end

If something is still not working for you, I suggest you provide a failing test.
