
Until and while executing reschedule #324

Closed

Conversation

@mfly commented Aug 15, 2018

Add a new lock type with the following behavior (a rough sketch of the intent follows below):

  • while a conflicting job is already scheduled, the user-provided strategy is used
  • while the job is executing, conflicting jobs are rescheduled
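
To make the intent concrete, here is a purely illustrative sketch; LongJob is a hypothetical worker, and the behavior described in the trailing comments is the proposal, not what the gem currently does:

require 'sidekiq'
require 'sidekiq-unique-jobs'

# Hypothetical worker; :replace stands in for the user-provided queue-time strategy.
class LongJob
  include Sidekiq::Worker

  sidekiq_options unique: :until_and_while_executing, on_conflict: :replace

  def perform(id)
    sleep 60 # long-running work
  end
end

LongJob.perform_async(1) # picked up by a worker, starts executing
LongJob.perform_async(1) # conflicts while job 1 is executing    => proposed: reschedule
LongJob.perform_async(1) # conflicts while job 2 sits in the queue => proposed: user strategy (:replace)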

@mhenrixon (Owner)

Hi @mfly, really appreciate the suggestion. Would the following work for you?

sidekiq_options unique: :until_and_while_executing, on_conflict: :reschedule

Unless I am completely missing something, the above should give you just what you are after.

I started out creating more specialized lock types but later decided it is better to handle this with conflict strategies.

@mfly (Author) commented Aug 16, 2018

Hi @mhenrixon. The idea behind this is to handle the case where jobs should only be rescheduled while the conflicting job is actually being processed by a worker; while the job is still in the queue, conflicting jobs should be ignored/replaced instead. This use case might be quite specific to our setup.

With your suggested setup, you end up with a stack overflow when trying to enqueue a job while a conflicting job is both running and already scheduled (i.e. enqueue three long-running jobs one after another). Try this snippet:

#!/usr/bin/env ruby

require 'sidekiq'
require 'sidekiq-unique-jobs'

class Worker
  include Sidekiq::Worker

  sidekiq_options unique: :until_and_while_executing, on_conflict: :reschedule

  def perform(args)
    sleep 5 # simulate a long-running job
  end
end

# Task 1: running on a worker
Worker.perform_async(1)
# Task 2: rescheduled
Worker.perform_async(1)
# Task 3: stack overflow
Worker.perform_async(1)

IMHO a better way of solving this problem, and of making this library more flexible, would be to allow a separate set of lock, strategy, and possibly unique_args options for the "while executing" case.
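
Something along these lines, purely as an illustration of the shape such options could take; the nested while_executing key and everything inside it are hypothetical and do not exist in the gem:

require 'sidekiq'
require 'sidekiq-unique-jobs'

class LongRunningWorker
  include Sidekiq::Worker

  # Hypothetical option layout: queue-time uniqueness keeps its own conflict
  # strategy, while the runtime ("while executing") lock gets separate settings.
  sidekiq_options unique: :until_and_while_executing,
                  on_conflict: :replace,                     # queue-time conflicts
                  while_executing: {                         # hypothetical nested options
                    on_conflict: :reschedule,                # runtime conflicts
                    unique_args: ->(args) { [args.first] }   # runtime-only uniqueness
                  }

  def perform(args)
    sleep 5 # long-running work
  end
end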

@JacobEvelyn commented on this code:

# Executes in the Sidekiq server process
# @yield to the worker class perform method
def execute
  return unless locked?

@mfly I assume this is here because it's in UntilAndWhileExecuting. @marclennox could you provide any additional context for why this line is needed? I looked at #277 which added it but don't really understand.

I'm asking because this line appears to break Sidekiq Pro's super_fetch reliability feature. When I start a job using this lock strategy and then kill the worker suddenly with kill -9, super_fetch will see that the job died and try to re-enqueue it but won't be able to because of this line. When I remove this return unless locked? line it works as I expect.
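
For context, super_fetch is Sidekiq Pro's reliable fetch feature, not something this gem provides; assuming Sidekiq Pro is installed and loaded, it is enabled in the server configuration roughly like this:

# Assumes Sidekiq Pro; super_fetch! is a Sidekiq Pro API, not part of sidekiq-unique-jobs.
Sidekiq.configure_server do |config|
  # With super_fetch enabled, jobs held by a process that dies (e.g. kill -9)
  # are detected and re-enqueued by a surviving process instead of being lost.
  config.super_fetch!
end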

@marclennox (Contributor)

To be honest I haven't worked with this library in many years and have lost any context in the deep recesses of my brain... apologies.

@JacobEvelyn commented May 15, 2019

No worries! I actually think I meant to tag @mhenrixon for that question. Will something break if I remove this line?

@JacobEvelyn

I'm not sure why this conversation seems to have died, but @mfly's code here works for me too (with one caveat; see my comment on the PR), and it seems to solve the issues mentioned in #381 and #328.

@mhenrixon (Owner)

The underlying problem will eventually be solved in #387. I'm still working out a few quirks, but when that PR gets merged a lot of the current problems should be fixed.

@JacobEvelyn

Thanks, @mhenrixon! Do you have a sense for when #387 will be done? I'd be happy to help test it when you're ready!

@mhenrixon (Owner)

It should hopefully be done towards the end of the week.

@mhenrixon (Owner)

@JacobEvelyn @mfly I fixed some things related to this in both the v6 and v7 branches. The fixes have been released, so I will close this and ask that new issues be raised if any problems still persist.

@mhenrixon closed this Nov 28, 2019