
Fixing Job processing order without munging the job IDs #708

Closed
wants to merge 7 commits into from

Conversation

hockeytim11
Contributor

This is my attempt at fixing the FIFO issue.

I created an internal zid value that is used for zsets. This string value is of the form: (id.length)id.

This allows for the ID to be parsed back out while maintaining the job order.

The length portion is fixed at 3 characters, which allows IDs to stay in order up to 999 digits long. That seemed sufficient to me, but it is an upper limit and could easily be increased if there is a need.
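As a rough sketch of that scheme (the function names match ones mentioned later in this thread, but the bodies here are illustrative, not the exact patch):

```javascript
// Build the zset member: a zero-padded 3-character digit count,
// then the id itself, e.g. 1 -> "0011", 10 -> "00210".
function createFIFO(id) {
  var idStr = String(id);
  var lenStr = String(idStr.length);
  while (lenStr.length < 3) lenStr = '0' + lenStr;
  return lenStr + idStr;
}

// Recover the original numeric id by dropping the 3-character length prefix.
function stripFIFO(zid) {
  return parseInt(String(zid).slice(3), 10);
}
```

Because a longer id always gets a larger length prefix, "0012" sorts before "00210", so FIFO order survives the lexicographic tie-break.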

Redis sorts zset members lexicographically when they share the same
priority score. This causes jobs to run out of order: if you submit 11
jobs at the same priority level, they will run as 1,10,11,2,3,4,5,6,7,8,9.
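That ordering can be reproduced with a plain lexicographic string sort (a minimal illustration, not part of the patch):

```javascript
// Eleven job ids as strings, sorted the way Redis orders
// equal-score zset members: byte-wise, not numerically.
var ids = ['1', '2', '3', '4', '5', '6', '7', '8', '9', '10', '11'];
ids.sort();
console.log(ids.join(','));
// 1,10,11,2,3,4,5,6,7,8,9
```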

This adds a zid to the job that prepends the number of digits in the job
id, e.g. (001)1. This new zset value is sorted correctly by redis. Such
a value is not desirable as a general-purpose id, so it is hidden and the
standard job id can be used for all external purposes.

The prepended value is stripped off when doing zpop.
Fixing range by and delay code so all tests pass again.
@hockeytim11
Contributor Author

I know there have been a couple of fixes for this submitted, but I wanted one that didn't permanently mess up the job id. This should preserve FIFO job order up to a 999-digit job id while presenting the normal expected job id to the user of the library.

I am happy to clean it up some if that would help get it merged.

related issues:
#693
#691

related pull requests:
#698
#705

@hockeytim11
Contributor Author

According to the Redis docs, INCR maxes out at a 64-bit signed integer: 9223372036854775807 (19 digits). So this method of job id sorting will outlast the Redis server. A 2-digit length value would actually be sufficient.

@kulbirsaini

Thanks for this. Looks good. I think it's a good bet to keep the id length field to two digits. A job id of more than 99 digits is a bit too far-fetched in my opinion :-)

My suggestion is to use a | or - to separate idLen and jobId. That way you do save one character. Plus if you go for 2 digit length, then you save another character there. In a system with a million jobs, that's 2M characters removed from memory.
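One way the suggestion could look (a sketch assuming a 2-digit zero-padded length and a | separator; the merged code may differ):

```javascript
// Compact zid form: zero-padded 2-digit length, separator, then the id,
// e.g. 1 -> "01|1", 10 -> "02|10". Supports ids up to 99 digits.
function createFIFO(id) {
  var idStr = String(id);
  var lenStr = String(idStr.length);
  if (lenStr.length < 2) lenStr = '0' + lenStr;
  return lenStr + '|' + idStr;
}

// Drop the fixed-width "NN|" prefix to recover the numeric id.
function stripFIFO(zid) {
  return parseInt(String(zid).slice(3), 10);
}
```

The zero-padded length still sorts first, so "01|9" orders before "02|10" and FIFO order is preserved, while the member is two characters shorter than the 3-digit, no-separator form.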

The suggestion to remove 2 characters from the zid seemed reasonable.
Also renaming parseID to stripFIFO to be consistent with the other file.
@hockeytim11
Contributor Author

Those seem like good suggestions. Changes have been pushed.

I didn't like defining stripFIFO and createFIFO in separate files.
This colocates them in one place next to the getKey call that is used
everywhere.
@hockeytim11
Contributor Author

Moved the stripFIFO and createFIFO calls to the client object so they can be updated in one place. Just like the getKey function.

I was using tab characters, switching to spaces.
@behrad
Collaborator

behrad commented Sep 14, 2015

will check this soon, thank you guys...

Expecting a string was causing a crash when undefined was passed in.
Undefined doesn't resolve to an id, so we return null.
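A minimal sketch of the guard that commit describes (illustrative; assumes the 2-digit-plus-separator zid form discussed earlier in the thread):

```javascript
// Return null for anything that isn't a zid string; otherwise
// drop the "NN|" prefix and parse out the numeric job id.
function stripFIFO(zid) {
  if (typeof zid !== 'string') return null;
  return parseInt(zid.slice(3), 10);
}
```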
@behrad behrad added the Bug label Nov 4, 2015
behrad added a commit that referenced this pull request Nov 20, 2015
@behrad
Collaborator

behrad commented Nov 20, 2015

merged in 0.10.3

@behrad behrad closed this Nov 20, 2015