A modern framework for all your scheduled tasks
- 📅 Human-friendly scheduling (unlike cron jobs)
- ⚛️ Lightweight accurate triggers
- 🔁 Repeatable tasks
- ❌ Error handling tools (logs, retry intervals & limits)
- ✅ Task dependency workflows
- 📈 Statistics about your tasks (repetition, retries, execution & duration)
With a CouchDB instance installed and running:
npm install --save dlay-core
Dlay Core officially supports only CouchDB as backend storage, but you can create your own custom adapter. For the next version we are discussing support for MongoDB, Redis and Amazon DynamoDB. Would you like to help?
// 1. Get a worker by giving it a name (e.g. manobi)
const fetch = require('node-fetch'),
{ worker, createTask } = require('dlay-core')(),
manobi = worker('manobi');
// 2. Register a job for the worker
manobi.addJob('dognizer', async (ctx, done) => {
// Async execution: fetch and return the parsed JSON
const res = await fetch('https://dog.ceo/api/breeds/image/random');
return res.json();
});
// 3. Assign tasks to the worker (createTask was already imported in step 1)
createTask({
"date": "2018-12-23T09:21:44.000Z",
"worker": "manobi",
"job": "dognizer",
"data": {
"url": "https://dog.ceo/api/breeds/image/random",
"user": "test"
}
});
- Date
- Status
- Data
- Job
- Worker
- Repeat
- Retry
- Dependencies
- Id (readonly)
- History (readonly)
- Repetitions (readonly)
- Retries (readonly)
- Duration (readonly)
- Executions (readonly)
- Result (readonly)
- Error (readonly)
ISO 8601 date and time in UTC. It is used to schedule the first time you want a task to run; later it serves as the base for repetitions and retries.
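For example, a minimal task scheduled to run once at a specific UTC instant:

```json
{
  "date": "2019-01-01T13:45:39.564Z"
}
```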
Task's current status. It starts as `waiting` but can change to `scheduled`, `running`, `cancel`, `retry`, `complete` or `done`.
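For example, a task that a worker has picked up but not yet finished (illustrative values):

```json
{
  "date": "2019-01-01T13:45:39.564Z",
  "status": "running"
}
```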
The payload you want to pass as an argument to the job. It can be an object, string, array or anything else you can put in a JSON file.
{
"date": "2019-01-01T13:45:39.564Z",
"data": {
"url": "https://google.com.br",
"position": 3
}
}
A string matching one of the jobs you have added to the worker. A single worker may process as many jobs as you want; however, we recommend running only one job per worker in production.
{
"date": "2019-01-01T13:45:39.564Z",
"job": "compress-video"
}
Since every worker is connected to the storage listening for changes, you have to specify which worker you want to perform the task. Always ensure that the worker you assign a task to has the task's job registered.
{
"date": "2019-01-01T13:45:39.564Z",
"worker": "east-video-compress"
}
Defines the frequency (`interval`) and `limit` of a task's repetitions.
Intervals can be represented in ISO 8601 duration notation or as an object (thanks to luxon.js).
{
"date": "",
"repeat": {
"limit": 4,
"interval": "P1M2DT1H10M5S"
}
}
Which is exactly the same as:
{
"date": "",
"retry": {
"limit": 4,
"interval": {
"month": 1,
"day": 2,
"hour": 1,
"minute": 10,
"seconds": 5
}
}
}
Just like `repeat`, the `retry` option accepts an object with `limit` and `interval`.
{
"date": "",
"retry": {
"limit": 4,
"interval": {
"month": 1,
"day": 2,
"hour": 1,
"minute": 10,
"seconds": 5
}
}
}
Specify an array of task ids which you can use at execution time to decide if and how the task should run, based on the status of the tasks it depends on.
{
"date": "2019-01-01T13:45:39.564Z",
"dependencies": [
"f1a718d1deaa20479577239a6b00a1ec", "bf9f490f1e0d29131a0da86b68c86d61"
]
}
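At execution time a job can then look its dependencies up and act on their statuses. A minimal sketch, assuming the task document is exposed on the context and that you load each dependency from your backend storage; `ctx.task` and `getTask` below are assumptions for illustration, not documented API:

```js
// Hypothetical helper, for illustration only: load a task document by id
// from your backend storage (with the built-in adapter, a CouchDB read).
async function getTask(id) {
  /* e.g. return couch.get(id) */
}

manobi.addJob('publish-report', async (ctx, done) => {
  // Assumption: the scheduled task (with its "dependencies" array) is on ctx
  const deps = await Promise.all(ctx.task.dependencies.map(id => getTask(id)));

  // Run only when every task we depend on has completed successfully
  if (deps.every(dep => dep.status === 'complete')) {
    return { published: true };
  }
  done({ error: true, message: 'Dependencies are not complete yet' });
});
```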
Every task has its own ID, which can vary based on your backend storage implementation. If you are using the built-in CouchDB storage adapter, it is a UUID string.
Integer count of how many times the task has run after its initial schedule.
After the first failure this counter increments until the task reaches its retry limit or succeeds.
Describes how long the task took to execute the job, in milliseconds.
The total number of the task's executions, counting the initial scheduled run, repetitions and retries.
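As a sketch, the read-only counters on a task that has repeated twice and been retried once might look like this (illustrative values; executions counts the initial run plus repetitions and retries):

```json
{
  "repetitions": 2,
  "retries": 1,
  "executions": 4,
  "duration": 1350
}
```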
The result object you committed using:
done(null, {success: true, msg: "Web crawling done"});
If something went wrong during the execution of your task: a timeout, or an error object you committed:
done({error: true, message: 'Something went wrong'});
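Putting the two together, a job commits either a result or an error through its `done` callback. A minimal sketch reusing the quick start's worker (the `crawler` job name is illustrative):

```js
const fetch = require('node-fetch');

manobi.addJob('crawler', async (ctx, done) => {
  try {
    const res = await fetch('https://dog.ceo/api/breeds/image/random');
    const body = await res.json();
    // The committed object is stored on the task's read-only "result" field
    done(null, { success: true, msg: 'Web crawling done', body });
  } catch (err) {
    // The committed object is stored on the task's read-only "error" field
    done({ error: true, message: err.message });
  }
});
```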
Unlike crontab, Dlay-Core was designed to let you easily run the same job (script) with different contexts.
With crontab, if you need a script to run every minute:
*/1 * * * * ./jobs/collect-customer-usage.js
But what if you have to run something like "collect usage from customer abc on October 6" and "collect usage from customer xyz on October 12"? Then you would have to access your server and set up a different cron job for each of your customers.
Now imagine that customer "abc" is no longer one of your users: you have to access the server again and remove that job.
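With Dlay-Core the same requirement stays in application code: one task per customer, each with its own date and payload. A sketch using `createTask` from the quick start (the `collect-customer-usage` job name and the exact dates are illustrative):

```js
const { createTask } = require('dlay-core')();

createTask({
  "date": "2018-10-06T00:00:00.000Z",
  "worker": "manobi",
  "job": "collect-customer-usage",
  "data": { "customer": "abc" }
});

createTask({
  "date": "2018-10-12T00:00:00.000Z",
  "worker": "manobi",
  "job": "collect-customer-usage",
  "data": { "customer": "xyz" }
});
```

When customer "abc" leaves, you delete or cancel that customer's task from your application instead of editing crontab on a server.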
Since most backend frameworks have some kind of integration with the native crontab, people who need application-level scheduling usually use cron to trigger application scripts that connect to the database and do batch processing.
When your cron job runs a database query on every tick just to check whether there is work to do, it is doing exactly what Dlay was designed to avoid: polling.
Let's say your application deals with campaign date management: e-mail marketing delivery, display media campaigns or e-commerce product offers. To be precise about when a campaign starts and ends, you would have to trigger your cron job every second. If you only have to start a single campaign today at midnight, your job would be uselessly triggered, and would uselessly query your database, 86,400 times (60 × 60 × 24) just to be effective on the last run.
And if you are handling a multi-tenant architecture where each tenant has its own database, you have to repeat the same process for every single database in your datacenter.
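With Dlay-Core the campaign start is a single task scheduled for midnight: the worker is notified through the storage's changes feed when the task is due, so nothing queries the database in the meantime. A sketch (the `start-campaign` job name and payload are illustrative):

```js
createTask({
  "date": "2019-01-01T00:00:00.000Z",
  "worker": "manobi",
  "job": "start-campaign",
  "data": { "campaign": "new-year-sale" }
});
```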
When you use crontab for batch processing, like syncing products, what do you do when a single product sync fails? Those are the problems Dlay-Core was created to solve.
Job queue processors like RabbitMQ, ZeroMQ, Amazon SQS, Ruby's Resque, Python's Celery or Node.js' Bull and Kue are known for implementing the FIFO (first in, first out) algorithm.
First in, first out is perfect for queued job processing and was adapted into computing from the logistics world.
The parallel with the physical world makes it easy to understand the difference between task queues and task scheduling.
In an e-commerce distribution center, the first package to arrive at the logistics department should be the first one to leave the building; otherwise customers start to get angry.
Job queues implementing this same protocol have been the go-to solution when you are doing background processing for long-running tasks.
Now imagine a medicine distributor: products like these have an expiry date. If they are not delivered at the right date and time they become useless, and once past the expiry date it is better if they never arrive at all.
Instead of FIFO, Dlay-Core implements a method called FEFO (first expired, first out): it is designed so that no matter how many tasks you have to process, each task runs at the exact time it was scheduled for.
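A minimal illustration of the difference between the two orderings (not Dlay-Core's actual implementation):

```js
const tasks = [
  { id: 1, enqueuedAt: 1, date: '2019-01-03T00:00:00.000Z' },
  { id: 2, enqueuedAt: 2, date: '2019-01-01T00:00:00.000Z' }
];

// FIFO: run tasks in arrival order -> task 1 runs first
const fifo = tasks.slice().sort((a, b) => a.enqueuedAt - b.enqueuedAt);

// FEFO: run tasks by scheduled date -> task 2 runs first, it "expires" sooner
const fefo = tasks.slice().sort((a, b) => new Date(a.date) - new Date(b.date));
```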
Dlay does not have a task prioritization mechanism like RabbitMQ and others, since it treats the date and time a task was scheduled for as its priority.
Agenda is a very popular job scheduling tool for Node.js. It uses MongoDB as its backend, while Dlay Core has built-in support for CouchDB and allows you to create your own storage adapter.
Recently MongoDB launched Change Streams, which seems to be a mechanism similar to CouchDB's Changes Feed and would allow us to support MongoDB in the next versions; however, it looks like Agenda does not use this feature yet and still relies on polling.
Dlay-Core 2.0 was designed to be distributed across many servers; that's what workers are for. If one of the workers is under heavy load, you can assign its tasks to a new worker at the database level, which is not that easy to do with Agenda.
The initial release of Dlay-Core is actually a few months older than Agenda (under the repo adlayer/after), but Dlay was never published on npm until version 2.0, by which time we had come to know the incredible work Agenda's community has been doing; it was in a way an inspiration for the project's revitalization.