
Background jobs and queueing


Background jobs are key to building truly scalable web apps because they move both time-intensive and computationally intensive tasks from the web layer to a background process outside the user request/response lifecycle. This ensures that web requests can always return immediately and reduces the compounding performance issues that occur when requests become backlogged.

A good rule of thumb is to avoid web requests which run longer than 500ms. If you find that your app has requests that take one, two, or more seconds to complete, then you should consider using a background job instead.

This article provides an overview of this architectural pattern, describes the general approach, and points to several implementations of the concept for a number of different programming languages and frameworks.

Overview

Fetching data from remote APIs, reading RSS feeds, resizing images, and uploading data to S3 are all examples of tasks that should be processed as background jobs. The web process that requests the job schedules it for processing and immediately returns to the client. The client can then poll for updates to see when their job is complete.

Consider the example of a web-based RSS reader. An app like this will have a form where users can submit a new feed URL to be read. After a delay, the user will be taken to a page where they can see the contents of the feed. A simple but non-scalable way to do this would be to retrieve the RSS from the third-party site directly inside the web request.
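
As a rough sketch, the non-scalable version might look like the Ring-style handler below (the handler and parameter names are hypothetical); the point is that the feed fetch happens inside the web request itself.

:::clojure
(defn add-feed-handler
  "Naive approach: fetch the feed inside the web request."
  [request]
  (let [feed-url (get-in request [:params "url"])]
    ;; slurp blocks this web process until the remote server answers:
    ;; a few hundred milliseconds on a good day, 30+ seconds on a bad one.
    {:status 200
     :body   (slurp feed-url)}))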

In-process RSS fetching w/ minimal delay

Fetching data from external sources will sometimes happen in as little as a few hundred milliseconds. Other times it may take several seconds. If the feed's server is down, it could hang for 30 seconds or more until the request times out.

In-process RSS fetching w/ high latency

Tying up your web processes during this time prevents them from handling other requests and results in a very poor user experience. This may not manifest itself under low load, but as soon as your app has multiple simultaneous users you'll find that response times become more and more inconsistent, and you may start to see H12 (request timeout) or other errors. As a result, your application won't be able to scale very well.

Approach

A more predictable and scalable architecture is to background the high-latency or long-running work in a process separate from the web layer and immediately respond to the user's request with some indicator of work progress.
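
On the web side, backgrounding can be as simple as recording the job, handing it to a queue, and returning a job id right away. Below is a minimal Clojure sketch; the `myapp.jobs` namespace, the in-memory `LinkedBlockingQueue` and the `results` atom are purely illustrative, since in a real deployment the queue and results would live in a shared store (Redis, a database, etc.) that both the web and worker processes can reach.

:::clojure
(ns myapp.jobs
  (:import [java.util.concurrent LinkedBlockingQueue]))

;; Illustrative in-process queue and result store. In production these
;; would be an external queue and datastore shared by web and worker dynos.
(defonce job-queue (LinkedBlockingQueue.))
(defonce results (atom {}))

(defn enqueue-job!
  "Called from the web request: record the job, queue it, and return a
  job id immediately so the response never waits on the slow work."
  [feed-url]
  (let [job-id (str (java.util.UUID/randomUUID))]
    (swap! results assoc job-id {:status :pending})
    (.put job-queue {:id job-id :url feed-url})
    job-id))

(defn job-status
  "Called from the polling endpoint to report progress."
  [job-id]
  (get @results job-id {:status :unknown}))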

Backgrounded RSS fetching

Here, one or more background services, running separately from the web process and not serving web requests, read items off their work queue one by one and do the work asynchronously. The results are placed in local storage (a database, Memcached, etc.) when finished.
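
A worker entry point for this hypothetical app might look like the sketch below, matching the `myapp.worker` namespace used in the Procfile later in this article; again, a production worker would pull from a shared external queue rather than process-local memory.

:::clojure
(ns myapp.worker
  (:require [myapp.jobs :as jobs]))

(defn process-job
  "Do the slow work and store the result where the web layer can read it."
  [{:keys [id url]}]
  (swap! jobs/results assoc id {:status :done :body (slurp url)}))

(defn -main [& _args]
  (loop []
    ;; .take blocks until a job is available, so an idle worker simply waits.
    (let [job (.take jobs/job-queue)]
      (try
        (process-job job)
        (catch Exception e
          (swap! jobs/results assoc (:id job)
                 {:status :failed :error (.getMessage e)}))))
    (recur)))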

Judging by the sequence diagram alone, the background approach may not appear to be of any benefit, as there are now more client (HTTP) requests than before. This is true, but it masks the real gain. While the browser may have to make more than one request to retrieve the backgrounded work, each of those requests is very low-latency and predictable. No longer is any single user request waiting, or hanging, for a long-running task to complete.

Handling long-running work with background workers has many benefits. It keeps your web dynos free to serve other requests, which keeps your site snappy. You can now monitor, control and scale the worker processes independently in response to site load. The user experience is also greatly improved when all requests are served immediately, even if only to indicate the current work progress.
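
The polling request itself can then be a cheap read of stored state. Here's a minimal Ring-style sketch that reuses the hypothetical `myapp.jobs` namespace from above (the handler name and parameter key are assumptions):

:::clojure
(ns myapp.status
  (:require [myapp.jobs :as jobs]))

(defn job-status-handler
  "Polling endpoint: a quick read of stored state, so it responds in
  milliseconds no matter how long the background job takes."
  [request]
  (let [job-id (get-in request [:params "id"])
        {:keys [status body]} (jobs/job-status job-id)]
    (if (= status :done)
      {:status 200 :body body}
      {:status 202 :body (str "Job " job-id " is " (name status))})))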

Process model

Cedar's robust process support allows you to specify an application-specific process model, which can include background workers that retrieve and process jobs from the work queue. Here's an example Procfile for a Clojure application describing a process formation with both a web process and a worker process for background jobs:

web:    lein run -m myapp.web
worker: lein run -m myapp.worker

If desired, the worker process type can have a name other than `worker`. Unlike the `web` process type, the name `worker` doesn't have any special significance on Heroku.

You can then scale the number of web dynos independently of the number of worker dynos.

:::term
$ heroku ps:scale web=1 worker=5

Implementation

Backgrounding by itself is just a concept. There are many libraries and services that allow you to implement background jobs in your applications. Some popular tools include database-backed job frameworks and message queues.
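
As one illustration, a database-backed queue boils down to a jobs table that the web layer inserts rows into and a worker polls. The sketch below uses clojure.java.jdbc against a hypothetical `jobs` table (the table, columns and db-spec are assumptions, and the exact jdbc API varies by version); real frameworks add row locking, retries and scheduling on top of this basic idea.

:::clojure
(ns myapp.db-worker
  (:require [clojure.java.jdbc :as jdbc]))

;; Hypothetical `jobs` table with id, url, status and body columns.
(def db-spec {:dbtype "postgresql" :dbname "myapp"})

(defn next-pending-job []
  (first (jdbc/query db-spec
                     ["SELECT id, url FROM jobs WHERE status = 'pending' LIMIT 1"])))

(defn run-once
  "Claim one pending job, do the work, and record the result."
  []
  (when-let [{:keys [id url]} (next-pending-job)]
    (jdbc/update! db-spec :jobs {:status "working"} ["id = ?" id])
    (jdbc/update! db-spec :jobs {:status "done" :body (slurp url)} ["id = ?" id])))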

Some concrete examples of background worker implementations in various languages include:

Ruby/Rails
Node.js
Python
Java
Scala
Clojure