Configuration
Sneakers has a 2-level configuration hierarchy: global and local, or if you prefer, Sneakers and Worker. This kind of configuration is decided offline and set once. In addition, there is an auto-scaling configuration done via a file, which is dynamic and can be reloaded on the fly.
Specify configuration via
Sneakers.configure :key => value, :otherkey => othervalue
All of the following keys are available, divided for convenience into 2 categories: daemon and worker. The defaults are pretty good, so feel free to leave them as-is.
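To illustrate the two levels, here is a sketch of a global value set via Sneakers.configure and then overridden for a single worker (the queue name and timeout values are illustrative):

Sneakers.configure :timeout_job_after => 5

class ReportWorker
  include Sneakers::Worker
  # The worker-level setting wins over the global one for this queue only
  from_queue 'reports', :timeout_job_after => 60

  def work(msg)
    ack!
  end
end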
Connection configuration is done through Sneakers.configure. Here is the default config:
Sneakers.configure :heartbeat => 30,
                   :amqp => 'amqp://guest:guest@localhost:5672',
                   :vhost => '/',
                   :exchange => 'sneakers',
                   :exchange_type => :direct
                   # more configuration options...
By default, each worker opens its own connection to RabbitMQ. With a larger number of workers this is suboptimal.
It is possible to make Sneakers use a provided Bunny connection:
Sneakers.configure :connection => Bunny.new(…)
This connection will be shared by all workers.
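For instance, a minimal sketch that builds one connection up front and hands it to Sneakers so all workers reuse it (the URL and options are illustrative):

require 'bunny'
require 'sneakers'

# One Bunny connection shared by every worker, instead of one connection per worker
connection = Bunny.new('amqp://guest:guest@localhost:5672', :vhost => '/', :heartbeat => 30)

Sneakers.configure :connection => connection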
You can set the level of the logger with:
Sneakers.logger.level = Logger::INFO
This needs to be done after Sneakers.configure, since configuring sets up the logger.
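For example (the WARN level here is only an illustration):

require 'sneakers'
require 'logger'

Sneakers.configure :amqp => 'amqp://guest:guest@localhost:5672'
Sneakers.logger.level = Logger::WARN # set the level after configure so it is not reset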
You can customize the error reporters to be notified of any exception raised inside a worker. By default there is Sneakers::ErrorReporter::DefaultLogger, which just logs the error with the help of Sneakers.logger. For example, to add another reporter that notifies Honeybadger, adjust the config like this:
Sneakers.error_reporters << proc { |exception, _worker, context_hash| Honeybadger.notify(exception, context_hash) }
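Anything that responds to call(exception, worker, context_hash) can be appended to Sneakers.error_reporters, so you can register an object of your own instead of a proc. A sketch (SlackNotifier is a hypothetical client, shown only for illustration):

class SlackErrorReporter
  def call(exception, worker, context_hash)
    # SlackNotifier is hypothetical; swap in your real notification client
    SlackNotifier.post("#{worker.class} failed: #{exception.message}")
  end
end

Sneakers.error_reporters << SlackErrorReporter.new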
The daemon configuration controls how the Sneakers daemon behaves:
:runner_config_file => nil, # A configuration file (see below)
:metrics => nil, # A metrics provider implementation
:daemonize => true, # Send to background
:start_worker_delay => 0.2, # Delay before (re)starting workers, to avoid resource starvation when workers die rapidly
:workers => 4, # Number of worker processes to run
:log => 'sneakers.log', # Log file
:pid_path => 'sneakers.pid', # Pid file
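For example, a daemonized setup could be configured like this (all values below are illustrative, not defaults):

Sneakers.configure :daemonize => true,
                   :workers => 8,
                   :log => 'log/sneakers.log',
                   :pid_path => 'tmp/pids/sneakers.pid',
                   :start_worker_delay => 0.5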
The worker configuration specifies global defaults for all workers:
:timeout_job_after => 5, # Maximum seconds to allow a job to run before timing out
:prefetch => 10, # Grab 10 jobs together. Better speed.
:threads => 10, # Threadpool size (good to match prefetch)
:env => ENV['RACK_ENV'], # Environment
:durable => true, # Is queue durable?
:ack => true, # Must we acknowledge?
:heartbeat => 2, # Keep a good connection with broker
:exchange => 'sneakers', # AMQP exchange
:hooks => {}, # before_fork/after_fork hooks
:start_worker_delay => 10 # Delay between the startup of each worker
When you create a new worker you can specify the configuration explicitly. This is the preferred way since it keeps things untangled and flexible should you want to only tweak one worker type.
The configuration keys are similar to what we've seen so far.
class ProfilingWorker
  include Sneakers::Worker
  from_queue 'downloads',
             :env => 'test',
             :durable => false,
             :ack => true,
             :threads => 50,
             :prefetch => 50,
             :timeout_job_after => 1,
             :exchange => 'dummy',
             :heartbeat => 5

  def work(msg)
    ack!
  end
end
We can achieve auto-scaling by dynamically increasing and decreasing the number of processes. Sneakers can shut down workers gracefully, so it can travel the scaling path elegantly.
For this to happen we need a sneakers.conf.rb file that looks like this:
workers 2

before_fork do
  Sneakers::logger.info "I'm in the parent process, about to fork!"
end

after_fork do
  Sneakers::logger.info "I'm in a child process!"
end
The familiar before/after fork blocks are used to disconnect any suspected shared resources such as TCP connections, databases, etc., much like what you've already seen with ActiveRecord, Redis, Unicorn, or any forking testing framework such as Spork.
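A common sketch of that pattern, assuming an ActiveRecord-backed application (adapt it to whatever shared resources your app holds):

before_fork do
  # Close connections held by the parent so child processes don't share sockets
  ActiveRecord::Base.connection_pool.disconnect! if defined?(ActiveRecord::Base)
end

after_fork do
  # Each child process establishes its own fresh connection
  ActiveRecord::Base.establish_connection if defined?(ActiveRecord::Base)
end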
Next is the workers 2 definition. To scale up, for example, you can edit this file to say workers 4 and signal Sneakers to auto-scale. More on this in the Auto Scaling wiki doc.