mrjob is a Python 2.7/3.3+ package that helps you write and run Hadoop Streaming jobs.
- Stable version (v0.6.0) documentation
- Development version documentation
mrjob fully supports Amazon's Elastic MapReduce (EMR) service, which allows you to buy time on a Hadoop cluster on an hourly basis. mrjob has basic support for Google Cloud Dataproc (Dataproc), which allows you to buy time on a Hadoop cluster on a minute-by-minute basis. It also works with your own Hadoop cluster.
Some important features:
- Run jobs on EMR, Google Cloud Dataproc, your own Hadoop cluster, or locally (for testing).
- Write multi-step jobs (one map-reduce step feeds into the next; see the two-step sketch after this list)
- Easily launch Spark jobs on EMR or your own Hadoop cluster
- Duplicate your production environment inside Hadoop
  - Upload your source tree and put it in your job's $PYTHONPATH
  - Run make and other setup scripts
  - Set environment variables (e.g. $TZ)
  - Easily install python packages from tarballs (EMR only)
  - Setup handled transparently by mrjob.conf config file
- Automatically interpret error logs
- SSH tunnel to hadoop job tracker (EMR only)
- Minimal setup
  - To run on EMR, set $AWS_ACCESS_KEY_ID and $AWS_SECRET_ACCESS_KEY
  - To run on Dataproc, set up your Google account and credentials (see Dataproc Quickstart)
  - To run on your Hadoop cluster, just make sure $HADOOP_HOME is set
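As a sketch of the multi-step feature above: the two-step job below counts words, then picks the most-used one, chaining steps with mrjob.step.MRStep. The class and method names are illustrative.

"""Find the most commonly used word (a two-step job sketch)."""
from mrjob.job import MRJob
from mrjob.step import MRStep
import re

WORD_RE = re.compile(r"[\w']+")

class MRMostUsedWord(MRJob):

    def steps(self):
        # chain two map-reduce steps: count words, then find the max
        return [
            MRStep(mapper=self.mapper_get_words,
                   combiner=self.combiner_count_words,
                   reducer=self.reducer_count_words),
            MRStep(reducer=self.reducer_find_max_word)
        ]

    def mapper_get_words(self, _, line):
        # yield each word in the line
        for word in WORD_RE.findall(line):
            yield (word.lower(), 1)

    def combiner_count_words(self, word, counts):
        # optimization: sum the counts we've seen so far
        yield (word, sum(counts))

    def reducer_count_words(self, word, counts):
        # send every (count, word) pair to the same reducer
        yield None, (sum(counts), word)

    def reducer_find_max_word(self, _, word_count_pairs):
        # word_count_pairs is an iterable of (count, word); keep the largest
        yield max(word_count_pairs)

if __name__ == '__main__':
    MRMostUsedWord.run()

Note how the first step's reducer yields None as the key, so that all (count, word) pairs meet in a single reducer in the second step.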
To install mrjob from PyPI:

pip install mrjob

Or from source:

python setup.py install
Code for this example and more live in mrjob/examples.
"""The classic MapReduce job: count the frequency of words. """ from mrjob.job import MRJob import re WORD_RE = re.compile(r"[\w']+") class MRWordFreqCount(MRJob): def mapper(self, _, line): for word in WORD_RE.findall(line): yield (word.lower(), 1) def combiner(self, word, counts): yield (word, sum(counts)) def reducer(self, word, counts): yield (word, sum(counts)) if __name__ == '__main__': MRWordFreqCount.run()
# locally
python mrjob/examples/mr_word_freq_count.py README.rst > counts
# on EMR
python mrjob/examples/mr_word_freq_count.py README.rst -r emr > counts
# on Dataproc
python mrjob/examples/mr_word_freq_count.py README.rst -r dataproc > counts
# on your Hadoop cluster
python mrjob/examples/mr_word_freq_count.py README.rst -r hadoop > counts
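By default, output is lines of tab-separated JSON-encoded key/value pairs, so counts will look something like this (the exact words and numbers depend on your README.rst, so these values are illustrative):

"and"	5
"the"	12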
To set up EMR on Amazon:
- Create an Amazon Web Services account
- Get your access and secret keys (click "Security Credentials" on your account page)
- Set the environment variables $AWS_ACCESS_KEY_ID and $AWS_SECRET_ACCESS_KEY accordingly
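For example, in a POSIX shell (the key values below are placeholders, not real credentials):

export AWS_ACCESS_KEY_ID='your-access-key-id'
export AWS_SECRET_ACCESS_KEY='your-secret-access-key'
# any job can now run on EMR with -r emr
python mrjob/examples/mr_word_freq_count.py README.rst -r emr > counts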
To set up Dataproc on Google:
- Create a Google Cloud Platform account (the sign-up link is at the top right of cloud.google.com)
- Learn about Google Cloud Platform "projects"
- Select or create a Cloud Platform Console project
- Enable billing for your project
- Go to the API Manager and search for and enable the following APIs:
  - Google Cloud Storage
  - Google Cloud Storage JSON API
  - Google Cloud Dataproc API
- Under Credentials, click Create Credentials and select Service account key. Then select New service account, enter a Name, and choose Key type JSON.
- Install the Google Cloud SDK
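One common way to supply the service-account key from the steps above is Google's Application Default Credentials mechanism; whether your mrjob version picks credentials up this way is an assumption here, so check the Dataproc Quickstart for your setup. The path below is a placeholder:

# point Application Default Credentials at the JSON key you downloaded
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/your-service-account-key.json
python mrjob/examples/mr_word_freq_count.py README.rst -r dataproc > counts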
To run in other AWS regions, upload your source tree, run make, and use other advanced mrjob features, you'll need to set up mrjob.conf. mrjob looks for its conf file in:
- The contents of $MRJOB_CONF
- ~/.mrjob.conf
- /etc/mrjob.conf
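As a minimal sketch, a conf file that runs EMR jobs in another region and uploads your source tree might look like this (option names assume mrjob v0.6; your-src-dir is a placeholder):

# ~/.mrjob.conf
runners:
  emr:
    region: us-west-2
    setup:
    # upload your source tree and put it on the job's PYTHONPATH
    - 'export PYTHONPATH=$PYTHONPATH:your-src-dir#/'

The trailing #/ is mrjob's setup-path syntax: it tells mrjob to upload the directory and substitute its location on the cluster.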
See the mrjob.conf documentation for more information.
- PyCon 2011 mrjob overview
- Introduction to Recommendations and MapReduce with mrjob (source code)
- Social Graph Analysis Using Elastic MapReduce and PyPy
Thanks to Greg Killion (ROMEO ECHO_DELTA) for the logo.