diff --git a/README.md b/README.md
index 2303498b..d4672ec3 100644
--- a/README.md
+++ b/README.md
@@ -1,10 +1,6 @@
 #pydruid
 pydruid exposes a simple API to create, execute, and analyze [Druid](http://druid.io/) queries. pydruid can parse query results into [Pandas](http://pandas.pydata.org/) DataFrame objects for subsequent data analysis -- this offers a tight integration between [Druid](http://druid.io/), the [SciPy](http://www.scipy.org/stackspec.html) stack (for scientific computing) and [scikit-learn](http://scikit-learn.org/stable/) (for machine learning). Additionally, pydruid can export query results into TSV or JSON for further processing with your favorite tool, e.g., R, Julia, Matlab, Excel.
 
-#setup
-
-#documentation
-
 #examples
 
 The following examples show how to execute and analyze the results of three types of queries: timeseries, topN, and groupby. We will use these queries to ask simple questions about twitter's public data set.
@@ -111,6 +107,22 @@ plot(g, "tweets.png", layout=layout, vertex_size=2, bbox=(400, 400), margin=25,
 
 ![alt text](https://github.com/metamx/pydruid/raw/docs/docs/figures/twitter_graph.png "Social Network")
 
+#documentation
+
+pydruid is a [Sphinx](http://sphinx-doc.org/) project. You can view the documentation locally by opening the following file in a web browser:
+
+```
+pydruid/docs/build/html/index.html
+```
+
+The docstrings are written in [reStructuredText](http://docutils.sourceforge.net/rst.html). If you edit them, the documentation can be regenerated by running:
+
+```bash
+make html
+```
+
+from within the docs directory, assuming Sphinx is installed on your machine.
+
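As a companion to the README text above, here is a minimal sketch of the query-to-DataFrame/TSV workflow it describes. The broker URL, datasource, interval, and aggregation names are illustrative assumptions, not values taken from this diff, and the client-level `export_pandas` / `export_tsv` calls match the API style shown in this era of the README; other pydruid versions may expose these exports on the returned query object instead.

```python
# Minimal sketch of the export workflow described in the README intro.
# Assumptions (not taken from this diff): a Druid broker reachable at
# http://localhost:8083 and a 'twitterstream' datasource with a 'count' metric.
from pydruid.client import PyDruid
from pydruid.utils.aggregators import doublesum

client = PyDruid('http://localhost:8083', 'druid/v2/')

# Run a simple timeseries query: tweets per hour over a one-day interval.
client.timeseries(
    datasource='twitterstream',
    granularity='hour',
    intervals='2013-10-04/p1d',
    aggregations={'tweets': doublesum('count')},
)

# Parse the result into a Pandas DataFrame for analysis with the SciPy stack,
# or export it to TSV for further processing in R, Julia, Matlab, Excel, etc.
df = client.export_pandas()
client.export_tsv('tweets_per_hour.tsv')
print(df.head())
```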