updating docs and link to snappy-store for hbase jar
Required for snappy-store HDFS feature (#194)
Sumedh Wale committed Mar 15, 2016
1 parent 18b76a0 commit d1e0663
Showing 3 changed files with 12 additions and 1 deletion.
3 changes: 3 additions & 0 deletions docs/build-instructions.md
@@ -84,6 +84,9 @@ Useful build and test targets:
./gradlew runQuickstart - run the quickstart suite (the "Getting Started" section of docs)
./gradlew precheckin - cleanAll, buildAll, scalaStyle, build docs,
and run full snappydata testsuite including quickstart
./gradlew precheckin -Pstore - cleanAll, buildAll, scalaStyle, build docs,
run full snappydata testsuite including quickstart
and also full snappy-store testsuite
```

The default build directory for projects is _build-artifacts/scala-2.10_. The exception is the _snappy-store_ project, whose default build directory is _build-artifacts/<os>_, where _<os>_ is _linux_ on Linux systems, _osx_ on Mac, and _windows_ on Windows.
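The per-OS directory naming described above can be sketched in shell (a hypothetical helper, not part of the actual build scripts):

```shell
# Hedged sketch: map `uname -s` to the snappy-store build directory name,
# following the linux/osx/windows convention documented above.
case "$(uname -s)" in
  Linux)  os=linux ;;
  Darwin) os=osx ;;
  *)      os=windows ;;
esac
echo "build-artifacts/${os}"
```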
8 changes: 8 additions & 0 deletions docs/configuration.md
@@ -20,6 +20,14 @@ The following are a few important SnappyData properties that you would like to

For a detailed list of SnappyData configurations for Leads and Servers, see [this](http://gemfirexd.docs.pivotal.io/docs-gemfirexd/reference/gfxd_commands/gfxd-server.html). For a detailed list of SnappyData configurations for Locators, see [this](http://gemfirexd.docs.pivotal.io/docs-gemfirexd/reference/gfxd_commands/gfxd-locator.html).

#### HDFS with SnappyData store

If you are using SnappyData store persistence to Hadoop as documented [here](http://gemfirexd.docs.pivotal.io/docs-gemfirexd/disk_storage/persist-hdfs.html), you need to add the [hbase jar](http://search.maven.org/#artifactdetails|org.apache.hbase|hbase|0.94.27|jar) explicitly to the CLASSPATH. The jar is now packaged in the product tree, so you can use that copy or download it from Maven. Then add the following to conf/spark-env.sh:

export SPARK_DIST_CLASSPATH=/path/to/hbase-0.94.27.jar

(substitute the actual path for /path/to/ above)
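Since SPARK_DIST_CLASSPATH may already be set by other tooling, a safer variant appends to it rather than overwriting it. This is a hedged sketch, with /path/to/ as a placeholder for the real jar location:

```shell
# Append the hbase jar to SPARK_DIST_CLASSPATH, preserving any existing
# entries. Substitute the actual jar location for /path/to/.
HBASE_JAR=/path/to/hbase-0.94.27.jar
export SPARK_DIST_CLASSPATH="${SPARK_DIST_CLASSPATH:+${SPARK_DIST_CLASSPATH}:}${HBASE_JAR}"
```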

#### Spark specific properties

Since SnappyData embeds Spark components, [Spark Runtime environment properties](http://spark.apache.org/docs/latest/configuration.html#runtime-environment) (such as spark.driver.memory, spark.executor.memory, spark.driver.extraJavaOptions, and spark.executorEnv) do not take effect; they have to be specified using SnappyData configuration properties. Other Spark properties can be specified in the configuration file of the Lead nodes, prepended with a _hyphen (-)_. Spark properties specified on the Lead node are sent to the Server nodes; any Spark property specified in the conf/servers or conf/locators file is ignored.
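As an illustration of the hyphen-prefix convention above, a hypothetical conf/leads entry might look like this (hostname, port, and values are assumptions for the example, not recommended settings):

```
localhost -locators=localhost:10334 -spark.ui.port=3333 -spark.executor.cores=8
```

Here _-locators_ is a SnappyData property, while _-spark.ui.port_ and _-spark.executor.cores_ are Spark properties passed through with the leading hyphen.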
2 changes: 1 addition & 1 deletion snappy-store
