Updating version to 0.2.1-PREVIEW
 * updating build and docs for the new version
 * linking against updated snappy-store having build fixes and version as 1.5.0-BETA as per the upcoming release
 * copying snappy-store hbase dependency to the product tree so users who need to use GFXD HDFS support can continue to do so
 * adding "-bin" to the distribution zips
Sumedh Wale committed Mar 15, 2016
1 parent 554b661 commit 18b76a0
Showing 7 changed files with 33 additions and 25 deletions.
6 changes: 3 additions & 3 deletions README.md
@@ -17,7 +17,7 @@ SnappyData is a **distributed in-memory data store for real-time operational ana
## Download binary distribution
You can download the latest version of SnappyData here:

- * SnappyData Preview 0.2 download link [(tar.gz)](https://github.com/SnappyDataInc/snappydata/releases/download/v0.2-preview/snappydata-0.2.0-PREVIEW.tar.gz) [(zip)](https://github.com/SnappyDataInc/snappydata/releases/download/v0.2-preview/snappydata-0.2.0-PREVIEW.zip)
+ * SnappyData Preview 0.2 download link [(tar.gz)](https://github.com/SnappyDataInc/snappydata/releases/download/v0.2-preview/snappydata-0.2.1-PREVIEW-bin.tar.gz) [(zip)](https://github.com/SnappyDataInc/snappydata/releases/download/v0.2-preview/snappydata-0.2.1-PREVIEW-bin.zip)

SnappyData has been tested on Linux and Mac OS X. If not already installed, you will need to download [Java 8](http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html).

@@ -32,12 +32,12 @@ SnappyData artifacts are hosted in Maven Central. You can add a Maven dependency
```
groupId: io.snappydata
artifactId: snappy-tools_2.10
- version: 0.2.0-PREVIEW
+ version: 0.2.1-PREVIEW
```

If you are using sbt, add this line to your build.sbt for core snappy artifacts:

- `libraryDependencies += "io.snappydata" % "snappy-core_2.10" % "0.2.0-PREVIEW"`
+ `libraryDependencies += "io.snappydata" % "snappy-core_2.10" % "0.2.1-PREVIEW"`

Check out more specific SnappyData artifacts here: http://mvnrepository.com/artifact/io.snappydata

22 changes: 15 additions & 7 deletions build.gradle
@@ -50,7 +50,7 @@ allprojects {
apply plugin: 'eclipse'

group = 'io.snappydata'
- version = '0.2.0-PREVIEW'
+ version = '0.2.1-PREVIEW'

// apply compiler options
compileJava.options.encoding = 'UTF-8'
@@ -72,7 +72,7 @@ allprojects {
slf4jVersion = '1.7.12'
junitVersion = '4.11'
hadoopVersion = '2.4.1'
- gemfireXDVersion = '2.0-BETA'
+ gemfireXDVersion = '1.5.0-BETA'
buildFlags = ''
createdBy = System.getProperty("user.name")
}
@@ -212,7 +212,6 @@ subprojects {
}
tasks.withType(Test).each { test ->
test.configure {
- onlyIf { !Boolean.getBoolean('skip.tests') }
environment 'SNAPPY_HOME': snappyProductDir,
'SNAPPY_DIST_CLASSPATH': "${sourceSets.test.runtimeClasspath.asPath}"

@@ -450,12 +449,16 @@ task product {
rename { filename -> archiveName }
}
// copy datanucleus jars specifically since they don't work as part of fat jar
+ // copy hbase jar as required for GFXD HDFS support (needs to be explicitly added to SPARK_DIST_CLASSPATH)
// copy bin, sbin, data etc from spark
if (new File(rootDir, "snappy-spark/build.gradle").exists()) {
copy {
from project(":snappy-spark:snappy-spark-hive_${scalaBinaryVersion}").configurations.runtime.filter {
it.getName().contains('datanucleus')
}
+ from project(":snappy-store:gemfire-core").configurations.provided.filter {
+   it.getName().contains('hbase')
+ }
into "${snappyProductDir}/lib"
}
copy {
@@ -568,9 +571,14 @@ distributions {
}
}
}
- distTar.dependsOn product
- distZip.dependsOn product

+ distTar {
+   dependsOn product
+   classifier 'bin'
+ }
+ distZip {
+   dependsOn product
+   classifier 'bin'
+ }
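The `bin` classifier added above is what produces the `-bin` suffix called out in the commit message. As a minimal sketch (the name and version are taken from this commit; the `name-version-classifier.ext` pattern is the standard archive-naming convention, assumed here rather than quoted from the build):

```shell
# Assemble distribution archive names the way the 'bin' classifier does:
# <baseName>-<version>-<classifier>.<extension>
NAME=snappydata
VERSION=0.2.1-PREVIEW
CLASSIFIER=bin
for ext in tar.gz zip; do
  echo "${NAME}-${VERSION}-${CLASSIFIER}.${ext}"
done
```

These names line up with the download links updated in README.md in this same commit.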

def copyTestsCommonResources(def bdir) {
def outdir = "${bdir}/resources/test"
@@ -733,7 +741,7 @@ task checkAll {
dependsOn project(':snappy-spark').getTasksByName('check', true).collect { it.path }
}
if (project.hasProperty('store')) {
- dependsOn ':snappy-store:checkAll'
+ dependsOn ':snappy-store:check'
}
mustRunAfter buildAll
}
14 changes: 7 additions & 7 deletions docs/GettingStarted.md
@@ -44,7 +44,7 @@ SnappyData is a **distributed in-memory data store for real-time operational ana
## Download binary distribution
You can download the latest version of SnappyData here:

- * SnappyData Preview 0.2 download link [(tar.gz)](https://github.com/SnappyDataInc/snappydata/releases/download/v0.2-preview/snappydata-0.2.0-PREVIEW.tar.gz) [(zip)](https://github.com/SnappyDataInc/snappydata/releases/download/v0.2-preview/snappydata-0.2.0-PREVIEW.zip)
+ * SnappyData Preview 0.2 download link [(tar.gz)](https://github.com/SnappyDataInc/snappydata/releases/download/v0.2-preview/snappydata-0.2.1-PREVIEW-bin.tar.gz) [(zip)](https://github.com/SnappyDataInc/snappydata/releases/download/v0.2-preview/snappydata-0.2.1-PREVIEW-bin.zip)

SnappyData has been tested on Linux and Mac OS X. If not already installed, you will need to download [Java 8](http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html).

@@ -59,7 +59,7 @@ SnappyData artifacts are hosted in Maven Central. You can add a Maven dependency
```
groupId: io.snappydata
artifactId: snappy-tools_2.10
- version: 0.2.0-PREVIEW
+ version: 0.2.1-PREVIEW
```

## Working with SnappyData Source Code
@@ -495,7 +495,7 @@ Submit `CreateAndLoadAirlineDataJob` over the REST API to create row and column

```bash
# Submit a job to Lead node on port 8090
- $ ./bin/snappy-job.sh submit --lead localhost:8090 --app-name airlineApp --class io.snappydata.examples.CreateAndLoadAirlineDataJob --app-jar ./lib/quickstart-0.2.0-PREVIEW.jar
+ $ ./bin/snappy-job.sh submit --lead localhost:8090 --app-name airlineApp --class io.snappydata.examples.CreateAndLoadAirlineDataJob --app-jar ./lib/quickstart-0.2.1-PREVIEW.jar
{"status": "STARTED",
"result": {
"jobId": "321e5136-4a18-4c4f-b8ab-f3c8f04f0b48",
@@ -540,7 +540,7 @@ snappyContext.update(rowTableName, filterExpr, newColumnValues, updateColumns)
```bash
# Submit AirlineDataJob to SnappyData's Lead node on port 8090
- $ ./bin/snappy-job.sh submit --lead localhost:8090 --app-name airlineApp --class io.snappydata.examples.AirlineDataJob --app-jar ./lib/quickstart-0.2.0-PREVIEW.jar
+ $ ./bin/snappy-job.sh submit --lead localhost:8090 --app-name airlineApp --class io.snappydata.examples.AirlineDataJob --app-jar ./lib/quickstart-0.2.1-PREVIEW.jar
{ "status": "STARTED",
"result": {
"jobId": "1b0d2e50-42da-4fdd-9ea2-69e29ab92de2",
@@ -656,7 +656,7 @@ Submit the `TwitterPopularTagsJob` that declares a stream table, creates and pop
```bash
# Submit the TwitterPopularTagsJob to SnappyData's Lead node on port 8090
- $ ./bin/snappy-job.sh submit --lead localhost:8090 --app-name TwitterPopularTagsJob --class io.snappydata.examples.TwitterPopularTagsJob --app-jar ./lib/quickstart-0.2.0-PREVIEW.jar --stream
+ $ ./bin/snappy-job.sh submit --lead localhost:8090 --app-name TwitterPopularTagsJob --class io.snappydata.examples.TwitterPopularTagsJob --app-jar ./lib/quickstart-0.2.1-PREVIEW.jar --stream
# Run the following utility in another terminal to simulate a Twitter stream by copying tweets into the folder on which the file stream table is listening.
$ quickstart/scripts/simulateTwitterStream
@@ -671,7 +671,7 @@ $ export APP_PROPS="consumerKey=<consumerKey>,consumerSecret=<consumerSecret>,ac
# Submit the TwitterPopularTagsJob to the Lead node on port 8090; it declares a stream table, creates and populates a top-k structure, registers a CQ on it, and stores the result in a SnappyData store table
# This job runs streaming for two minutes.
- $ ./bin/snappy-job.sh submit --lead localhost:8090 --app-name TwitterPopularTagsJob --class io.snappydata.examples.TwitterPopularTagsJob --app-jar ./lib/quickstart-0.2.0-PREVIEW.jar --stream
+ $ ./bin/snappy-job.sh submit --lead localhost:8090 --app-name TwitterPopularTagsJob --class io.snappydata.examples.TwitterPopularTagsJob --app-jar ./lib/quickstart-0.2.1-PREVIEW.jar --stream
```
The output of the job can be found in `TwitterPopularTagsJob_timestamp.out` in the lead directory, which by default is `SNAPPY_HOME/work/localhost-lead-*/`.
@@ -699,7 +699,7 @@ scala> val airlineDF = sqlContext.table("airline").show
# Start the Spark standalone cluster.
$ sbin/start-all.sh
# Submit AirlineDataSparkApp to Spark Cluster with snappydata's locator host port.
- $ bin/spark-submit --class io.snappydata.examples.AirlineDataSparkApp --master spark://masterhost:7077 --conf snappydata.store.locators=localhost:10334 --conf spark.ui.port=4041 $SNAPPY_HOME/lib/quickstart-0.2.0-PREVIEW.jar
+ $ bin/spark-submit --class io.snappydata.examples.AirlineDataSparkApp --master spark://masterhost:7077 --conf snappydata.store.locators=localhost:10334 --conf spark.ui.port=4041 $SNAPPY_HOME/lib/quickstart-0.2.1-PREVIEW.jar
# The results can be seen on the command line.
```
4 changes: 2 additions & 2 deletions docs/connectingToCluster.md
@@ -8,7 +8,7 @@ The SnappyData SQL Shell (_snappy-shell_) provides a simple command line interfa
// from the SnappyData base directory
$ cd quickstart/scripts
$ ../../bin/snappy-shell
- Version 2.0-BETA
+ Version 1.5.0-BETA
snappy>

//Connect to the cluster as a client
@@ -53,7 +53,7 @@ Any spark application can also use the SnappyData as store and spark as computat
# Start the Spark standalone cluster from SnappyData base directory
$ sbin/start-all.sh
# Submit AirlineDataSparkApp to Spark Cluster with snappydata's locator host port.
- $ bin/spark-submit --class io.snappydata.examples.AirlineDataSparkApp --master spark://masterhost:7077 --conf snappydata.store.locators=locatorhost:port --conf spark.ui.port=4041 $SNAPPY_HOME/lib/quickstart-0.2.0-PREVIEW.jar
+ $ bin/spark-submit --class io.snappydata.examples.AirlineDataSparkApp --master spark://masterhost:7077 --conf snappydata.store.locators=locatorhost:port --conf spark.ui.port=4041 $SNAPPY_HOME/lib/quickstart-0.2.1-PREVIEW.jar

# The results can be seen on the command line.
```
8 changes: 4 additions & 4 deletions docs/jobs.md
@@ -120,14 +120,14 @@ SnappySQLJob trait extends the SparkJobBase trait. It provides users the singlet


#### Submitting jobs
- Following command submits [CreateAndLoadAirlineDataJob](https://github.com/SnappyDataInc/snappydata/blob/master/snappy-examples/src/main/scala/io/snappydata/examples/CreateAndLoadAirlineDataJob.scala) from the [snappy-examples](https://github.com/SnappyDataInc/snappydata/tree/master/snappy-examples/src/main/scala/io/snappydata/examples) directory. This job creates dataframes from parquet files, loads the data from dataframe into column tables and row tables and creates sample table on column table in its runJob method. The program is compiled into a jar file (quickstart-0.2.0-PREVIEW.jar) and submitted to jobs server as shown below.
+ Following command submits [CreateAndLoadAirlineDataJob](https://github.com/SnappyDataInc/snappydata/blob/master/snappy-examples/src/main/scala/io/snappydata/examples/CreateAndLoadAirlineDataJob.scala) from the [snappy-examples](https://github.com/SnappyDataInc/snappydata/tree/master/snappy-examples/src/main/scala/io/snappydata/examples) directory. This job creates dataframes from parquet files, loads the data from dataframe into column tables and row tables and creates sample table on column table in its runJob method. The program is compiled into a jar file (quickstart-0.2.1-PREVIEW.jar) and submitted to jobs server as shown below.

```
$ bin/snappy-job.sh submit \
--lead hostNameOfLead:8090 \
--app-name airlineApp \
--class io.snappydata.examples.CreateAndLoadAirlineDataJob \
- --app-jar $SNAPPY_HOME/lib/quickstart-0.2.0-PREVIEW.jar
+ --app-jar $SNAPPY_HOME/lib/quickstart-0.2.1-PREVIEW.jar
```
The snappy-job.sh utility submits the job and returns a JSON response containing the job's jobId.
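Because the response is JSON, the jobId can be captured in a script for later use. A small sketch, assuming the response shape shown in the submit examples above (the jobId value here is just the sample from those examples):

```shell
# Sample response, shaped like the snappy-job.sh output shown above
RESPONSE='{"status": "STARTED", "result": {"jobId": "321e5136-4a18-4c4f-b8ab-f3c8f04f0b48"}}'
# Pull the jobId out with sed so it can be reused, e.g. in a later status query
JOB_ID=$(printf '%s' "$RESPONSE" | sed -n 's/.*"jobId": "\([^"]*\)".*/\1/p')
echo "$JOB_ID"
```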

@@ -169,7 +169,7 @@ $ bin/snappy-job.sh submit \
--lead hostNameOfLead:8090 \
--app-name airlineApp \
--class io.snappydata.examples.AirlineDataJob \
- --app-jar $SNAPPY_HOME/lib/quickstart-0.2.0-PREVIEW.jar
+ --app-jar $SNAPPY_HOME/lib/quickstart-0.2.1-PREVIEW.jar
```
The status of this job can be queried in the same manner as shown above. The result of this job is a file path that contains the query results.

@@ -183,6 +183,6 @@ $ bin/snappy-job.sh submit \
--lead hostNameOfLead:8090 \
--app-name airlineApp \
--class io.snappydata.examples.TwitterPopularTagsJob \
- --app-jar $SNAPPY_HOME/lib/quickstart-0.2.0-PREVIEW.jar \
+ --app-jar $SNAPPY_HOME/lib/quickstart-0.2.1-PREVIEW.jar \
--stream
```
2 changes: 1 addition & 1 deletion snappy-dunits/build.gradle
@@ -46,7 +46,7 @@ testClasses.doLast {
test {
dependsOn ':cleanDUnit'
dependsOn ':product'
- maxParallelForks = Math.max((int)Math.sqrt(Runtime.getRuntime().availableProcessors() + 1), 2)
+ maxParallelForks = 1
minHeapSize '128m'
maxHeapSize '1g'

2 changes: 1 addition & 1 deletion snappy-store
