Version changes in docs files.
Amogh Shetkar committed Jan 15, 2020
1 parent e5febe3 commit 340a4a7
Showing 27 changed files with 56 additions and 56 deletions.
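Since this commit bumps version strings across 27 files, a quick sweep for stale references is a useful sanity check. A minimal sketch, assuming the repository root as the working directory (the paths below are illustrative, not part of the commit):

```shell
# List any docs that still mention the old 1.1.1 version string.
# An empty result means the bump to 1.2.0 covered every file.
grep -rln --include='*.md' '1\.1\.1' docs/ README.md || echo "no stale version strings"
```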
6 changes: 3 additions & 3 deletions README.md
@@ -102,21 +102,21 @@ SnappyData artifacts are hosted in Maven Central. You can add a Maven dependency
```
groupId: io.snappydata
artifactId: snappydata-cluster_2.11
-version: 1.1.1
+version: 1.2.0
```

### Using SBT Dependency

If you are using SBT, add this line to your **build.sbt** for core SnappyData artifacts:

```
-libraryDependencies += "io.snappydata" % "snappydata-core_2.11" % "1.1.1"
+libraryDependencies += "io.snappydata" % "snappydata-core_2.11" % "1.2.0"
```

For additions related to SnappyData cluster, use:

```
-libraryDependencies += "io.snappydata" % "snappydata-cluster_2.11" % "1.1.1"
+libraryDependencies += "io.snappydata" % "snappydata-cluster_2.11" % "1.2.0"
```

You can find more specific SnappyData artifacts [here](http://mvnrepository.com/artifact/io.snappydata)
6 changes: 3 additions & 3 deletions docs/GettingStarted.md
@@ -99,21 +99,21 @@ SnappyData artifacts are hosted in Maven Central. You can add a Maven dependency
```
groupId: io.snappydata
artifactId: snappydata-cluster_2.11
-version: 1.1.1
+version: 1.2.0
```

### Using SBT Dependency

If you are using SBT, add this line to your **build.sbt** for core SnappyData artifacts:

```
-libraryDependencies += "io.snappydata" % "snappydata-core_2.11" % "1.1.1"
+libraryDependencies += "io.snappydata" % "snappydata-core_2.11" % "1.2.0"
```

For additions related to SnappyData cluster, use:

```
-libraryDependencies += "io.snappydata" % "snappydata-cluster_2.11" % "1.1.1"
+libraryDependencies += "io.snappydata" % "snappydata-cluster_2.11" % "1.2.0"
```

You can find more specific SnappyData artifacts [here](http://mvnrepository.com/artifact/io.snappydata)
6 changes: 3 additions & 3 deletions docs/affinity_modes/connector_mode.md
@@ -49,7 +49,7 @@ You can either start SnappyData members using the `snappy_start_all` script or y

```pre
-./bin/spark-shell --master local[*] --conf spark.snappydata.connection=localhost:1527 --packages "SnappyDataInc:snappydata:1.1.1-s_2.11"
+./bin/spark-shell --master local[*] --conf spark.snappydata.connection=localhost:1527 --packages "SnappyDataInc:snappydata:1.2.0-s_2.11"
```
!!! Note
* The `spark.snappydata.connection` property points to the locator of a running SnappyData cluster. The value of this property is a combination of locator host and JDBC client port on which the locator listens for connections (default is 1527).
@@ -82,11 +82,11 @@ The code example for writing a Smart Connector application program is located in
**Cluster mode**

```pre
-./bin/spark-submit --deploy-mode cluster --class somePackage.someClass --master spark://localhost:7077 --conf spark.snappydata.connection=localhost:1527 --packages "SnappyDataInc:snappydata:1.1.1-s_2.11"
+./bin/spark-submit --deploy-mode cluster --class somePackage.someClass --master spark://localhost:7077 --conf spark.snappydata.connection=localhost:1527 --packages "SnappyDataInc:snappydata:1.2.0-s_2.11"
```
**Client mode**
```pre
-./bin/spark-submit --class somePackage.someClass --master spark://localhost:7077 --conf spark.snappydata.connection=localhost:1527 --packages "SnappyDataInc:snappydata:1.1.1-s_2.11"
+./bin/spark-submit --class somePackage.someClass --master spark://localhost:7077 --conf spark.snappydata.connection=localhost:1527 --packages "SnappyDataInc:snappydata:1.2.0-s_2.11"
```


6 changes: 3 additions & 3 deletions docs/affinity_modes/local_mode.md
@@ -28,15 +28,15 @@ You can use an IDE of your choice, and provide the below dependency to get Snapp
<dependency>
<groupId>io.snappydata</groupId>
<artifactId>snappydata-cluster_2.11</artifactId>
-<version>1.1.1</version>
+<version>1.2.0</version>
</dependency>
```

**Example: SBT dependency**

```pre
// https://mvnrepository.com/artifact/io.snappydata/snappydata-cluster_2.11
-libraryDependencies += "io.snappydata" % "snappydata-cluster_2.11" % "1.1.1"
+libraryDependencies += "io.snappydata" % "snappydata-cluster_2.11" % "1.2.0"
```

**Note**:</br>
@@ -71,5 +71,5 @@ To start SnappyData store you need to create a SnappySession in your program:
If you already have Spark2.0 installed in your local machine you can directly use `--packages` option to download the SnappyData binaries.

```pre
-./bin/spark-shell --packages "SnappyDataInc:snappydata:1.1.1-s_2.11"
+./bin/spark-shell --packages "SnappyDataInc:snappydata:1.2.0-s_2.11"
```
2 changes: 1 addition & 1 deletion docs/configuring_cluster/configuring_cluster.md
@@ -244,7 +244,7 @@ Spark applications run as independent sets of processes on a cluster, coordinate
```pre
$ ./bin/spark-submit --deploy-mode cluster --class somePackage.someClass
--master spark://localhost:7077 --conf spark.snappydata.connection=localhost:1527
---packages 'SnappyDataInc:snappydata:1.1.1-s_2.11'
+--packages 'SnappyDataInc:snappydata:1.2.0-s_2.11'
```
<a id="environment"></a>
## Environment Settings
2 changes: 1 addition & 1 deletion docs/connectors/jdbc_streaming_connector.md
@@ -25,7 +25,7 @@ SnappyData core and SnappyData jdbc streaming connector maven dependencies would
<dependency>
<groupId>io.snappydata</groupId>
<artifactId>snappydata-core_2.11</artifactId>
-<version>1.1.1</version>
+<version>1.2.0</version>
<scope>compile</scope>
</dependency>
```
@@ -54,8 +54,8 @@ Log on to Zeppelin from your web browser and configure the [JDBC Interpreter](ht
|default.password|user123|The JDBC user password|
|default.user|user1|The JDBC username|

-3. **Dependency settings**</br> Since Zeppelin includes only PostgreSQL driver jar by default, you need to add the Client (JDBC) JAR file path for SnappyData. The SnappyData Client (JDBC) JAR file (snappydata-jdbc-2.11_1.1.1.jar) is available on [the release page](https://github.com/SnappyDataInc/snappydata/releases/latest). </br>
-The SnappyData Client (JDBC) JAR file (snappydata-jdbc_2.11-1.1.1.jar)can also be placed under **<ZEPPELIN_HOME>/interpreter/jdbc** before starting Zeppelin instead of providing it in the dependency setting.
+3. **Dependency settings**</br> Since Zeppelin includes only PostgreSQL driver jar by default, you need to add the Client (JDBC) JAR file path for SnappyData. The SnappyData Client (JDBC) JAR file (snappydata-jdbc-2.11_1.2.0.jar) is available on [the release page](https://github.com/SnappyDataInc/snappydata/releases/latest). </br>
+The SnappyData Client (JDBC) JAR file (snappydata-jdbc_2.11-1.2.0.jar)can also be placed under **<ZEPPELIN_HOME>/interpreter/jdbc** before starting Zeppelin instead of providing it in the dependency setting.

4. If required, edit other properties, and then click **Save** to apply your changes.

4 changes: 2 additions & 2 deletions docs/howto/connect_oss_vis_client_tools.md
@@ -60,7 +60,7 @@ To connect SnappyData from SQL Workbench/J, do the following:
4. Click **Manage Drivers** from the bottom left. The **Manage driver** dialog box is displayed.
5. Enter the following details:
* **Name**: Provide a name for the driver.
-* **Library**: Click the folder icon and select the JDBC Client jar. <br> You must download the JDBC Client jar (snappydata-jdbc_2.11-1.1.1.jar) from the SnappyData website to your local machine.
+* **Library**: Click the folder icon and select the JDBC Client jar. <br> You must download the JDBC Client jar (snappydata-jdbc_2.11-1.2.0.jar) from the SnappyData website to your local machine.
* **Classname**: **io.snappydata.jdbc.ClientDriver**.
* **Sample** **URL**: jdbc:snappydata://server:port/
6. Click **OK**. The **Select Connection Profile** page is displayed.
@@ -139,7 +139,7 @@ To connect SnappyData from SQuirreL SQL Client, do the following:
* website URL
3. Add the downloaded **snappydata jdbc jar** in the extra classpath tab and provide the class name to be used for the connection. <br>
```
-jdbc jar: https://mvnrepository.com/artifact/io.snappydata/snappydata-jdbc_2.11/1.1.1
+jdbc jar: https://mvnrepository.com/artifact/io.snappydata/snappydata-jdbc_2.11/1.2.0
jdbc class: io.snappydata.jdbc.ClientPoolDriver
```
4. Go to **Aliases** tab and then click **+** to add a new alias. </br> ![Images](../Images/sql_clienttools_images/squirrel2.png)
4 changes: 2 additions & 2 deletions docs/howto/connect_using_jdbc_driver.md
@@ -21,14 +21,14 @@ You can use the Maven or the SBT dependencies to get the latest released version
<dependency>
<groupId>io.snappydata</groupId>
<artifactId>snappydata-jdbc_2.11</artifactId>
-<version>1.1.1</version>
+<version>1.2.0</version>
</dependency>
```

**Example: SBT dependency**
```pre
// https://mvnrepository.com/artifact/io.snappydata/snappydata-store-client
-libraryDependencies += "io.snappydata" % "snappydata-jdbc_2.11" % "1.1.1"
+libraryDependencies += "io.snappydata" % "snappydata-jdbc_2.11" % "1.2.0"
```

!!! Note
4 changes: 2 additions & 2 deletions docs/howto/connect_using_odbc_driver.md
@@ -19,7 +19,7 @@ To download and install the Visual C++ Redistributable for Visual Studio 2013:

To download and install the ODBC driver:

-1. [Download the TIBCO ComputeDB 1.1.1 Enterprise Version](https://edelivery.tibco.com/storefront/index.ep). The downloaded file contains the TIBCO ComputeDB ODBC driver installers.
+1. [Download the TIBCO ComputeDB 1.2.0 Enterprise Version](https://edelivery.tibco.com/storefront/index.ep). The downloaded file contains the TIBCO ComputeDB ODBC driver installers.

2. Depending on your Windows installation, extract the contents of the 32-bit or 64-bit version of the TIBCO ComputeDB ODBC Driver.

@@ -28,7 +28,7 @@ To download and install the ODBC driver:
|32-bit for 32-bit platform|TIB_compute-odbc_1.2.0_win_x86.zip|
|64-bit for 64-bit platform|TIB_compute-odbc_1.2.0_win_x64.zip|

-4. Double-click on the extracted **TIB_compute-odbc_1.1.1_win.msi** file, and follow the steps to complete the installation.
+4. Double-click on the extracted **TIB_compute-odbc_1.2.0_win.msi** file, and follow the steps to complete the installation.

!!! Note
Ensure that [TIBCO ComputeDB is installed](../install.md) and the [TIBCO ComputeDB cluster is running](start_snappy_cluster.md).
4 changes: 2 additions & 2 deletions docs/howto/run_spark_job_inside_cluster.md
@@ -29,15 +29,15 @@ To compile your job, use the Maven/SBT dependencies for the latest released vers
<dependency>
<groupId>io.snappydata</groupId>
<artifactId>snappydata-cluster_2.11</artifactId>
-<version>1.1.1</version>
+<version>1.2.0</version>
</dependency>
```

**Example: SBT dependency**:

```pre
// https://mvnrepository.com/artifact/io.snappydata/snappydata-cluster_2.11
-libraryDependencies += "io.snappydata" % "snappydata-cluster_2.11" % "1.1.1"
+libraryDependencies += "io.snappydata" % "snappydata-cluster_2.11" % "1.2.0"
```

!!! Note
8 changes: 4 additions & 4 deletions docs/howto/spark_installation_using_smart_connector.md
@@ -51,7 +51,7 @@ Start a SnappyData cluster and create a table.
$ ./sbin/snappy-start-all.sh
$ ./bin/snappy
-SnappyData version 1.1.1
+SnappyData version 1.2.0
snappy> connect client 'localhost:1527';
Using CONNECTION0
snappy> CREATE TABLE SNAPPY_COL_TABLE(r1 Integer, r2 Integer) USING COLUMN;
@@ -67,7 +67,7 @@ The Smart Connector Application can now connect to this SnappyData cluster. </br
The following command executes an example that queries SNAPPY_COL_TABLE and creates a new table inside the SnappyData cluster. </br>SnappyData package has to be specified along with the application jar to run the Smart Connector application.

```pre
-$ ./bin/spark-submit --master local[*] --conf snappydata.connection=localhost:1527 --class org.apache.spark.examples.snappydata.SmartConnectorExample --packages SnappyDataInc:snappydata:1.1.1-s_2.11 $SNAPPY_HOME/examples/jars/quickstart.jar
+$ ./bin/spark-submit --master local[*] --conf snappydata.connection=localhost:1527 --class org.apache.spark.examples.snappydata.SmartConnectorExample --packages SnappyDataInc:snappydata:1.2.0-s_2.11 $SNAPPY_HOME/examples/jars/quickstart.jar
```

## Execute a Smart Connector Application
@@ -77,7 +77,7 @@ Start a SnappyData cluster and create a table inside it.
$ ./sbin/snappy-start-all.sh
$ ./bin/snappy
-SnappyData version 1.1.1
+SnappyData version 1.2.0
snappy> connect client 'localhost:1527';
Using CONNECTION0
snappy> CREATE TABLE SNAPPY_COL_TABLE(r1 Integer, r2 Integer) USING COLUMN;
@@ -91,5 +91,5 @@ exit;
A Smart Connector Application can now connect to this SnappyData cluster. The following command executes an example that queries SNAPPY_COL_TABLE and creates a new table inside SnappyData cluster. SnappyData package has to be specified along with the application jar to run the Smart Connector application.

```pre
-$ ./bin/spark-submit --master local[*] --conf spark.snappydata.connection=localhost:1527 --class org.apache.spark.examples.snappydata.SmartConnectorExample --packages SnappyDataInc:snappydata:1.1.1-s_2.11 $SNAPPY_HOME/examples/jars/quickstart.jar
+$ ./bin/spark-submit --master local[*] --conf spark.snappydata.connection=localhost:1527 --class org.apache.spark.examples.snappydata.SmartConnectorExample --packages SnappyDataInc:snappydata:1.2.0-s_2.11 $SNAPPY_HOME/examples/jars/quickstart.jar
```
6 changes: 3 additions & 3 deletions docs/howto/start_snappy_cluster.md
@@ -16,15 +16,15 @@ It may take 30 seconds or more to bootstrap the entire cluster on your local mac
**Sample Output**: The sample output for `snappy-start-all.sh` is displayed as:

```pre
-Logs generated in /home/cbhatt/snappydata-1.1.1-bin/work/localhost-locator-1/snappylocator.log
+Logs generated in /home/cbhatt/snappydata-1.2.0-bin/work/localhost-locator-1/snappylocator.log
SnappyData Locator pid: 10813 status: running
Distributed system now has 1 members.
Started Thrift locator (Compact Protocol) on: localhost/127.0.0.1[1527]
-Logs generated in /home/cbhatt/snappydata-1.1.1-bin/work/localhost-server-1/snappyserver.log
+Logs generated in /home/cbhatt/snappydata-1.2.0-bin/work/localhost-server-1/snappyserver.log
SnappyData Server pid: 11018 status: running
Distributed system now has 2 members.
Started Thrift server (Compact Protocol) on: localhost/127.0.0.1[1528]
-Logs generated in /home/cbhatt/snappydata-1.1.1-bin/work/localhost-lead-1/snappyleader.log
+Logs generated in /home/cbhatt/snappydata-1.2.0-bin/work/localhost-lead-1/snappyleader.log
SnappyData Leader pid: 11213 status: running
Distributed system now has 3 members.
Starting hive thrift server (session=snappy)
2 changes: 1 addition & 1 deletion docs/howto/tableauconnect.md
@@ -63,7 +63,7 @@ To install Tableau desktop:

### Step 3: Connect Tableau Desktop to SnappyData Server

-When using Tableau with the SnappyData ODBC Driver for the first time, you must add the **odbc-snappydata.tdc** file that is available in the downloaded **TIB_compute-odbc_1.1.1_win.zip**.
+When using Tableau with the SnappyData ODBC Driver for the first time, you must add the **odbc-snappydata.tdc** file that is available in the downloaded **TIB_compute-odbc_1.2.0_win.zip**.

To connect the Tableau Desktop to the SnappyData Server:

4 changes: 2 additions & 2 deletions docs/howto/using_snappydata_for_any_spark_dist.md
@@ -6,11 +6,11 @@ Following is a sample of Spark JDBC extension setup and usage:

1. Include the **TIB_compute-jdbc** package in the Spark job with spark-submit or spark-shell:

-    $SPARK_HOME/bin/spark-shell --jars snappydata-jdbc-2.11_1.1.1.jar
+    $SPARK_HOME/bin/spark-shell --jars snappydata-jdbc-2.11_1.2.0.jar

2. Set the session properties.</br>The SnappyData connection properties (to enable auto-configuration of JDBC URL) and credentials can be provided in Spark configuration itself, or set later in SparkSession to avoid passing them in all the method calls. These properties can also be provided in **spark-defaults.conf ** along with all the other Spark properties.</br> Following is a sample code of configuring the properties in **SparkConf**:

-    $SPARK_HOME/bin/spark-shell --jars snappydata-jdbc-2.11_1.1.1.jar --conf spark.snappydata.connection=localhost:1527 --conf spark.snappydata.user=<user> --conf spark.snappydata.password=<password>
+    $SPARK_HOME/bin/spark-shell --jars snappydata-jdbc-2.11_1.2.0.jar --conf spark.snappydata.connection=localhost:1527 --conf spark.snappydata.user=<user> --conf spark.snappydata.password=<password>

Overloads of the above methods accepting *user+password* and *host+port* is also provided in case those properties are not set in the session or needs to be overridden. You can optionally pass additional connection properties similarly as in the **DataFrameReader.jdbc** method.

6 changes: 3 additions & 3 deletions docs/index.md
@@ -99,21 +99,21 @@ SnappyData artifacts are hosted in Maven Central. You can add a Maven dependency
```
groupId: io.snappydata
artifactId: snappydata-cluster_2.11
-version: 1.1.1
+version: 1.2.0
```

### Using SBT Dependency

If you are using SBT, add this line to your **build.sbt** for core SnappyData artifacts:

```
-libraryDependencies += "io.snappydata" % "snappydata-core_2.11" % "1.1.1"
+libraryDependencies += "io.snappydata" % "snappydata-core_2.11" % "1.2.0"
```

For additions related to SnappyData cluster, use:

```
-libraryDependencies += "io.snappydata" % "snappydata-cluster_2.11" % "1.1.1"
+libraryDependencies += "io.snappydata" % "snappydata-cluster_2.11" % "1.2.0"
```

You can find more specific SnappyData artifacts [here](http://mvnrepository.com/artifact/io.snappydata)
4 changes: 2 additions & 2 deletions docs/install.md
@@ -14,9 +14,9 @@ For more information on the capabilities of the Community Edition and Enterprise
<heading2>Download SnappyData Community Edition</heading2>


-Download the [SnappyData 1.1.1 Community Edition (Open Source)](https://github.com/SnappyDataInc/snappydata/releases/) from the release page, which lists the latest and previous releases of SnappyData. The packages are available in compressed files (.tar format).
+Download the [SnappyData 1.2.0 Community Edition (Open Source)](https://github.com/SnappyDataInc/snappydata/releases/) from the release page, which lists the latest and previous releases of SnappyData. The packages are available in compressed files (.tar format).

-* [**SnappyData 1.1.1 Release download link**](https://github.com/SnappyDataInc/snappydata/releases/download/v1.1.1/snappydata-1.1.1-bin.tar.gz)
+* [**SnappyData 1.2.0 Release download link**](https://github.com/SnappyDataInc/snappydata/releases/download/v1.2.0/snappydata-1.2.0-bin.tar.gz)


<heading2>Download SnappyData Enterprise Edition</heading2>
4 changes: 2 additions & 2 deletions docs/install/setting_up_cluster_on_amazon_web_services.md
@@ -281,12 +281,12 @@ For example, to use **SnappyData Enterprise** build to launch the cluster, downl
www.snappydata.io/download on your local machine and give its path as value to above option.

```pre
-./snappy-ec2 -k my-ec2-key -i ~/my-ec2-key.pem launch my-cluster --snappydata-tarball="/home/ec2-user/snappydata/distributions/snappydata-1.1.1-bin.tar.gz"
+./snappy-ec2 -k my-ec2-key -i ~/my-ec2-key.pem launch my-cluster --snappydata-tarball="/home/ec2-user/snappydata/distributions/snappydata-1.2.0-bin.tar.gz"
```

Alternatively, you can also put your build file on a public web server and provide its URL to this option.
```pre
-./snappy-ec2 -k my-ec2-key -i ~/my-ec2-key.pem launch my-cluster --snappydata-tarball="https://s3-us-east-2.amazonaws.com/mybucket/distributions/snappydata-1.1.1-bin.tar.gz"
+./snappy-ec2 -k my-ec2-key -i ~/my-ec2-key.pem launch my-cluster --snappydata-tarball="https://s3-us-east-2.amazonaws.com/mybucket/distributions/snappydata-1.2.0-bin.tar.gz"
```

The build file should be in **.tar.gz** format.
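The tarball requirement above can be sanity-checked before launching. A minimal sketch, assuming the renamed 1.2.0 build file sits in the current directory (the file name follows the release naming used in this commit; the location is an assumption):

```shell
# List the archive contents to confirm it is a valid .tar.gz with the
# expected top-level directory before passing it to --snappydata-tarball.
tar -tzf snappydata-1.2.0-bin.tar.gz | head -n 3
```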
2 changes: 1 addition & 1 deletion docs/programming_guide/snappydata_jobs.md
@@ -139,7 +139,7 @@ For writing jobs users need to include [Maven/SBT dependencies for the latest re
For example, gradle can be configured as:

```pre
-compile('io.snappydata:snappydata-cluster_2.11:1.1.1') {
+compile('io.snappydata:snappydata-cluster_2.11:1.2.0') {
exclude(group: 'io.snappydata', module: 'snappy-spark-unsafe_2.11')
exclude(group: 'io.snappydata', module: 'snappy-spark-core_2.11')
exclude(group: 'io.snappydata',module: 'snappy-spark-yarn_2.11')