diff --git a/README.md b/README.md
index 089824d..b148516 100644
--- a/README.md
+++ b/README.md
@@ -30,7 +30,7 @@ For use with Java and Scala projects, the package can be found [here](https://ce
<dependency>
<groupId>io.qdrant</groupId>
<artifactId>spark</artifactId>
- <version>1.12</version>
+ <version>1.13</version>
</dependency>
```
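For PySpark, the same Maven coordinates can also be resolved at runtime through `spark.jars.packages` instead of pointing `spark.jars` at a downloaded JAR. A minimal sketch, assuming the 1.13 artifact is available from Maven Central:

```python
from pyspark.sql import SparkSession

# Pull the connector from Maven Central at session start-up
# instead of shipping a local JAR file.
spark = (
    SparkSession.builder.config("spark.jars.packages", "io.qdrant:spark:1.13")
    .master("local[*]")
    .appName("qdrant")
    .getOrCreate()
)
```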
@@ -43,7 +43,7 @@ from pyspark.sql import SparkSession
spark = SparkSession.builder.config(
"spark.jars",
- "spark-1.12-jar-with-dependencies.jar", # specify the downloaded JAR file
+ "spark-1.13-jar-with-dependencies.jar", # specify the downloaded JAR file
)
.master("local[*]")
.appName("qdrant")
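With the connector JAR on the classpath, a DataFrame can then be written to Qdrant through the connector's data source. A minimal sketch; the format name `io.qdrant.spark.Qdrant` and the `qdrant_url`, `collection_name`, `embedding_field`, and `schema` options follow earlier connector releases, so verify them against the 1.13 documentation:

```python
# `df` is a hypothetical DataFrame whose "embedding" column is ArrayType(FloatType()).
(
    df.write.format("io.qdrant.spark.Qdrant")
    .option("qdrant_url", "http://localhost:6334")  # Qdrant gRPC URL (assumed local instance)
    .option("collection_name", "my_collection")     # collection must already exist
    .option("embedding_field", "embedding")         # column holding the vectors
    .option("schema", df.schema.json())             # DataFrame schema passed to the connector
    .mode("append")
    .save()
)
```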
@@ -73,7 +73,7 @@ To load data into Qdrant, a collection has to be created beforehand with the app
You can use the `qdrant-spark` connector as a library in Databricks to ingest data into Qdrant.
- Go to the `Libraries` section in your cluster dashboard.
- Select `Install New` to open the library installation modal.
-- Search for `io.qdrant:spark:1.12` in the Maven packages and click `Install`.
+- Search for `io.qdrant:spark:1.13` in the Maven packages and click `Install`.
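As the README notes above, the target collection has to exist before ingestion, with vector dimensions and a distance metric matching the embeddings. A minimal sketch using the `qdrant-client` Python package; the URL, collection name, and vector size are placeholders, not part of this change:

```python
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, VectorParams

# Create the target collection before running the Spark ingestion job.
client = QdrantClient(url="http://localhost:6333")  # assumed local Qdrant instance
client.create_collection(
    collection_name="my_collection",
    vectors_config=VectorParams(size=384, distance=Distance.COSINE),  # match your embedding size
)
```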
diff --git a/pom.xml b/pom.xml
index 1393228..af503f3 100644
--- a/pom.xml
+++ b/pom.xml
@@ -6,7 +6,7 @@
<modelVersion>4.0.0</modelVersion>
<groupId>io.qdrant</groupId>
<artifactId>spark</artifactId>
- <version>1.12</version>
+ <version>1.13</version>
<name>qdrant-spark</name>
<url>https://github.com/qdrant/qdrant-spark</url>
<description>An Apache Spark connector for the Qdrant vector database</description>