Commit
added external docs (#4111)
Signed-off-by: Raza Jafri <rjafri@nvidia.com>

Co-authored-by: Raza Jafri <rjafri@nvidia.com>
razajafri authored Nov 16, 2021
1 parent 2d14428 commit 2d62a15
Showing 3 changed files with 10 additions and 6 deletions.
pom.xml: 3 additions & 0 deletions

@@ -734,6 +734,7 @@
     <buildver>301</buildver>
     <maven.compiler.source>1.8</maven.compiler.source>
     <maven.compiler.target>1.8</maven.compiler.target>
+    <java.major.version>8</java.major.version>
     <spark.version>${spark301.version}</spark.version>
     <spark.test.version>${spark.version}</spark.test.version>
     <spark.version.classifier>spark${buildver}</spark.version.classifier>
@@ -1031,7 +1032,9 @@
       </goals>
       <configuration>
         <args>
+          <arg>-doc-external-doc:${java.home}/lib/rt.jar#https://docs.oracle.com/javase/${java.major.version}/docs/api/index.html</arg>
           <arg>-doc-external-doc:${settings.localRepository}/${scala.local-lib.path}#https://scala-lang.org/api/${scala.version}/</arg>
+          <arg>-doc-external-doc:${settings.localRepository}/org/apache/spark/spark-sql_${scala.binary.version}/${spark.version}/spark-sql_${scala.binary.version}-${spark.version}.jar#https://spark.apache.org/docs/${spark.version}/api/scala/index.html</arg>
         </args>
       </configuration>
     </execution>
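Scaladoc's -doc-external-doc option takes a classpath-entry#doc-URL pair: any type that resolves from that jar is linked to the given external site instead of being left as an unresolved local link. With the three mappings above, references to JDK, Scala library, and Spark classes in this project's Scaladoc resolve to docs.oracle.com, scala-lang.org, and spark.apache.org respectively. A minimal illustration follows; the class and comment are hypothetical, not part of the commit:

  /**
   * With the mappings above, fully qualified Scaladoc references such as
   * [[org.apache.spark.sql.columnar.CachedBatchSerializer]] (found in the
   * spark-sql jar, so linked to the Spark API site) and
   * [[java.util.concurrent.TimeUnit]] (found in rt.jar, so linked to the
   * JDK 8 javadoc) render as working links instead of warnings.
   */
  class ExternalDocExample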
@@ -132,9 +132,10 @@ class ParquetCachedBatchSerializer extends GpuCachedBatchSerializer {
 
   /**
    * Builds a function that can be used to filter batches prior to being decompressed.
-   * In most cases extending [[SimpleMetricsCachedBatchSerializer]] will provide the filter logic
-   * necessary. You will need to provide metrics for this to work. [[SimpleMetricsCachedBatch]]
-   * provides the APIs to hold those metrics and explains the metrics used, really just min and max.
+   * In most cases extending [[org.apache.spark.sql.columnar.SimpleMetricsCachedBatchSerializer]]
+   * will provide the filter logic necessary. You will need to provide metrics for this to work.
+   * [[org.apache.spark.sql.columnar.SimpleMetricsCachedBatch]] provides the APIs to hold those
+   * metrics and explains the metrics used, really just min and max.
    * Note that this is intended to skip batches that are not needed, and the actual filtering of
    * individual rows is handled later.
    * @param predicates the set of expressions to use for filtering.
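The revised comment points at the Spark 3.1+ org.apache.spark.sql.columnar API. A minimal sketch of the pattern it describes, assuming that API and with every name below hypothetical: mixing SimpleMetricsCachedBatch into the batch type supplies the per-column stats row that the buildFilter inherited from SimpleMetricsCachedBatchSerializer uses to skip whole batches by min/max.

  import org.apache.spark.rdd.RDD
  import org.apache.spark.sql.catalyst.InternalRow
  import org.apache.spark.sql.catalyst.expressions.Attribute
  import org.apache.spark.sql.columnar.{CachedBatch, SimpleMetricsCachedBatch, SimpleMetricsCachedBatchSerializer}
  import org.apache.spark.sql.internal.SQLConf
  import org.apache.spark.sql.types.StructType
  import org.apache.spark.sql.vectorized.ColumnarBatch
  import org.apache.spark.storage.StorageLevel

  // Hypothetical batch type: `stats` holds the per-column
  // (lowerBound, upperBound, nullCount, rowCount, sizeInBytes) values that
  // the inherited filter consults.
  case class ExampleCachedBatch(numRows: Int, payload: Array[Byte], stats: InternalRow)
    extends SimpleMetricsCachedBatch

  // buildFilter is NOT overridden here: SimpleMetricsCachedBatchSerializer
  // derives batch-skipping predicates from the stats row above.
  class ExampleSerializer extends SimpleMetricsCachedBatchSerializer {
    override def supportsColumnarInput(schema: Seq[Attribute]): Boolean = false
    override def supportsColumnarOutput(schema: StructType): Boolean = false
    override def convertInternalRowToCachedBatch(
        input: RDD[InternalRow], schema: Seq[Attribute],
        storageLevel: StorageLevel, conf: SQLConf): RDD[CachedBatch] =
      ??? // compress rows into ExampleCachedBatch, computing stats per column
    override def convertColumnarBatchToCachedBatch(
        input: RDD[ColumnarBatch], schema: Seq[Attribute],
        storageLevel: StorageLevel, conf: SQLConf): RDD[CachedBatch] =
      ??? // columnar input is declared unsupported above
    override def convertCachedBatchToInternalRow(
        input: RDD[CachedBatch], cacheAttributes: Seq[Attribute],
        selectedAttributes: Seq[Attribute], conf: SQLConf): RDD[InternalRow] =
      ??? // decompress surviving batches back into rows
    override def convertCachedBatchToColumnarBatch(
        input: RDD[CachedBatch], cacheAttributes: Seq[Attribute],
        selectedAttributes: Seq[Attribute], conf: SQLConf): RDD[ColumnarBatch] =
      ??? // columnar output is declared unsupported above
  }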
@@ -53,9 +53,9 @@ object ExplainPlan {
    * @param explain If ALL returns all the explain data, otherwise just returns what does not
    *                work on the GPU. Default is ALL.
    * @return String containing the explained plan.
-   * @throws IllegalArgumentException if an argument is invalid or it is unable to determine the
-   *                                  Spark version
-   * @throws IllegalStateException if the plugin gets into an invalid state while trying
+   * @throws java.lang.IllegalArgumentException if an argument is invalid or it is unable to
+   *                                            determine the Spark version
+   * @throws java.lang.IllegalStateException if the plugin gets into an invalid state while trying
    *                                           to process the plan or there is an unexpected exception.
    */
   @throws[IllegalArgumentException]
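A usage sketch for the method this scaladoc documents, assuming the spark-rapids ExplainPlan.explainPotentialGpuPlan entry point; the session and DataFrame below are illustrative:

  import com.nvidia.spark.rapids.ExplainPlan
  import org.apache.spark.sql.SparkSession

  object ExplainDemo {
    def main(args: Array[String]): Unit = {
      val spark = SparkSession.builder().appName("explain-demo").getOrCreate()
      val df = spark.range(1000).selectExpr("id", "id % 10 AS bucket")

      try {
        // Per the @param above: "ALL" returns all the explain data; any other
        // value returns only what does not work on the GPU.
        val explained: String = ExplainPlan.explainPotentialGpuPlan(df, "ALL")
        println(explained)
      } catch {
        // The two conditions documented by the @throws tags above.
        case e: IllegalArgumentException => System.err.println(e.getMessage)
        case e: IllegalStateException => System.err.println(e.getMessage)
      }
    }
  }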
