---
layout: page
title: Configuration
nav_order: 4
---
# RAPIDS Accelerator for Apache Spark Configuration

The following is the list of options that `rapids-plugin-4-spark` supports.

On startup use: `--conf [conf key]=[conf value]`. For example:

```shell
${SPARK_HOME}/bin/spark-shell --jars 'rapids-4-spark_2.12-22.04.0-SNAPSHOT.jar,cudf-22.04.0-cuda11.jar' \
--conf spark.plugins=com.nvidia.spark.SQLPlugin \
--conf spark.rapids.sql.incompatibleOps.enabled=true
```

At runtime use: `spark.conf.set("[conf key]", [conf value])`. For example:

```scala
scala> spark.conf.set("spark.rapids.sql.incompatibleOps.enabled", true)
```

All configs can be set on startup, but some configs, especially for shuffle, will not work if they are set at runtime.
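For instance, settings that size memory pools are read as the executors start, so they belong on the command line. A minimal startup sketch, assuming the same jars as above; the values are illustrative, not recommendations:

```shell
# Startup-time settings: the pinned pool is allocated when the executor
# launches, so it cannot be changed later with spark.conf.set.
${SPARK_HOME}/bin/spark-shell \
  --conf spark.plugins=com.nvidia.spark.SQLPlugin \
  --conf spark.rapids.memory.pinnedPool.size=2g \
  --conf spark.rapids.sql.concurrentGpuTasks=2
```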

## General Configuration

| Name | Description | Default Value |
|------|-------------|---------------|
| spark.rapids.alluxio.pathsToReplace | List of paths to be replaced with the corresponding Alluxio scheme. E.g., when it is set to "s3:/foo->alluxio://0.1.2.3:19998/foo,gcs:/bar->alluxio://0.1.2.3:19998/bar", s3:/foo/a.csv will be replaced with alluxio://0.1.2.3:19998/foo/a.csv and gcs:/bar/b.csv will be replaced with alluxio://0.1.2.3:19998/bar/b.csv | None |
| spark.rapids.cloudSchemes | Comma separated list of additional URI schemes that are to be considered cloud based filesystems. Schemes already included: abfs, abfss, dbfs, gs, s3, s3a, s3n, wasbs. Cloud based stores are generally completely separate from the executors and likely have a higher I/O read cost. Many times the cloud filesystems also get better throughput when you have multiple readers in parallel. This is used with spark.rapids.sql.format.parquet.reader.type | None |
| spark.rapids.gpu.resourceName | The name of the Spark resource that represents a GPU that you want the plugin to use if using custom resources with Spark | gpu |
| spark.rapids.memory.gpu.allocFraction | The fraction of available (free) GPU memory that should be allocated for pooled memory. This must be less than or equal to the maximum limit configured via spark.rapids.memory.gpu.maxAllocFraction, and greater than or equal to the minimum limit configured via spark.rapids.memory.gpu.minAllocFraction | 1.0 |
| spark.rapids.memory.gpu.debug | Provides a log of GPU memory allocations and frees. If set to STDOUT or STDERR the logging will go there. Setting it to NONE disables logging. All other values are reserved for possible future expansion and in the meantime will disable logging | NONE |
| spark.rapids.memory.gpu.direct.storage.spill.batchWriteBuffer.size | The size of the GPU memory buffer used to batch small buffers when spilling to GDS. Note that this buffer is mapped to the PCI Base Address Register (BAR) space, which may be very limited on some GPUs (e.g. the NVIDIA T4 only has 256 MiB), and it is also used by UCX bounce buffers | 8388608 |
| spark.rapids.memory.gpu.direct.storage.spill.enabled | Should GPUDirect Storage (GDS) be used to spill GPU memory buffers directly to disk. GDS must be enabled and the directory spark.local.dir must support GDS. This is an experimental feature. For more information on GDS, see https://docs.nvidia.com/gpudirect-storage/ | false |
| spark.rapids.memory.gpu.maxAllocFraction | The fraction of total GPU memory that limits the maximum size of the RMM pool. The value must be greater than or equal to the setting for spark.rapids.memory.gpu.allocFraction. Note that this limit will be reduced by the reserve memory configured in spark.rapids.memory.gpu.reserve | 1.0 |
| spark.rapids.memory.gpu.minAllocFraction | The fraction of total GPU memory that limits the minimum size of the RMM pool. The value must be less than or equal to the setting for spark.rapids.memory.gpu.allocFraction | 0.25 |
| spark.rapids.memory.gpu.oomDumpDir | The path to a local directory where a heap dump will be created if the GPU encounters an unrecoverable out-of-memory (OOM) error. The filename will be of the form: "gpu-oom-<pid>.hprof", where <pid> is the process ID | None |
| spark.rapids.memory.gpu.pool | Select the RMM pooling allocator to use. Valid values are "DEFAULT", "ARENA", "ASYNC", and "NONE". With "DEFAULT", the RMM pool allocator is used; with "ARENA", the RMM arena allocator is used; with "ASYNC", the new CUDA stream-ordered memory allocator in CUDA 11.2+ is used. If set to "NONE", pooling is disabled and RMM just passes through to CUDA memory allocation directly | ARENA |
| spark.rapids.memory.gpu.pooling.enabled | Should RMM act as a pooling allocator for GPU memory, or should it just pass through to CUDA memory allocation directly. DEPRECATED: please use spark.rapids.memory.gpu.pool instead | true |
| spark.rapids.memory.gpu.reserve | The amount of GPU memory that should remain unallocated by RMM and left for system use such as memory needed for kernels and kernel launches | 671088640 |
| spark.rapids.memory.gpu.unspill.enabled | When a spilled GPU buffer is needed again, should it be unspilled, or only copied back into GPU memory temporarily. Unspilling may be useful for GPU buffers that are needed frequently, for example, broadcast variables; however, it may also increase GPU memory usage | false |
| spark.rapids.memory.host.pageablePool.size | The size of the pageable memory pool in bytes unless otherwise specified. Use 0 to disable the pool | 1073741824 |
| spark.rapids.memory.host.spillStorageSize | Amount of off-heap host memory to use for buffering spilled GPU data before spilling to local disk. Use -1 to set the amount to the combined size of pinned and pageable memory pools | -1 |
| spark.rapids.memory.pinnedPool.size | The size of the pinned memory pool in bytes unless otherwise specified. Use 0 to disable the pool | 0 |
| spark.rapids.python.concurrentPythonWorkers | Set the number of Python worker processes that can execute concurrently per GPU. Python worker processes may temporarily block when the number of concurrent Python worker processes started by the same executor exceeds this amount. Allowing too many concurrent tasks on the same GPU may lead to GPU out of memory errors. >0 means enabled, while <=0 means unlimited | 0 |
| spark.rapids.python.memory.gpu.allocFraction | The fraction of total GPU memory that should be initially allocated for pooled memory for all the Python workers. It should be less than (1 - $(spark.rapids.memory.gpu.allocFraction)), since the executor will share the GPU with its owning Python workers. Half of the rest will be used if not specified | None |
| spark.rapids.python.memory.gpu.maxAllocFraction | The fraction of total GPU memory that limits the maximum size of the RMM pool for all the Python workers. It should be less than (1 - $(spark.rapids.memory.gpu.maxAllocFraction)), since the executor will share the GPU with its owning Python workers. When set to 0 it means no limit | 0.0 |
| spark.rapids.python.memory.gpu.pooling.enabled | Should RMM in Python workers act as a pooling allocator for GPU memory, or should it just pass through to CUDA memory allocation directly. When not specified, it will honor the value of the config 'spark.rapids.memory.gpu.pooling.enabled' | None |
| spark.rapids.shuffle.enabled | Enable or disable the RAPIDS Shuffle Manager at runtime. The RAPIDS Shuffle Manager must already be configured. When set to false, the built-in Spark shuffle will be used | true |
| spark.rapids.shuffle.transport.earlyStart | Enable early connection establishment for RAPIDS Shuffle | true |
| spark.rapids.shuffle.transport.earlyStart.heartbeatInterval | Shuffle early start heartbeat interval (milliseconds). Executors will send a heartbeat RPC message to the driver at this interval | 5000 |
| spark.rapids.shuffle.transport.earlyStart.heartbeatTimeout | Shuffle early start heartbeat timeout (milliseconds). Executors that don't heartbeat within this timeout will be considered stale. This timeout must be higher than the value for spark.rapids.shuffle.transport.earlyStart.heartbeatInterval | 10000 |
| spark.rapids.shuffle.transport.maxReceiveInflightBytes | Maximum aggregate amount of bytes that can be fetched at any given time from peers during shuffle | 1073741824 |
| spark.rapids.shuffle.ucx.activeMessages.forceRndv | Set to true to force 'rndv' mode for all UCX Active Messages. This should only be required with UCX 1.10.x. UCX 1.11.x deployments should set to false | false |
| spark.rapids.shuffle.ucx.managementServerHost | The host to be used to start the management server | null |
| spark.rapids.shuffle.ucx.useWakeup | When set to true, use UCX's event-based progress (epoll) in order to wake up the progress thread when needed, instead of a hot loop | true |
| spark.rapids.sql.batchSizeBytes | Set the target number of bytes for a GPU batch. Split sizes for input data are covered by separate configs. The maximum setting is 2 GB to avoid exceeding the cudf row count limit of a column | 2147483647 |
| spark.rapids.sql.castDecimalToFloat.enabled | Casting from decimal to floating point types on the GPU returns results that have tiny differences compared to results returned from the CPU | false |
| spark.rapids.sql.castDecimalToString.enabled | When set to true, casting from decimal to string is supported on the GPU. The GPU does NOT produce the exact same string as Spark, but produces strings that are semantically equal. For instance, given input BigDecimal(123, -2), the GPU produces "12300", whereas Spark produces "1.23E+4" | false |
| spark.rapids.sql.castFloatToDecimal.enabled | Casting from floating point types to decimal on the GPU returns results that have tiny differences compared to results returned from the CPU | false |
| spark.rapids.sql.castFloatToIntegralTypes.enabled | Casting from floating point types to integral types on the GPU supports a slightly different range of values when using Spark 3.1.0 or later. Refer to the CAST documentation for more details | false |
| spark.rapids.sql.castFloatToString.enabled | Casting from floating point types to string on the GPU returns results that have a different precision than the default results of Spark | false |
| spark.rapids.sql.castStringToFloat.enabled | When set to true, enables casting from strings to float types (float, double) on the GPU. Currently hex values aren't supported on the GPU. Also note that casting from string to float types on the GPU returns incorrect results when the string represents any number "1.7976931348623158E308" <= x < "1.7976931348623159E308" or "-1.7976931348623158E308" >= x > "-1.7976931348623159E308"; in both cases the GPU returns Double.MaxValue while the CPU returns "+Infinity" and "-Infinity" respectively | false |
| spark.rapids.sql.castStringToTimestamp.enabled | When set to true, casting from string to timestamp is supported on the GPU. The GPU only supports a subset of formats when casting strings to timestamps. Refer to the CAST documentation for more details | false |
| spark.rapids.sql.concurrentGpuTasks | Set the number of tasks that can execute concurrently per GPU. Tasks may temporarily block when the number of concurrent tasks in the executor exceeds this amount. Allowing too many concurrent tasks on the same GPU may lead to GPU out of memory errors | 1 |
| spark.rapids.sql.csv.read.decimal.enabled | CSV reading is not 100% compatible when reading decimals | false |
| spark.rapids.sql.csv.read.double.enabled | CSV reading is not 100% compatible when reading doubles | false |
| spark.rapids.sql.csv.read.float.enabled | CSV reading is not 100% compatible when reading floats | false |
| spark.rapids.sql.decimalOverflowGuarantees | FOR TESTING ONLY. DO NOT USE IN PRODUCTION. Please see the decimal section of the compatibility documents for more information on this config | true |
| spark.rapids.sql.enabled | Enable (true) or disable (false) SQL operations on the GPU | true |
| spark.rapids.sql.explain | Explain why some parts of a query were not placed on the GPU. Possible values are ALL: print everything, NONE: print nothing, NOT_ON_GPU: print only parts of a query that did not go on the GPU | NONE |
| spark.rapids.sql.fast.sample | Option to turn on fast sampling. If enabled, results are inconsistent with the CPU sample because the GPU sampling algorithm differs from the CPU's | false |
| spark.rapids.sql.format.avro.enabled | When set to true enables all avro input and output acceleration (only input is currently supported) | false |
| spark.rapids.sql.format.avro.read.enabled | When set to true enables avro input acceleration | false |
| spark.rapids.sql.format.csv.enabled | When set to false disables all csv input and output acceleration (only input is currently supported) | true |
| spark.rapids.sql.format.csv.read.enabled | When set to false disables csv input acceleration | true |
| spark.rapids.sql.format.json.enabled | When set to true enables all json input and output acceleration (only input is currently supported) | false |
| spark.rapids.sql.format.json.read.enabled | When set to true enables json input acceleration | false |
| spark.rapids.sql.format.orc.enabled | When set to false disables all orc input and output acceleration | true |
| spark.rapids.sql.format.orc.multiThreadedRead.maxNumFilesParallel | A limit on the maximum number of files per task processed in parallel on the CPU side before the file is sent to the GPU. This affects the amount of host memory used when reading the files in parallel. Used with MULTITHREADED reader, see spark.rapids.sql.format.orc.reader.type | 2147483647 |
| spark.rapids.sql.format.orc.multiThreadedRead.numThreads | The maximum number of threads, on the executor, to use for reading small orc files in parallel. This cannot be changed at runtime after the executor has started. Used with MULTITHREADED reader, see spark.rapids.sql.format.orc.reader.type | 20 |
| spark.rapids.sql.format.orc.read.enabled | When set to false disables orc input acceleration | true |
| spark.rapids.sql.format.orc.reader.type | Sets the orc reader type. We support different types that are optimized for different environments. The original Spark style reader can be selected by setting this to PERFILE, which individually reads and copies files to the GPU. Loading many small files individually has high overhead, and using either COALESCING or MULTITHREADED is recommended instead. The COALESCING reader is good when using a local file system where the executors are on the same nodes or close to the nodes the data is being read on. This reader coalesces all the files assigned to a task into a single host buffer before sending it down to the GPU. It copies blocks from a single file into a host buffer in separate threads in parallel, see spark.rapids.sql.format.orc.multiThreadedRead.numThreads. MULTITHREADED is good for cloud environments where you are reading from a blobstore that is totally separate and likely has a higher I/O read cost. Many times the cloud environments also get better throughput when you have multiple readers in parallel. This reader uses multiple threads to read each file in parallel and each file is sent to the GPU separately. This allows the CPU to keep reading while the GPU is also doing work. See spark.rapids.sql.format.orc.multiThreadedRead.numThreads and spark.rapids.sql.format.orc.multiThreadedRead.maxNumFilesParallel to control the number of threads and amount of memory used. By default this is set to AUTO so we select the reader we think is best. This will either be COALESCING or MULTITHREADED based on whether we think the file is in the cloud. See spark.rapids.cloudSchemes | AUTO |
| spark.rapids.sql.format.orc.write.enabled | When set to false disables orc output acceleration | true |
| spark.rapids.sql.format.parquet.enabled | When set to false disables all parquet input and output acceleration | true |
| spark.rapids.sql.format.parquet.multiThreadedRead.maxNumFilesParallel | A limit on the maximum number of files per task processed in parallel on the CPU side before the file is sent to the GPU. This affects the amount of host memory used when reading the files in parallel. Used with MULTITHREADED reader, see spark.rapids.sql.format.parquet.reader.type | 2147483647 |
| spark.rapids.sql.format.parquet.multiThreadedRead.numThreads | The maximum number of threads, on the executor, to use for reading small parquet files in parallel. This cannot be changed at runtime after the executor has started. Used with COALESCING and MULTITHREADED reader, see spark.rapids.sql.format.parquet.reader.type | 20 |
| spark.rapids.sql.format.parquet.read.enabled | When set to false disables parquet input acceleration | true |
| spark.rapids.sql.format.parquet.reader.type | Sets the parquet reader type. We support different types that are optimized for different environments. The original Spark style reader can be selected by setting this to PERFILE, which individually reads and copies files to the GPU. Loading many small files individually has high overhead, and using either COALESCING or MULTITHREADED is recommended instead. The COALESCING reader is good when using a local file system where the executors are on the same nodes or close to the nodes the data is being read on. This reader coalesces all the files assigned to a task into a single host buffer before sending it down to the GPU. It copies blocks from a single file into a host buffer in separate threads in parallel, see spark.rapids.sql.format.parquet.multiThreadedRead.numThreads. MULTITHREADED is good for cloud environments where you are reading from a blobstore that is totally separate and likely has a higher I/O read cost. Many times the cloud environments also get better throughput when you have multiple readers in parallel. This reader uses multiple threads to read each file in parallel and each file is sent to the GPU separately. This allows the CPU to keep reading while the GPU is also doing work. See spark.rapids.sql.format.parquet.multiThreadedRead.numThreads and spark.rapids.sql.format.parquet.multiThreadedRead.maxNumFilesParallel to control the number of threads and amount of memory used. By default this is set to AUTO so we select the reader we think is best. This will either be COALESCING or MULTITHREADED based on whether we think the file is in the cloud. See spark.rapids.cloudSchemes | AUTO |
| spark.rapids.sql.format.parquet.write.enabled | When set to false disables parquet output acceleration | true |
| spark.rapids.sql.format.parquet.writer.int96.enabled | When set to false, disables accelerated parquet write if the spark.sql.parquet.outputTimestampType is set to INT96 | true |
| spark.rapids.sql.hasExtendedYearValues | Spark 3.2.0+ extended parsing of years in dates and timestamps to support the full range of possible values. Prior to this it was limited to a positive 4 digit year. The Accelerator does not support the extended range yet. This config indicates if your data includes this extended range or not, or if you don't care about getting the correct values on values with the extended range | true |
| spark.rapids.sql.hasNans | Config to indicate if your data has NaNs. cuDF doesn't currently support NaNs properly, so you can get corrupt data if you have NaNs in your data and it runs on the GPU | true |
| spark.rapids.sql.hashOptimizeSort.enabled | Whether sorts should be inserted after some hashed operations to improve output ordering. This can improve output file sizes when saving to columnar formats | false |
| spark.rapids.sql.improvedFloatOps.enabled | For some floating point operations Spark uses one way to compute the value and the underlying cudf implementation can use an improved algorithm. In some cases this can result in cudf producing an answer when Spark overflows. Because this is not as compatible with Spark, we have it disabled by default | false |
| spark.rapids.sql.improvedTimeOps.enabled | When set to true, some operators will avoid overflowing by converting epoch days directly to seconds without first converting to microseconds | false |
| spark.rapids.sql.incompatibleDateFormats.enabled | When parsing strings as dates and timestamps in functions like unix_timestamp, some formats are fully supported on the GPU and some are unsupported and will fall back to the CPU. Some formats behave differently on the GPU than the CPU. Spark on the CPU interprets date formats with unsupported trailing characters as nulls, while Spark on the GPU will parse the date with invalid trailing characters. More detail can be found at parsing strings as dates or timestamps | false |
| spark.rapids.sql.incompatibleOps.enabled | For operations that work, but are not 100% compatible with the Spark equivalent, set whether they should be enabled by default or disabled by default | false |
| spark.rapids.sql.join.cross.enabled | When set to true cross joins are enabled on the GPU | true |
| spark.rapids.sql.join.existence.enabled | When set to true existence joins are enabled on the GPU | true |
| spark.rapids.sql.join.fullOuter.enabled | When set to true full outer joins are enabled on the GPU | true |
| spark.rapids.sql.join.inner.enabled | When set to true inner joins are enabled on the GPU | true |
| spark.rapids.sql.join.leftAnti.enabled | When set to true left anti joins are enabled on the GPU | true |
| spark.rapids.sql.join.leftOuter.enabled | When set to true left outer joins are enabled on the GPU | true |
| spark.rapids.sql.join.leftSemi.enabled | When set to true left semi joins are enabled on the GPU | true |
| spark.rapids.sql.join.rightOuter.enabled | When set to true right outer joins are enabled on the GPU | true |
| spark.rapids.sql.json.read.decimal.enabled | JSON reading is not 100% compatible when reading decimals | false |
| spark.rapids.sql.json.read.double.enabled | JSON reading is not 100% compatible when reading doubles | false |
| spark.rapids.sql.json.read.float.enabled | JSON reading is not 100% compatible when reading floats | false |
| spark.rapids.sql.metrics.level | GPU plans can produce a lot more metrics than CPU plans do. In very large queries this can sometimes result in going over the max result size limit for the driver. Supported values include DEBUG, which will enable all metrics supported and typically only needs to be enabled when debugging the plugin; MODERATE, which should output enough metrics to understand how long each part of the query is taking and how much data is going to each part of the query; and ESSENTIAL, which disables most metrics except those Apache Spark CPU plans will also report or their equivalents | MODERATE |
| spark.rapids.sql.mode | Set the mode for the RAPIDS Accelerator. The supported modes are explainOnly and executeOnGPU. This config cannot be changed at runtime; you must restart the application for it to take effect. The default mode is executeOnGPU, which means the RAPIDS Accelerator plugin converts the Spark operations and executes them on the GPU when possible. The explainOnly mode allows running queries on the CPU while the RAPIDS Accelerator evaluates the queries as if it were going to run on the GPU. The explanations of what would have run on the GPU and why are output in log messages. When using explainOnly mode, the default explain output is ALL; this can be changed by setting spark.rapids.sql.explain. See that config for more details | executeongpu |
| spark.rapids.sql.python.gpu.enabled | This is an experimental feature and is likely to change in the future. Enable (true) or disable (false) support for scheduling Python Pandas UDFs with GPU resources. When enabled, pandas UDFs are assumed to share the same GPU that the RAPIDS accelerator uses and will honor the python GPU configs | false |
| spark.rapids.sql.reader.batchSizeBytes | Soft limit on the maximum number of bytes the reader reads per batch. The readers will read chunks of data until this limit is met or exceeded. Note that the reader may estimate the number of bytes that will be used on the GPU in some cases based on the schema and number of rows in each batch | 2147483647 |
| spark.rapids.sql.reader.batchSizeRows | Soft limit on the maximum number of rows the reader will read per batch. The orc and parquet readers will read row groups until this limit is met or exceeded. The limit is respected by the csv reader | 2147483647 |
| spark.rapids.sql.regexp.enabled | Specifies whether regular expressions should be evaluated on the GPU. Complex expressions can cause out of memory issues so this is disabled by default. Setting this config to true will make supported regular expressions run on the GPU. See the compatibility guide for more information about which regular expressions are supported on the GPU | false |
| spark.rapids.sql.replaceSortMergeJoin.enabled | Allow replacing sortMergeJoin with HashJoin | true |
| spark.rapids.sql.rowBasedUDF.enabled | When set to true, optimizes a row-based UDF in a GPU operation by transferring only the data it needs between GPU and CPU inside a query operation, instead of falling back to the CPU for the whole operation. This is an experimental feature, and this config might be removed in the future | false |
| spark.rapids.sql.shuffle.spillThreads | Number of threads used to spill shuffle data to disk in the background | 6 |
| spark.rapids.sql.stableSort.enabled | Enable or disable stable sorting. Apache Spark's sorting is typically a stable sort, but sort stability cannot be guaranteed in distributed work loads because the order in which upstream data arrives to a task is not guaranteed. Sort stability then only matters when reading and sorting data from a file using a single task/partition. Because of limitations in the plugin, when you enable stable sorting all of the data for a single task will be combined into a single batch before sorting. This currently disables spilling from GPU memory if the data size is too large | false |
| spark.rapids.sql.suppressPlanningFailure | Option to fall back an individual query to the CPU if an unexpected condition prevents the query plan from being converted to a GPU-enabled one. Note this is different from a normal CPU fallback for a yet-to-be-supported Spark SQL feature. If this happens the error should be reported and investigated as a GitHub issue | false |
| spark.rapids.sql.udfCompiler.enabled | When set to true, Scala UDFs will be considered for compilation as Catalyst expressions | false |
| spark.rapids.sql.variableFloatAgg.enabled | Spark assumes that all operations produce the exact same result each time. This is not true for some floating point aggregations, which can produce slightly different results on the GPU as the aggregation is done in parallel. This can enable those operations if you know the query is only computing it once | false |
| spark.rapids.sql.window.range.byte.enabled | When the order-by column of a range based window is byte type and the range boundary calculated for a value overflows, the CPU and GPU will get different results. When set to false disables the range window acceleration for the byte type order-by column | false |
| spark.rapids.sql.window.range.int.enabled | When the order-by column of a range based window is int type and the range boundary calculated for a value overflows, the CPU and GPU will get different results. When set to false disables the range window acceleration for the int type order-by column | true |
| spark.rapids.sql.window.range.long.enabled | When the order-by column of a range based window is long type and the range boundary calculated for a value overflows, the CPU and GPU will get different results. When set to false disables the range window acceleration for the long type order-by column | true |
| spark.rapids.sql.window.range.short.enabled | When the order-by column of a range based window is short type and the range boundary calculated for a value overflows, the CPU and GPU will get different results. When set to false disables the range window acceleration for the short type order-by column | false |
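To illustrate how several of the options above combine, here is a hedged runtime-tuning sketch for reading from a cloud store; the dataset path, column name, and values are hypothetical, not recommendations:

```scala
// Steer the Parquet reader toward the cloud-optimized path and cap batch size.
spark.conf.set("spark.rapids.sql.format.parquet.reader.type", "MULTITHREADED")
spark.conf.set("spark.rapids.sql.batchSizeBytes", 512L * 1024 * 1024) // 512 MiB target batches
spark.conf.set("spark.rapids.sql.explain", "NOT_ON_GPU")              // log whatever falls back

val df = spark.read.parquet("s3a://example-bucket/events") // hypothetical dataset
df.groupBy("event_date").count().show()                    // hypothetical column
```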

## Supported GPU Operators and Fine Tuning

The RAPIDS Accelerator for Apache Spark can be configured to enable or disable specific GPU accelerated expressions. Enabled expressions are candidates for GPU execution. If the expression is configured as disabled, the accelerator plugin will not attempt replacement, and it will run on the CPU.

Please leverage the spark.rapids.sql.explain setting to get feedback from the plugin as to why parts of a query may not be executing on the GPU.

NOTE: Setting spark.rapids.sql.incompatibleOps.enabled=true will enable all the settings in the table below which are not enabled by default due to incompatibilities.
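For example, a common workflow is to check the explain output first and opt in only if the documented differences are acceptable. A short sketch, where the `people` table is hypothetical:

```scala
// Surface the reason an operation stays on the CPU...
spark.conf.set("spark.rapids.sql.explain", "NOT_ON_GPU")
spark.sql("SELECT upper(name) FROM people").collect() // driver log explains the fallback

// ...then enable the not-100%-compatible operations if the differences are acceptable.
spark.conf.set("spark.rapids.sql.incompatibleOps.enabled", true)
```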

### Expressions

| Name | SQL Function(s) | Description | Default Value | Notes |
|------|-----------------|-------------|---------------|-------|
| spark.rapids.sql.expression.Abs | abs | Absolute value | true | None |
| spark.rapids.sql.expression.Acos | acos | Inverse cosine | true | None |
| spark.rapids.sql.expression.Acosh | acosh | Inverse hyperbolic cosine | true | None |
| spark.rapids.sql.expression.Add | + | Addition | true | None |
| spark.rapids.sql.expression.Alias |  | Gives a column a name | true | None |
| spark.rapids.sql.expression.And | and | Logical AND | true | None |
| spark.rapids.sql.expression.AnsiCast |  | Convert a column of one type of data into another type | true | None |
| spark.rapids.sql.expression.ArrayContains | array_contains | Returns a boolean if the array contains the passed in key | true | None |
| spark.rapids.sql.expression.ArrayExists | exists | Return true if any element satisfies the predicate LambdaFunction | true | None |
| spark.rapids.sql.expression.ArrayMax | array_max | Returns the maximum value in the array | true | None |
| spark.rapids.sql.expression.ArrayMin | array_min | Returns the minimum value in the array | true | None |
| spark.rapids.sql.expression.ArrayTransform | transform | Transform elements in an array using the transform function. This is similar to a map in functional programming | true | None |
| spark.rapids.sql.expression.Asin | asin | Inverse sine | true | None |
| spark.rapids.sql.expression.Asinh | asinh | Inverse hyperbolic sine | true | None |
| spark.rapids.sql.expression.AtLeastNNonNulls |  | Checks if number of non null/NaN values is greater than a given value | true | None |
| spark.rapids.sql.expression.Atan | atan | Inverse tangent | true | None |
| spark.rapids.sql.expression.Atanh | atanh | Inverse hyperbolic tangent | true | None |
| spark.rapids.sql.expression.AttributeReference |  | References an input column | true | None |
| spark.rapids.sql.expression.BRound | bround | Round an expression to d decimal places using HALF_EVEN rounding mode | true | None |
| spark.rapids.sql.expression.BitLength | bit_length | The bit length of string data | true | None |
| spark.rapids.sql.expression.BitwiseAnd | & | Returns the bitwise AND of the operands | true | None |
| spark.rapids.sql.expression.BitwiseNot | ~ | Returns the bitwise NOT of the operands | true | None |
| spark.rapids.sql.expression.BitwiseOr | \| | Returns the bitwise OR of the operands | true | None |
| spark.rapids.sql.expression.BitwiseXor | ^ | Returns the bitwise XOR of the operands | true | None |
| spark.rapids.sql.expression.CaseWhen | when | CASE WHEN expression | true | None |
| spark.rapids.sql.expression.Cast | timestamp, tinyint, binary, float, smallint, string, decimal, double, boolean, cast, date, int, bigint | Convert a column of one type of data into another type | true | None |
| spark.rapids.sql.expression.Cbrt | cbrt | Cube root | true | None |
| spark.rapids.sql.expression.Ceil | ceiling, ceil | Ceiling of a number | true | None |
| spark.rapids.sql.expression.CheckOverflow |  | CheckOverflow after arithmetic operations between DecimalType data | true | None |
| spark.rapids.sql.expression.Coalesce | coalesce | Returns the first non-null argument if exists. Otherwise, null | true | None |
| spark.rapids.sql.expression.Concat | concat | List/String concatenate | true | None |
| spark.rapids.sql.expression.ConcatWs | concat_ws | Concatenates multiple input strings or array of strings into a single string using a given separator | true | None |
| spark.rapids.sql.expression.Contains |  | Contains | true | None |
| spark.rapids.sql.expression.Cos | cos | Cosine | true | None |
| spark.rapids.sql.expression.Cosh | cosh | Hyperbolic cosine | true | None |
| spark.rapids.sql.expression.Cot | cot | Cotangent | true | None |
| spark.rapids.sql.expression.CreateArray | array | Returns an array with the given elements | true | None |
| spark.rapids.sql.expression.CreateMap | map | Create a map | true | None |
| spark.rapids.sql.expression.CreateNamedStruct | named_struct, struct | Creates a struct with the given field names and values | true | None |
| spark.rapids.sql.expression.CurrentRow$ |  | Special boundary for a window frame, indicating stopping at the current row | true | None |
| spark.rapids.sql.expression.DateAdd | date_add | Returns the date that is num_days after start_date | true | None |
| spark.rapids.sql.expression.DateAddInterval |  | Adds interval to date | true | None |
| spark.rapids.sql.expression.DateDiff | datediff | Returns the number of days from startDate to endDate | true | None |
| spark.rapids.sql.expression.DateFormatClass | date_format | Converts timestamp to a value of string in the format specified by the date format | true | None |
| spark.rapids.sql.expression.DateSub | date_sub | Returns the date that is num_days before start_date | true | None |
| spark.rapids.sql.expression.DayOfMonth | dayofmonth, day | Returns the day of the month from a date or timestamp | true | None |
| spark.rapids.sql.expression.DayOfWeek | dayofweek | Returns the day of the week (1 = Sunday...7 = Saturday) | true | None |
| spark.rapids.sql.expression.DayOfYear | dayofyear | Returns the day of the year from a date or timestamp | true | None |
| spark.rapids.sql.expression.DenseRank | dense_rank | Window function that returns the dense rank value within the aggregation window | true | None |
| spark.rapids.sql.expression.Divide | / | Division | true | None |
| spark.rapids.sql.expression.ElementAt | element_at | Returns element of array at given (1-based) index in value if column is array; returns value for the given key in value if column is map | true | None |
| spark.rapids.sql.expression.EndsWith |  | Ends with | true | None |
| spark.rapids.sql.expression.EqualNullSafe | <=> | Check if the values are equal including nulls | true | None |
| spark.rapids.sql.expression.EqualTo | =, == | Check if the values are equal | true | None |
| spark.rapids.sql.expression.Exp | exp | Euler's number e raised to a power | true | None |
| spark.rapids.sql.expression.Explode | explode, explode_outer | Given an input array produces a sequence of rows for each value in the array | true | None |
| spark.rapids.sql.expression.Expm1 | expm1 | Euler's number e raised to a power minus 1 | true | None |
| spark.rapids.sql.expression.Floor | floor | Floor of a number | true | None |
| spark.rapids.sql.expression.FromUnixTime | from_unixtime | Get the string from a unix timestamp | true | None |
| spark.rapids.sql.expression.GetArrayItem |  | Gets the field at ordinal in the Array | true | None |
| spark.rapids.sql.expression.GetArrayStructFields |  | Extracts the ordinal-th fields of all array elements for the data with the type of array of struct | true | None |
| spark.rapids.sql.expression.GetJsonObject | get_json_object | Extracts a json object from path | true | None |
| spark.rapids.sql.expression.GetMapValue |  | Gets Value from a Map based on a key | true | None |
| spark.rapids.sql.expression.GetStructField |  | Gets the named field of the struct | true | None |
| spark.rapids.sql.expression.GetTimestamp |  | Gets timestamps from strings using given pattern | true | None |
| spark.rapids.sql.expression.GreaterThan | > | > operator | true | None |
| spark.rapids.sql.expression.GreaterThanOrEqual | >= | >= operator | true | None |
| spark.rapids.sql.expression.Greatest | greatest | Returns the greatest value of all parameters, skipping null values | true | None |
| spark.rapids.sql.expression.Hour | hour | Returns the hour component of the string/timestamp | true | None |
| spark.rapids.sql.expression.Hypot | hypot | Pythagorean addition (hypotenuse) of real numbers | true | None |
| spark.rapids.sql.expression.If | if | IF expression | true | None |
| spark.rapids.sql.expression.In | in | IN operator | true | None |
| spark.rapids.sql.expression.InSet |  | INSET operator | true | None |
| spark.rapids.sql.expression.InitCap | initcap | Returns str with the first letter of each word in uppercase. All other letters are in lowercase | false | This is not 100% compatible with the Spark version because the Unicode version used by cuDF and the JVM may differ, resulting in some corner-case characters not changing case correctly |
| spark.rapids.sql.expression.InputFileBlockLength | input_file_block_length | Returns the length of the block being read, or -1 if not available | true | None |
| spark.rapids.sql.expression.InputFileBlockStart | input_file_block_start | Returns the start offset of the block being read, or -1 if not available | true | None |
| spark.rapids.sql.expression.InputFileName | input_file_name | Returns the name of the file being read, or empty string if not available | true | None |
| spark.rapids.sql.expression.IntegralDivide | div | Division with an integer result | true | None |
| spark.rapids.sql.expression.IsNaN | isnan | Checks if a value is NaN | true | None |
| spark.rapids.sql.expression.IsNotNull | isnotnull | Checks if a value is not null | true | None |
| spark.rapids.sql.expression.IsNull | isnull | Checks if a value is null | true | None |
| spark.rapids.sql.expression.KnownFloatingPointNormalized |  | Tag to prevent redundant normalization | true | None |
| spark.rapids.sql.expression.KnownNotNull |  | Tag an expression as known to not be null | true | None |
| spark.rapids.sql.expression.Lag | lag | Window function that returns N entries behind this one | true | None |
| spark.rapids.sql.expression.LambdaFunction |  | Holds a higher order SQL function | true | None |
| spark.rapids.sql.expression.LastDay | last_day | Returns the last day of the month which the date belongs to | true | None |
| spark.rapids.sql.expression.Lead | lead | Window function that returns N entries ahead of this one | true | None |
| spark.rapids.sql.expression.Least | least | Returns the least value of all parameters, skipping null values | true | None |
| spark.rapids.sql.expression.Length | length, character_length, char_length | String character length or binary byte length | true | None |
| spark.rapids.sql.expression.LessThan | < | < operator | true | None |
| spark.rapids.sql.expression.LessThanOrEqual | <= | <= operator | true | None |
| spark.rapids.sql.expression.Like | like | Like | true | None |
| spark.rapids.sql.expression.Literal |  | Holds a static value from the query | true | None |
| spark.rapids.sql.expression.Log | ln | Natural log | true | None |
| spark.rapids.sql.expression.Log10 | log10 | Log base 10 | true | None |
| spark.rapids.sql.expression.Log1p | log1p | Natural log 1 + expr | true | None |
| spark.rapids.sql.expression.Log2 | log2 | Log base 2 | true | None |
| spark.rapids.sql.expression.Logarithm | log | Log variable base | true | None |
| spark.rapids.sql.expression.Lower | lower, lcase | String lowercase operator | false | This is not 100% compatible with the Spark version because the Unicode version used by cuDF and the JVM may differ, resulting in some corner-case characters not changing case correctly |
| spark.rapids.sql.expression.MakeDecimal |  | Create a Decimal from an unscaled long value for some aggregation optimizations | true | None |
| spark.rapids.sql.expression.MapEntries | map_entries | Returns an unordered array of all entries in the given map | true | None |
| spark.rapids.sql.expression.MapKeys | map_keys | Returns an unordered array containing the keys of the map | true | None |
| spark.rapids.sql.expression.MapValues | map_values | Returns an unordered array containing the values of the map | true | None |
| spark.rapids.sql.expression.Md5 | md5 | MD5 hash operator | true | None |
| spark.rapids.sql.expression.Minute | minute | Returns the minute component of the string/timestamp | true | None |
| spark.rapids.sql.expression.MonotonicallyIncreasingID | monotonically_increasing_id | Returns monotonically increasing 64-bit integers | true | None |
| spark.rapids.sql.expression.Month | month | Returns the month from a date or timestamp | true | None |
| spark.rapids.sql.expression.Multiply | * | Multiplication | true | None |
| spark.rapids.sql.expression.Murmur3Hash | hash | Murmur3 hash operator | true | None |
| spark.rapids.sql.expression.NaNvl | nanvl | Evaluates to left iff left is not NaN, right otherwise | true | None |
| spark.rapids.sql.expression.NamedLambdaVariable |  | A parameter to a higher order SQL function | true | None |
| spark.rapids.sql.expression.Not | !, not | Boolean not operator | true | None |
| spark.rapids.sql.expression.OctetLength | octet_length | The byte length of string data | true | None |
| spark.rapids.sql.expression.Or | or | Logical OR | true | None |
| spark.rapids.sql.expression.PercentRank | percent_rank | Window function that returns the percent rank value within the aggregation window | true | None |
| spark.rapids.sql.expression.Pmod | pmod | Pmod | true | None |
| spark.rapids.sql.expression.PosExplode | posexplode_outer, posexplode | Given an input array produces a sequence of rows for each value in the array | true | None |
| spark.rapids.sql.expression.Pow | pow, power | lhs ^ rhs | true | None |
| spark.rapids.sql.expression.PreciseTimestampConversion |  | Expression used internally to convert the TimestampType to Long and back without losing precision, i.e. in microseconds. Used in time windowing | true | None |
| spark.rapids.sql.expression.PromotePrecision |  | PromotePrecision before arithmetic operations between DecimalType data | true | None |
| spark.rapids.sql.expression.PythonUDF |  | UDF run in an external python process. Does not actually run on the GPU, but the transfer of data to/from it can be accelerated | true | None |
| spark.rapids.sql.expression.Quarter | quarter | Returns the quarter of the year for date, in the range 1 to 4 | true | None |
| spark.rapids.sql.expression.RLike | rlike | Regular expression version of Like | true | None |
| spark.rapids.sql.expression.Rand | random, rand | Generate a random column with i.i.d. uniformly distributed values in [0, 1) | true | None |
| spark.rapids.sql.expression.Rank | rank | Window function that returns the rank value within the aggregation window | true | None |
| spark.rapids.sql.expression.RegExpExtract | regexp_extract | Extract a specific group identified by a regular expression | true | None |
| spark.rapids.sql.expression.RegExpReplace | regexp_replace | String replace using a regular expression pattern | true | None |
| spark.rapids.sql.expression.Remainder | %, mod | Remainder or modulo | true | None |
| spark.rapids.sql.expression.ReplicateRows |  | Given an input row replicates the row N times | true | None |
| spark.rapids.sql.expression.Rint | rint | Rounds a double value to the nearest double equal to an integer | true | None |
| spark.rapids.sql.expression.Round | round | Round an expression to d decimal places using HALF_UP rounding mode | true | None |
| spark.rapids.sql.expression.RowNumber | row_number | Window function that returns the index for the row within the aggregation window | true | None |
| spark.rapids.sql.expression.ScalaUDF |  | User Defined Function; the UDF can choose to implement a RAPIDS accelerated interface to get better performance | true | None |
| spark.rapids.sql.expression.Second | second | Returns the second component of the string/timestamp | true | None |
| spark.rapids.sql.expression.Sequence | sequence | Sequence | true | None |
| spark.rapids.sql.expression.ShiftLeft | shiftleft | Bitwise shift left (<<) | true | None |
| spark.rapids.sql.expression.ShiftRight | shiftright | Bitwise shift right (>>) | true | None |
| spark.rapids.sql.expression.ShiftRightUnsigned | shiftrightunsigned | Bitwise unsigned shift right (>>>) | true | None |
| spark.rapids.sql.expression.Signum | sign, signum | Returns -1.0, 0.0 or 1.0 as expr is negative, 0 or positive | true | None |
| spark.rapids.sql.expression.Sin | sin | Sine | true | None |
| spark.rapids.sql.expression.Sinh | sinh | Hyperbolic sine | true | None |
| spark.rapids.sql.expression.Size | size, cardinality | The size of an array or a map | true | None |
| spark.rapids.sql.expression.SortArray | sort_array | Returns a sorted array with the input array and the ascending / descending order | true | None |
| spark.rapids.sql.expression.SortOrder |  | Sort order | true | None |
| spark.rapids.sql.expression.SparkPartitionID | spark_partition_id | Returns the current partition id | true | None |
| spark.rapids.sql.expression.SpecifiedWindowFrame |  | Specification of the width of the group (or "frame") of input rows around which a window function is evaluated | true | None |
| spark.rapids.sql.expression.Sqrt | sqrt | Square root | true | None |
| spark.rapids.sql.expression.StartsWith |  | Starts with | true | None |
| spark.rapids.sql.expression.StringLPad | lpad | Pad a string on the left | true | None |
| spark.rapids.sql.expression.StringLocate | position, locate | Substring search operator | true | None |
| spark.rapids.sql.expression.StringRPad | rpad | Pad a string on the right | true | None |
| spark.rapids.sql.expression.StringRepeat | repeat | StringRepeat operator that repeats the given strings the number of times given by repeatTimes | true | None |
| spark.rapids.sql.expression.StringReplace | replace | StringReplace operator | true | None |
| spark.rapids.sql.expression.StringSplit | split | Splits str around occurrences that match regex | true | None |
| spark.rapids.sql.expression.StringToMap | str_to_map | Creates a map after splitting the input string into pairs of key-value strings | true | None |
| spark.rapids.sql.expression.StringTrim | trim | StringTrim operator | true | None |
| spark.rapids.sql.expression.StringTrimLeft | ltrim | StringTrimLeft operator | true | None |
| spark.rapids.sql.expression.StringTrimRight | rtrim | StringTrimRight operator | true | None |
| spark.rapids.sql.expression.Substring | substr, substring | Substring operator | true | None |
| spark.rapids.sql.expression.SubstringIndex | substring_index | substring_index operator | true | None |
| spark.rapids.sql.expression.Subtract | - | Subtraction | true | None |
| spark.rapids.sql.expression.Tan | tan | Tangent | true | None |
| spark.rapids.sql.expression.Tanh | tanh | Hyperbolic tangent | true | None |
| spark.rapids.sql.expression.TimeAdd |  | Adds interval to timestamp | true | None |
| spark.rapids.sql.expression.ToDegrees | degrees | Converts radians to degrees | true | None |
| spark.rapids.sql.expression.ToRadians | radians | Converts degrees to radians | true | None |
| spark.rapids.sql.expression.ToUnixTimestamp | to_unix_timestamp | Returns the UNIX timestamp of the given time | true | None |
| spark.rapids.sql.expression.TransformKeys | transform_keys | Transform keys in a map using a transform function | true | None |
| spark.rapids.sql.expression.TransformValues | transform_values | Transform values in a map using a transform function | true | None |
| spark.rapids.sql.expression.UnaryMinus | negative | Negate a numeric value | true | None |
| spark.rapids.sql.expression.UnaryPositive | positive | A numeric value with a + in front of it | true | None |
| spark.rapids.sql.expression.UnboundedFollowing$ |  | Special boundary for a window frame, indicating all rows following the current row | true | None |
| spark.rapids.sql.expression.UnboundedPreceding$ |  | Special boundary for a window frame, indicating all rows preceding the current row | true | None |
| spark.rapids.sql.expression.UnixTimestamp | unix_timestamp | Returns the UNIX timestamp of current or specified time | true | None |
| spark.rapids.sql.expression.UnscaledValue |  | Convert a Decimal to an unscaled long value for some aggregation optimizations | true | None |
| spark.rapids.sql.expression.Upper | upper, ucase | String uppercase operator | false | This is not 100% compatible with the Spark version because the Unicode version used by cuDF and the JVM may differ, resulting in some corner-case characters not changing case correctly |
| spark.rapids.sql.expression.WeekDay | weekday | Returns the day of the week (0 = Monday...6 = Sunday) | true | None |
| spark.rapids.sql.expression.WindowExpression |  | Calculates a return value for every input row of a table based on a group (or "window") of rows | true | None |
| spark.rapids.sql.expression.WindowSpecDefinition |  | Specification of a window function, indicating the partitioning-expression, the row ordering, and the width of the window | true | None |
| spark.rapids.sql.expression.Year | year | Returns the year from a date or timestamp | true | None |
| spark.rapids.sql.expression.AggregateExpression |  | Aggregate expression | true | None |
| spark.rapids.sql.expression.ApproximatePercentile | percentile_approx, approx_percentile | Approximate percentile | false | This is not 100% compatible with the Spark version because the GPU implementation of approx_percentile is not bit-for-bit compatible with Apache Spark. To enable it, set spark.rapids.sql.incompatibleOps.enabled |
| spark.rapids.sql.expression.Average | avg, mean | Average aggregate operator | true | None |
| spark.rapids.sql.expression.CollectList | collect_list | Collect a list of non-unique elements, not supported in reduction | true | None |
| spark.rapids.sql.expression.CollectSet | collect_set | Collect a set of unique elements, not supported in reduction | true | None |
| spark.rapids.sql.expression.Count | count | Count aggregate operator | true | None |
| spark.rapids.sql.expression.First | first_value, first | first aggregate operator | true | None |
| spark.rapids.sql.expression.Last | last, last_value | last aggregate operator | true | None |
| spark.rapids.sql.expression.Max | max | Max aggregate operator | true | None |
| spark.rapids.sql.expression.Min | min | Min aggregate operator | true | None |
| spark.rapids.sql.expression.PivotFirst |  | PivotFirst operator | true | None |
| spark.rapids.sql.expression.StddevPop | stddev_pop | Aggregation computing population standard deviation | true | None |
| spark.rapids.sql.expression.StddevSamp | stddev_samp, std, stddev | Aggregation computing sample standard deviation | true | None |
| spark.rapids.sql.expression.Sum | sum | Sum aggregate operator | true | None |
| spark.rapids.sql.expression.VariancePop | var_pop | Aggregation computing population variance | true | None |
| spark.rapids.sql.expression.VarianceSamp | var_samp, variance | Aggregation computing sample variance | true | None |
| spark.rapids.sql.expression.NormalizeNaNAndZero |  | Normalize NaN and zero | true | None |
| spark.rapids.sql.expression.ScalarSubquery |  | Subquery that will return only one row and one column | true | None |
| spark.rapids.sql.expression.HiveGenericUDF |  | Hive Generic UDF; the UDF can choose to implement a RAPIDS accelerated interface to get better performance | true | None |
| spark.rapids.sql.expression.HiveSimpleUDF |  | Hive UDF; the UDF can choose to implement a RAPIDS accelerated interface to get better performance | true | None |
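Each expression can also be toggled individually through its config key, which is finer grained than `spark.rapids.sql.incompatibleOps.enabled`. A sketch; whether these trade-offs suit your data is an assumption you must verify:

```scala
// Accept the cuDF/JVM Unicode corner cases for upper-casing only.
spark.conf.set("spark.rapids.sql.expression.Upper", true)
// Force regular-expression replacement to always run on the CPU.
spark.conf.set("spark.rapids.sql.expression.RegExpReplace", false)
```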

### Execution

| Name | Description | Default Value | Notes |
|------|-------------|---------------|-------|
| spark.rapids.sql.exec.CoalesceExec | The backend for the dataframe coalesce method | true | None |
| spark.rapids.sql.exec.CollectLimitExec | Reduce to single partition and apply limit | false | This is disabled by default because Collect Limit replacement can be slower on the GPU; when there are a huge number of rows in a batch it could help by limiting the number of rows transferred from GPU to CPU |
| spark.rapids.sql.exec.ExpandExec | The backend for the expand operator | true | None |
| spark.rapids.sql.exec.FileSourceScanExec | Reading data from files, often from Hive tables | true | None |
| spark.rapids.sql.exec.FilterExec | The backend for most filter statements | true | None |
| spark.rapids.sql.exec.GenerateExec | The backend for operations that generate more output rows than input rows like explode | true | None |
| spark.rapids.sql.exec.GlobalLimitExec | Limiting of results across partitions | true | None |
| spark.rapids.sql.exec.LocalLimitExec | Per-partition limiting of results | true | None |
| spark.rapids.sql.exec.ProjectExec | The backend for most select, withColumn and dropColumn statements | true | None |
| spark.rapids.sql.exec.RangeExec | The backend for the range operator | true | None |
| spark.rapids.sql.exec.SampleExec | The backend for the sample operator | true | None |
| spark.rapids.sql.exec.SortExec | The backend for the sort operator | true | None |
| spark.rapids.sql.exec.SubqueryBroadcastExec | Plan to collect and transform the broadcast key values | true | None |
| spark.rapids.sql.exec.TakeOrderedAndProjectExec | Take the first limit elements as defined by the sortOrder, and do projection if needed | true | None |
| spark.rapids.sql.exec.UnionExec | The backend for the union operator | true | None |
| spark.rapids.sql.exec.CustomShuffleReaderExec | A wrapper of shuffle query stage | true | None |
| spark.rapids.sql.exec.HashAggregateExec | The backend for hash based aggregations | true | None |
| spark.rapids.sql.exec.ObjectHashAggregateExec | The backend for hash based aggregations supporting TypedImperativeAggregate functions | true | None |
| spark.rapids.sql.exec.SortAggregateExec | The backend for sort based aggregations | true | None |
| spark.rapids.sql.exec.InMemoryTableScanExec | Implementation of InMemoryTableScanExec to use GPU accelerated caching | true | None |
| spark.rapids.sql.exec.DataWritingCommandExec | Writing data | true | None |
| spark.rapids.sql.exec.BatchScanExec | The backend for most file input | true | None |
| spark.rapids.sql.exec.BroadcastExchangeExec | The backend for broadcast exchange of data | true | None |
| spark.rapids.sql.exec.ShuffleExchangeExec | The backend for most data being exchanged between processes | true | None |
| spark.rapids.sql.exec.BroadcastHashJoinExec | Implementation of join using broadcast data | true | None |
| spark.rapids.sql.exec.BroadcastNestedLoopJoinExec | Implementation of join using brute force. Full outer joins and joins where the broadcast side matches the join side (e.g.: LeftOuter with left broadcast) are not supported | true | None |
| spark.rapids.sql.exec.CartesianProductExec | Implementation of join using brute force | true | None |
| spark.rapids.sql.exec.ShuffledHashJoinExec | Implementation of join using hashed shuffled data | true | None |
| spark.rapids.sql.exec.SortMergeJoinExec | Sort merge join, replacing with shuffled hash join | true | None |
| spark.rapids.sql.exec.AggregateInPandasExec | The backend for an Aggregation Pandas UDF; this accelerates the data transfer between the Java process and the Python process. It also supports scheduling GPU resources for the Python process when enabled | true | None |
| spark.rapids.sql.exec.ArrowEvalPythonExec | The backend of the Scalar Pandas UDFs. Accelerates the data transfer between the Java process and the Python process. It also supports scheduling GPU resources for the Python process when enabled | true | None |
| spark.rapids.sql.exec.FlatMapCoGroupsInPandasExec | The backend for CoGrouped Aggregation Pandas UDF; it runs on the CPU itself now but supports scheduling GPU resources for the Python process when enabled | false | This is disabled by default because performance is not ideal now |
| spark.rapids.sql.exec.FlatMapGroupsInPandasExec | The backend for Flat Map Groups Pandas UDF. Accelerates the data transfer between the Java process and the Python process. It also supports scheduling GPU resources for the Python process when enabled | true | None |
| spark.rapids.sql.exec.MapInPandasExec | The backend for Map Pandas Iterator UDF. Accelerates the data transfer between the Java process and the Python process. It also supports scheduling GPU resources for the Python process when enabled | true | None |
| spark.rapids.sql.exec.WindowInPandasExec | The backend for Window Aggregation Pandas UDF. Accelerates the data transfer between the Java process and the Python process. It also supports scheduling GPU resources for the Python process when enabled. For now it only supports row based window frames | false | This is disabled by default because it only supports row based frames for now |
| spark.rapids.sql.exec.WindowExec | Window-operator backend | true | None |
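Execs follow the same per-operator pattern as expressions. For example, a sketch that keeps sorts on the CPU while leaving the rest of a plan eligible for the GPU:

```scala
// Disabling a single exec forces only that operator back to the CPU;
// the surrounding plan can still run on the GPU.
spark.conf.set("spark.rapids.sql.exec.SortExec", false)
```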

### Scans

| Name | Description | Default Value | Notes |
|------|-------------|---------------|-------|
| spark.rapids.sql.input.CSVScan | CSV parsing | true | None |
| spark.rapids.sql.input.JsonScan | Json parsing | true | None |
| spark.rapids.sql.input.OrcScan | ORC parsing | true | None |
| spark.rapids.sql.input.ParquetScan | Parquet parsing | true | None |
| spark.rapids.sql.input.AvroScan | Avro parsing | true | None |

### Partitioning

| Name | Description | Default Value | Notes |
|------|-------------|---------------|-------|
| spark.rapids.sql.partitioning.HashPartitioning | Hash based partitioning | true | None |
| spark.rapids.sql.partitioning.RangePartitioning | Range partitioning | true | None |
| spark.rapids.sql.partitioning.RoundRobinPartitioning | Round robin partitioning | true | None |
| spark.rapids.sql.partitioning.SinglePartition$ | Single partitioning | true | None |