
Commit

Merge remote-tracking branch 'upstream/branch-21.10' into timeadd
wbo4958 committed Sep 16, 2021
2 parents 347c6b8 + e24bf78 commit b0096ce
Showing 27 changed files with 620 additions and 491 deletions.
4 changes: 4 additions & 0 deletions CONTRIBUTING.md
@@ -295,5 +295,9 @@ Ref: [spark-premerge-build.sh](jenkins/spark-premerge-build.sh)
If it fails, you can click the `Details` link of this check, and go to `Upload log -> Jenkins log for pull request xxx (click here)` to
find the uploaded log.

Options:
1. Skip the tests by adding `[skip ci]` to the PR title; this should only be used for doc-only changes
2. Run the build and tests on Databricks runtimes by adding `[databricks]` to the PR title; this adds around 30-40 minutes

## Attribution
Portions adopted from https://github.com/rapidsai/cudf/blob/main/CONTRIBUTING.md, https://github.com/NVIDIA/nvidia-docker/blob/main/CONTRIBUTING.md, and https://github.com/NVIDIA/DALI/blob/main/CONTRIBUTING.md
3 changes: 1 addition & 2 deletions docs/compatibility.md
@@ -539,8 +539,7 @@ Casting from string to timestamp currently has the following limitations.
| `"tomorrow"` | Yes |
| `"yesterday"` | Yes |

- <a name="Footnote1"></a>[1] The timestamp portion must be complete in terms of hours, minutes, seconds, and
- milliseconds, with 2 digits each for hours, minutes, and seconds, and 6 digits for milliseconds.
+ <a name="Footnote1"></a>[1] The timestamp portion must have 6 digits for milliseconds.
Only timezone 'Z' (UTC) is supported. Casting unsupported formats will result in null values.
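As an illustration of the constraint above, here is a hypothetical check. This regex is not the plugin's actual parser, only a sketch of the documented rule: a fully specified date and time, 6 fractional digits, and at most a 'Z' (UTC) timezone suffix.

```python
import re

# Hypothetical sketch of the documented rule, NOT the plugin's real parser:
# full date and time, 6 fractional digits, and only 'Z' (UTC) as a timezone.
FULL_TIMESTAMP = re.compile(
    r"^\d{4}-\d{2}-\d{2}[ T]\d{2}:\d{2}:\d{2}\.\d{6}Z?$"
)

def gpu_castable(ts: str) -> bool:
    """Return True if the string matches the fully specified form."""
    return FULL_TIMESTAMP.match(ts) is not None

print(gpu_castable("2021-09-16 12:34:56.123456Z"))  # True
print(gpu_castable("2021-09-16 12:34:56.123Z"))     # False: only 3 fractional digits
```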

Spark is very lenient when casting from string to timestamp because all date and time components
19 changes: 1 addition & 18 deletions docs/configs.md
@@ -40,7 +40,7 @@ Name | Description | Default Value
<a name="memory.gpu.oomDumpDir"></a>spark.rapids.memory.gpu.oomDumpDir|The path to a local directory where a heap dump will be created if the GPU encounters an unrecoverable out-of-memory (OOM) error. The filename will be of the form: "gpu-oom-<pid>.hprof" where <pid> is the process ID.|None
<a name="memory.gpu.pool"></a>spark.rapids.memory.gpu.pool|Select the RMM pooling allocator to use. Valid values are "DEFAULT", "ARENA", "ASYNC", and "NONE". With "DEFAULT", the RMM pool allocator is used; with "ARENA", the RMM arena allocator is used; with "ASYNC", the new CUDA stream-ordered memory allocator in CUDA 11.2+ is used. If set to "NONE", pooling is disabled and RMM just passes through to CUDA memory allocation directly. Note: "ARENA" is the recommended pool allocator if CUDF is built with Per-Thread Default Stream (PTDS), as "DEFAULT" is known to be unstable (https://github.com/NVIDIA/spark-rapids/issues/1141)|ARENA
<a name="memory.gpu.pooling.enabled"></a>spark.rapids.memory.gpu.pooling.enabled|Should RMM act as a pooling allocator for GPU memory, or should it just pass through to CUDA memory allocation directly. DEPRECATED: please use spark.rapids.memory.gpu.pool instead.|true
- <a name="memory.gpu.reserve"></a>spark.rapids.memory.gpu.reserve|The amount of GPU memory that should remain unallocated by RMM and left for system use such as memory needed for kernels, kernel launches or JIT compilation.|1073741824
+ <a name="memory.gpu.reserve"></a>spark.rapids.memory.gpu.reserve|The amount of GPU memory that should remain unallocated by RMM and left for system use such as memory needed for kernels and kernel launches.|1073741824
<a name="memory.gpu.unspill.enabled"></a>spark.rapids.memory.gpu.unspill.enabled|When a spilled GPU buffer is needed again, should it be unspilled, or only copied back into GPU memory temporarily. Unspilling may be useful for GPU buffers that are needed frequently, for example, broadcast variables; however, it may also increase GPU memory usage|false
<a name="memory.host.spillStorageSize"></a>spark.rapids.memory.host.spillStorageSize|Amount of off-heap host memory to use for buffering spilled GPU data before spilling to local disk|1073741824
<a name="memory.pinnedPool.size"></a>spark.rapids.memory.pinnedPool.size|The size of the pinned memory pool in bytes unless otherwise specified. Use 0 to disable the pool.|0
@@ -378,20 +378,3 @@ Name | Description | Default Value | Notes
<a name="sql.partitioning.RangePartitioning"></a>spark.rapids.sql.partitioning.RangePartitioning|Range partitioning|true|None|
<a name="sql.partitioning.RoundRobinPartitioning"></a>spark.rapids.sql.partitioning.RoundRobinPartitioning|Round robin partitioning|true|None|
<a name="sql.partitioning.SinglePartition$"></a>spark.rapids.sql.partitioning.SinglePartition$|Single partitioning|true|None|
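As a hedged sketch, several of the memory settings from the table above could be supplied at submit time like this; the values and the application jar name are placeholders, not recommendations:

```shell
spark-submit \
  --conf spark.rapids.memory.gpu.pool=ARENA \
  --conf spark.rapids.memory.gpu.reserve=1073741824 \
  --conf spark.rapids.memory.pinnedPool.size=2147483648 \
  --conf spark.rapids.memory.host.spillStorageSize=1073741824 \
  my-app.jar   # placeholder application jar
```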

### JIT Kernel Cache Path

CUDF can compile GPU kernels at runtime using a just-in-time (JIT) compiler. The
resulting kernels are cached on the filesystem. The default location for this cache is
under the `.cudf` directory in the user's home directory. In environments where the
user's home directory is not writable, such as a container running on a cluster, the
JIT cache path must be specified explicitly with the `LIBCUDF_KERNEL_CACHE_PATH`
environment variable.
The specified kernel cache path should be specific to the user to avoid conflicts with
others running on the same host. For example, the following would specify the path to a
user-specific location under `/tmp`:

```
--conf spark.executorEnv.LIBCUDF_KERNEL_CACHE_PATH="/tmp/cudf-$USER"
```
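When preparing such a per-user path outside of Spark configs, assuming a standard shell, the directory can also be created up front (the path is illustrative, mirroring the example above):

```shell
# Create a per-user kernel cache directory before launching Spark.
export LIBCUDF_KERNEL_CACHE_PATH="/tmp/cudf-$USER"
mkdir -p "$LIBCUDF_KERNEL_CACHE_PATH"
```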

