Commit
Updated doc for decimal support (#4750)
* Updated doc for decimal

Signed-off-by: Hao Zhu <hazhu@nvidia.com>

* Update docs/supported_ops.md

Co-authored-by: Sameer Raheja <sameerz@users.noreply.github.com>

* Update docs/supported_ops.md

Co-authored-by: Sameer Raheja <sameerz@users.noreply.github.com>

* Update docs/supported_ops.md

Co-authored-by: Sameer Raheja <sameerz@users.noreply.github.com>

* Modify TypeChecks.scala as well

Signed-off-by: Hao Zhu <hazhu@nvidia.com>

* fix 100 char limit

Signed-off-by: Hao Zhu <hazhu@nvidia.com>

Co-authored-by: Sameer Raheja <sameerz@users.noreply.github.com>
viadea and sameerz authored Feb 13, 2022
1 parent 5461368 commit f3ab6ae
Showing 2 changed files with 12 additions and 16 deletions.
14 changes: 6 additions & 8 deletions docs/supported_ops.md

````diff
@@ -14,14 +14,12 @@ apply to other versions of Spark, but there may be slight changes.
 
 # General limitations
 ## `Decimal`
-The `Decimal` type in Spark supports a precision
-up to 38 digits (128-bits). The RAPIDS Accelerator in most cases stores values up to
-64-bits and will support 128-bit in the future. As such the accelerator currently only
-supports a precision up to 18 digits. Note that
-decimals are disabled by default in the plugin, because it is supported by a relatively
-small number of operations presently. This can result in a lot of data movement to and
-from the GPU, slowing down processing in some cases.
-Result `Decimal` precision and scale follow the same rule as CPU mode in Apache Spark:
+The `Decimal` type in Spark supports a precision up to 38 digits (128-bits).
+The RAPIDS Accelerator supports 128-bit starting from version 21.12 and decimals are
+enabled by default.
+Please check [Decimal Support](compatibility.md#decimal-support) for more details.
+
+`Decimal` precision and scale follow the same rule as CPU mode in Apache Spark:
 
 ```
  * In particular, if we have expressions e1 and e2 with precision/scale p1/s1 and p2/s2
````
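The precision/scale rule the doc points at is Apache Spark's decimal type-promotion rule. As a rough illustration, the addition case can be sketched in plain Scala; the object name `DecimalAddRule`, its method names, and the formula shown (result scale `max(s1, s2)`, result precision `max(p1 - s1, p2 - s2) + max(s1, s2) + 1`, capped at 38) are an assumption based on Spark's documented behavior, not code from this repository:

```scala
// Hypothetical sketch of Spark's result type for decimal addition.
// For e1 + e2 with precision/scale p1/s1 and p2/s2 (as in the doc's example),
// the result keeps enough integer digits for the wider operand plus one carry
// digit, and the larger of the two scales, capped at Spark's max precision.
object DecimalAddRule {
  val MaxPrecision = 38 // the 128-bit limit mentioned in the doc

  // Returns (precision, scale) of e1 + e2.
  def addResultType(p1: Int, s1: Int, p2: Int, s2: Int): (Int, Int) = {
    val scale = math.max(s1, s2)                    // keep the finer scale
    val integerDigits = math.max(p1 - s1, p2 - s2)  // widest integer part
    val precision = math.min(integerDigits + scale + 1, MaxPrecision)
    (precision, scale)
  }

  def main(args: Array[String]): Unit = {
    // DECIMAL(10, 2) + DECIMAL(12, 4): max(8, 8) + 4 + 1 = 13 digits, scale 4
    println(addResultType(10, 2, 12, 4))
  }
}
```

Note how the cap matters: adding two `DECIMAL(38, 10)` values would need 39 digits, so the precision saturates at 38 rather than growing.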
````diff
@@ -1706,14 +1706,12 @@ object SupportedOpsDocs {
 println()
 println("# General limitations")
 println("## `Decimal`")
-println("The `Decimal` type in Spark supports a precision")
-println("up to 38 digits (128-bits). The RAPIDS Accelerator in most cases stores values up to")
-println("64-bits and will support 128-bit in the future. As such the accelerator currently only")
-println(s"supports a precision up to ${DType.DECIMAL64_MAX_PRECISION} digits. Note that")
-println("decimals are disabled by default in the plugin, because it is supported by a relatively")
-println("small number of operations presently. This can result in a lot of data movement to and")
-println("from the GPU, slowing down processing in some cases.")
-println("Result `Decimal` precision and scale follow the same rule as CPU mode in Apache Spark:")
+println("The `Decimal` type in Spark supports a precision up to 38 digits (128-bits). ")
+println("The RAPIDS Accelerator supports 128-bit starting from version 21.12 and decimals are ")
+println("enabled by default.")
+println("Please check [Decimal Support](compatibility.md#decimal-support) for more details.")
+println()
+println("`Decimal` precision and scale follow the same rule as CPU mode in Apache Spark:")
 println()
 println("```")
 println(" * In particular, if we have expressions e1 and e2 with precision/scale p1/s1 and p2/s2")
````
