
Bug in Spark Decimal type to Exasol Decimal type conversion #44

Closed
morazow opened this issue Jun 23, 2019 · 5 comments
Labels
bug Unwanted / harmful behavior

Comments

@morazow
Contributor

morazow commented Jun 23, 2019

Situation

Currently, any Spark decimal type, regardless of its precision and scale, is mapped to the Exasol default decimal type, which is DECIMAL(18, 0).

Solution

The Spark Decimal type should be mapped to an Exasol decimal type with the exact precision and scale values.

Test code

val df = spark.sql("SELECT 1 AS A_INT, CAST(1.11 as DECIMAL(5,2)) AS A_DECIMALE")
val exasolWriteAndReadDF = exasolWriteAndReadFn(df)
exasolWriteAndReadDF.schema.map(_.dataType) shouldBe Array(LongType, DecimalType(5, 2)) 
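For context (this check is not part of the original issue), the precision and scale are already available on the Spark side before the round trip, so the connector has the information it needs:

import org.apache.spark.sql.types.DecimalType

// Illustrative only: inspect the source DataFrame's decimal column
df.schema("A_DECIMALE").dataType match {
  case dt: DecimalType => println(s"precision=${dt.precision}, scale=${dt.scale}") // expected: 5 and 2
  case other           => println(s"unexpected type: $other")
}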
@morazow morazow changed the title Fix Spark Decimal type to Exasol Decimal type conversion Bug in Spark Decimal type to Exasol Decimal type conversion Jun 24, 2019
@morazow morazow added the bug Unwanted / harmful behavior label Jun 24, 2019
@jpizagno
Contributor

@morazow Can I take this ticket, or is somebody already working on it? I think I am familiar with this area of the code.

@morazow
Contributor Author

morazow commented Jun 24, 2019

Hey @jpizagno,

Good to hear from you! Yes sure, please feel free to work on this task. Thanks a lot!

Just to make sure we are on the same page: the Exasol decimal type mapping should be changed here. The Spark decimal type dt should already carry the precision and scale values, e.g. dt.precision (or a similar method).
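A minimal sketch of the change being described (the function name is hypothetical; the actual mapping lives in the connector code linked above):

import org.apache.spark.sql.types.{DataType, DecimalType}

// Instead of falling back to Exasol's default DECIMAL(18,0), carry over the
// precision and scale from the Spark DecimalType.
def exasolTypeFor(dataType: DataType): String = dataType match {
  case dt: DecimalType => s"DECIMAL(${dt.precision},${dt.scale})"
  case other           => ??? // remaining type mappings unchanged
}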

@jpizagno
Contributor

jpizagno commented Jul 3, 2019

@morazow
Contributor Author

morazow commented Jul 3, 2019

Hey @jpizagno,

This looks great!

Please send a pull request. I can also suggest some changes before the PR. Could you please use the min on the precision, similar to how it is done inside the boundedDecimal function?

val precision = math.min(decimalType.precision, MAX_EXASOL_PRECISION)

and similarly for the scale value, on these lines.

Thanks for working on this! Please send a pull request once you are finished.
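For illustration, a bounded version along those lines might look like the following sketch (the constant values are assumptions based on Exasol's documented DECIMAL limits, not taken from the connector):

import org.apache.spark.sql.types.DecimalType

val MAX_EXASOL_PRECISION = 36 // assumed Exasol limit; use the connector's constant
val MAX_EXASOL_SCALE     = 36 // assumed; Exasol requires scale <= precision

def boundedExasolDecimal(decimalType: DecimalType): String = {
  val precision = math.min(decimalType.precision, MAX_EXASOL_PRECISION)
  val scale     = math.min(decimalType.scale, MAX_EXASOL_SCALE)
  s"DECIMAL($precision,$scale)"
}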

morazow pushed a commit that referenced this issue Jul 10, 2019
* Fixes the bug with Decimal type when saving the Spark dataframe #44 
* Updated `Types.boundedDecimal` that considers both Exasol and Spark maximum precision and scale values
@morazow
Contributor Author

morazow commented Jul 10, 2019

Fixed in #47.

@morazow morazow closed this as completed Jul 10, 2019