Is your feature request related to a problem or challenge? Please describe what you are trying to do.
There's an open issue in the datafusion repository about CSV schema inference: apache/datafusion#3174. The current implementation in arrow infers Int64 as the data type for any numeric column whose values have no decimal point and don't match a date format. This causes problems when the CSV is read later, should a value overflow the Int64 data type.
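For illustration, here's a minimal, dependency-free sketch of the failure mode; the value used is u64::MAX, a hypothetical example rather than one taken from the linked issue:

```rust
fn main() {
    // A perfectly valid unsigned integer that overflows Int64.
    let value = "18446744073709551615"; // u64::MAX

    // Inference settles on Int64, so the later read parses with i64 and fails.
    assert!(value.parse::<i64>().is_err());

    // The same text parses fine as u64, so a UInt64 column would have worked.
    assert!(value.parse::<u64>().is_ok());
}
```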
Describe the solution you'd like
Maybe arrow could support the UInt64 and Decimal128 data types as well, should it notice the values inside the CSV are too large for Int64, or even default to String should those be too narrow too, so the CSV can always be read without problems.
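A minimal sketch of what that fallback chain could look like for a single field is below; `infer_integer_type` is a hypothetical helper, not an existing arrow function, and it assumes an arrow version whose decimal variant is `DataType::Decimal128(precision, scale)`:

```rust
use arrow::datatypes::DataType;

/// Hypothetical per-field inference: prefer the narrowest type that holds
/// the value, widening Int64 -> UInt64 -> Decimal128 -> Utf8.
fn infer_integer_type(field: &str) -> DataType {
    // Largest magnitude Decimal128 can represent at precision 38, scale 0.
    let decimal128_max: u128 = 10u128.pow(38) - 1;

    if field.parse::<i64>().is_ok() {
        DataType::Int64
    } else if field.parse::<u64>().is_ok() {
        DataType::UInt64
    } else if matches!(field.parse::<i128>(), Ok(v) if v.unsigned_abs() <= decimal128_max) {
        DataType::Decimal128(38, 0)
    } else {
        // Too large even for Decimal128: keep the raw text so the read
        // cannot fail.
        DataType::Utf8
    }
}

fn main() {
    assert_eq!(infer_integer_type("42"), DataType::Int64);
    assert_eq!(infer_integer_type("18446744073709551615"), DataType::UInt64);
    assert_eq!(
        infer_integer_type("99999999999999999999999999999999999999"),
        DataType::Decimal128(38, 0)
    );
}
```

A real implementation would also have to merge these per-value results across rows: a column containing both a negative value and something above i64::MAX can't be UInt64 and would have to widen to Decimal128 or Utf8.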
Describe alternatives you've considered
Alternatively, I imagine the column's type could be "upgraded" while reading the CSV whenever a parsing error occurs due to overflow. This would require all previously parsed values to be cast to the wider type, a cost that better inference results would hopefully avoid.
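To make that cost concrete, here's a rough sketch of the upgrade step using arrow's cast kernel; the exact signatures may differ slightly between arrow versions:

```rust
use std::sync::Arc;

use arrow::array::{Array, ArrayRef, Int64Array};
use arrow::compute::cast;
use arrow::datatypes::DataType;
use arrow::error::ArrowError;

fn main() -> Result<(), ArrowError> {
    // Values already decoded as Int64 before an overflowing row is hit.
    let parsed: ArrayRef = Arc::new(Int64Array::from(vec![1, 2, 3]));

    // "Upgrading" the column means re-casting everything read so far,
    // which is exactly the work better up-front inference would avoid.
    let upgraded = cast(&parsed, &DataType::UInt64)?;
    assert_eq!(upgraded.data_type(), &DataType::UInt64);
    Ok(())
}
```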
Additional context
I'd be open to implementing this change. My naive approach would be something like this: 4b3104e. If anyone here has suggestions on how to improve it, I would be very happy.
The approach you describe seems sensible to me. I would perhaps caution that the Decimal support within arrow is fairly limited at the moment, but perhaps that is a separate problem to solve 😄