NullPointerException during sync: source-s3 CSV -> destination-s3 PARQUET #6871
Comments
Line 139 is in JsonToAvroSchemaConverter.java; the line number is off because of the license update. This means the JSON schema passed into the s3 destination misses a type. Still investigating.
@Phlair, is it possible that the JSON schema generated by the s3 source misses the type? Looks like this line can be the root cause.
@tuliren the
I see. Avro schema requires a definitive type for each field. Currently our JSON to Avro schema converter does not support fields without a type.
@tuliren can we close this issue?
Yes, we can close. Sorry that I missed your comment.
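For reference, a minimal sketch of the failure mode discussed above, assuming a hypothetical schema and field names (this is not the actual Airbyte converter code): when a property in the JSON schema declares no type, looking up the "type" node yields null, and a converter that dereferences that node unconditionally throws a NullPointerException.

```java
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class MissingTypeDemo {
  public static void main(String[] args) throws Exception {
    // Hypothetical JSON schema: the "notes" property declares no "type".
    String jsonSchema = "{"
        + "\"type\": \"object\","
        + "\"properties\": {"
        + "\"id\": {\"type\": \"integer\"},"
        + "\"notes\": {}"
        + "}}";

    JsonNode properties = new ObjectMapper().readTree(jsonSchema).get("properties");

    properties.fields().forEachRemaining(field -> {
      // get("type") returns null for "notes"; a converter that assumes the
      // node is always present would dereference it here and throw an NPE.
      JsonNode type = field.getValue().get("type");
      System.out.println(field.getKey() + " -> "
          + (type == null ? "MISSING TYPE (would cause NPE)" : type.asText()));
    });
  }
}
```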
Environment
11645689431a69c689a15b620e4a2b6bc7b045c3
Current Behavior
I have a small CSV file on S3: 3.7 MB, 4k rows x 150 columns, generated by the Python Faker lib.
The file is attached:
sample_synth_4K_150.csv
I want to save it on S3 as a Parquet file using destination-s3.
I get
java.lang.NullPointerException
for io.airbyte.integrations.destination.s3.avro.JsonToAvroSchemaConverter.getAvroSchema(JsonToAvroSchemaConverter.java:139)
Expected Behavior
Sync should infer the schema correctly and finish successfully with Parquet file output.
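As an illustration of what handling such fields could look like, here is a hedged sketch of one defensive mapping a converter might apply, falling back to a nullable string when no type is declared. The class name, method name, and type mapping are assumptions for illustration and not necessarily how the Airbyte converter resolves this.

```java
import com.fasterxml.jackson.databind.JsonNode;
import org.apache.avro.Schema;
import org.apache.avro.SchemaBuilder;

public class LenientFieldMapper {

  // Map one JSON-schema property definition to an Avro schema.
  // A field without a declared "type" falls back to ["null", "string"]
  // instead of triggering a NullPointerException.
  static Schema toAvroFieldSchema(JsonNode fieldDef) {
    JsonNode type = fieldDef.get("type");
    if (type == null) {
      return SchemaBuilder.unionOf().nullType().and().stringType().endUnion();
    }
    switch (type.asText()) {
      case "integer": return Schema.create(Schema.Type.LONG);
      case "number":  return Schema.create(Schema.Type.DOUBLE);
      case "boolean": return Schema.create(Schema.Type.BOOLEAN);
      default:        return Schema.create(Schema.Type.STRING);
    }
  }
}
```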
Logs
Please see attached
logs-2-0.txt