When I create a table using Spark SQL, the column names are all uppercase and the connector runs normally. However, when I create the table using Trino, the column names are all lowercase, the connector throws an exception, and all written data is null.
Table demo, created using Trino:
CREATE TABLE demo (
VBELN varchar
) WITH (
format = 'PARQUET',
location = 's3a://uat-warehouse/demo.db'
);
Table ods_zpp004, created using Spark SQL; this one works normally:
CREATE TABLE IF NOT EXISTS `ODS_ZPP004` (
`AUFNR` STRING COMMENT "Order"
)
USING iceberg
TBLPROPERTIES ('format-version'='2')
COMMENT 'Issue order info' ;
"iceberg.tables.default-id-columns": "VBELN"
I opened a PR to add case-insensitive lookups of columns during record conversion, to handle the case where the column name casing differs between the Kafka record and the Iceberg schema.
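The idea behind the fix can be sketched as follows: a minimal, self-contained example of case-insensitive field lookup during record conversion. The Field record and findField method here are illustrative stand-ins, not the connector's actual API; the real implementation works against Iceberg's schema types.

```java
import java.util.List;
import java.util.Optional;

public class CaseInsensitiveLookup {

    // Illustrative stand-in for a schema field (e.g. Iceberg's Types.NestedField).
    record Field(String name, String type) {}

    // Try an exact match first (cheap, preserves existing behavior),
    // then fall back to a case-insensitive scan over the schema fields.
    static Optional<Field> findField(List<Field> schemaFields, String recordName) {
        for (Field f : schemaFields) {
            if (f.name().equals(recordName)) {
                return Optional.of(f);
            }
        }
        for (Field f : schemaFields) {
            if (f.name().equalsIgnoreCase(recordName)) {
                return Optional.of(f);
            }
        }
        return Optional.empty();
    }

    public static void main(String[] args) {
        // Trino lowercases identifiers, so the table schema holds "vbeln" ...
        List<Field> schema = List.of(new Field("vbeln", "string"));
        // ... while the Kafka record carries the uppercase name from the source system.
        System.out.println(findField(schema, "VBELN").isPresent());
    }
}
```

With an exact-match-only lookup, "VBELN" would not resolve against the lowercase "vbeln" field and the converted value would be dropped (hence the null columns); the case-insensitive fallback resolves it.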