Flaky Parquet tests by ParquetMemoryManagerRuntimeException in Hive connector #13633

Closed
ebyhr opened this issue Aug 12, 2022 · 19 comments · Fixed by #15742
Labels: bug (Something isn't working)

Comments


ebyhr commented Aug 12, 2022

Error:  Failures: 
Error:    TestHiveConnectorTest>BaseHiveConnectorTest.testScaleWriters:3853->BaseHiveConnectorTest.testWithAllStorageFormats:8360->BaseHiveConnectorTest.testWithStorageFormat:8373 Failure for format PARQUET with properties {hive={experimental_parquet_optimized_writer_enabled=false}}
Error:    TestHiveConnectorTest>BaseHiveConnectorTest.testSelectWithNoColumns:7621->BaseHiveConnectorTest.testWithAllStorageFormats:8360->BaseHiveConnectorTest.testWithStorageFormat:8373 Failure for format PARQUET with properties {hive={experimental_parquet_optimized_writer_enabled=false}}
Error:    TestHiveConnectorTest>BaseHiveConnectorTest.testSubfieldReordering:5210->AbstractTestQueryFramework.assertUpdate:318->AbstractTestQueryFramework.assertUpdate:323 » QueryFailed
Error:    TestHiveConnectorTest>BaseHiveConnectorTest.testTimestampPrecisionCtas:7992->BaseHiveConnectorTest.testWithAllStorageFormats:8360->BaseHiveConnectorTest.testWithStorageFormat:8373 Failure for format PARQUET with properties {hive={experimental_parquet_optimized_writer_enabled=false}}
Error:    TestHiveConnectorTest>BaseHiveConnectorTest.testTimestampPrecisionInsert:7965->BaseHiveConnectorTest.testWithAllStorageFormats:8360->BaseHiveConnectorTest.testWithStorageFormat:8373 Failure for format PARQUET with properties {hive={experimental_parquet_optimized_writer_enabled=false}}
Error:    TestHiveConnectorTest>BaseHiveConnectorTest.testUseColumnAddDrop:8195->AbstractTestQueryFramework.assertUpdate:323 » QueryFailed
Error:    TestHiveConnectorTest>BaseHiveConnectorTest.testUseColumnNames:8140->AbstractTestQueryFramework.assertUpdate:323 » QueryFailed
Error:    TestHiveConnectorTest>BaseHiveConnectorTest.testUseColumnNames:8140->AbstractTestQueryFramework.assertUpdate:323 » QueryFailed
Error:    TestParquetPageSkipping.testPageSkippingWithNonSequentialOffsets:102->AbstractTestQueryFramework.assertUpdate:318->AbstractTestQueryFramework.assertUpdate:323 » QueryFailed
Error:    TestParquetPageSkipping.testPageSkipping:139->buildSortedTables:74->AbstractTestQueryFramework.assertUpdate:323 » QueryFailed
Error:  io.trino.plugin.hive.TestHiveConnectorTest.testUseColumnNames[PARQUET, false](4)  Time elapsed: 0.098 s  <<< FAILURE!
io.trino.testing.QueryFailedException: New Memory allocation 636336 bytes is smaller than the minimum allocation size of 1048576 bytes.
	at io.trino.testing.AbstractTestingTrinoClient.execute(AbstractTestingTrinoClient.java:123)
	at io.trino.testing.DistributedQueryRunner.execute(DistributedQueryRunner.java:479)
	at io.trino.testing.QueryAssertions.assertUpdate(QueryAssertions.java:71)
	at io.trino.testing.AbstractTestQueryFramework.assertUpdate(AbstractTestQueryFramework.java:323)
	at io.trino.plugin.hive.BaseHiveConnectorTest.testUseColumnNames(BaseHiveConnectorTest.java:8140)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:568)
	at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:104)
	at org.testng.internal.Invoker.invokeMethod(Invoker.java:645)
	at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:851)
	at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1177)
	at org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:129)
	at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:112)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
	at java.base/java.lang.Thread.run(Thread.java:833)
	Suppressed: java.lang.Exception: SQL: INSERT INTO test_renames_parquet_false_1036p9fb8h VALUES(111, 'Katy', 57, 'CA')
		at io.trino.testing.DistributedQueryRunner.execute(DistributedQueryRunner.java:482)
		... 16 more
Caused by: org.apache.parquet.hadoop.ParquetMemoryManagerRuntimeException: New Memory allocation 636336 bytes is smaller than the minimum allocation size of 1048576 bytes.
	at org.apache.parquet.hadoop.MemoryManager.updateAllocation(MemoryManager.java:132)
	at org.apache.parquet.hadoop.MemoryManager.addWriter(MemoryManager.java:86)
	at org.apache.parquet.hadoop.ParquetRecordWriter.<init>(ParquetRecordWriter.java:155)
	at org.apache.parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:501)
	at org.apache.parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:430)
	at org.apache.parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:425)
	at org.apache.hadoop.hive.ql.io.parquet.write.ParquetRecordWriterWrapper.<init>(ParquetRecordWriterWrapper.java:70)
	at org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat.getParquerRecordWriterWrapper(MapredParquetOutputFormat.java:137)
	at org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat.getHiveRecordWriter(MapredParquetOutputFormat.java:126)
	at io.trino.plugin.hive.parquet.ParquetRecordWriter.create(ParquetRecordWriter.java:66)
	at io.trino.plugin.hive.util.HiveWriteUtils.createRecordWriter(HiveWriteUtils.java:185)
	at io.trino.plugin.hive.RecordFileWriter.<init>(RecordFileWriter.java:113)
	at io.trino.plugin.hive.HiveWriterFactory.createWriter(HiveWriterFactory.java:538)
	at io.trino.plugin.hive.HivePageSink.getWriterIndexes(HivePageSink.java:402)
	at io.trino.plugin.hive.HivePageSink.writePage(HivePageSink.java:301)
	at io.trino.plugin.hive.HivePageSink.doAppend(HivePageSink.java:296)
	at io.trino.plugin.hive.HivePageSink.lambda$appendPage$2(HivePageSink.java:282)
	at io.trino.plugin.hive.authentication.HdfsAuthentication.lambda$doAs$0(HdfsAuthentication.java:26)
	at io.trino.plugin.hive.authentication.NoHdfsAuthentication.doAs(NoHdfsAuthentication.java:25)
	at io.trino.plugin.hive.authentication.HdfsAuthentication.doAs(HdfsAuthentication.java:25)
	at io.trino.plugin.hive.HdfsEnvironment.doAs(HdfsEnvironment.java:102)
	at io.trino.plugin.hive.HivePageSink.appendPage(HivePageSink.java:282)
	at io.trino.plugin.base.classloader.ClassLoaderSafeConnectorPageSink.appendPage(ClassLoaderSafeConnectorPageSink.java:69)
	at io.trino.operator.TableWriterOperator.addInput(TableWriterOperator.java:257)
	at io.trino.operator.Driver.processInternal(Driver.java:415)
	at io.trino.operator.Driver.lambda$process$10(Driver.java:313)
	at io.trino.operator.Driver.tryWithLock(Driver.java:698)
	at io.trino.operator.Driver.process(Driver.java:305)
	at io.trino.operator.Driver.processForDuration(Driver.java:276)
	at io.trino.execution.SqlTaskExecution$DriverSplitRunner.processFor(SqlTaskExecution.java:740)
	at io.trino.execution.executor.PrioritizedSplitRunner.process(PrioritizedSplitRunner.java:164)
	at io.trino.execution.executor.TaskExecutor$TaskRunner.run(TaskExecutor.java:490)
	at io.trino.$gen.Trino_testversion____20220812_060842_14042.run(Unknown Source)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
	at java.base/java.lang.Thread.run(Thread.java:833)

https://github.com/trinodb/trino/runs/7800774821
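For context: this exception is thrown by parquet-hadoop's global MemoryManager, which caps the combined row-group buffers of all open Parquet writers in the JVM at one fixed pool (a fraction of the heap, `parquet.memory.pool.ratio`) and scales every writer's allocation down proportionally when the pool is oversubscribed. Once a scaled allocation drops below the 1 MiB minimum (`parquet.memory.min.chunk.size`), the writer cannot be created. The sketch below paraphrases that arithmetic; the class and method names are illustrative, not the actual library source:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal sketch of parquet-hadoop's MemoryManager allocation logic
// (a paraphrase for illustration, not the real org.apache.parquet.hadoop code).
public class MemoryManagerSketch
{
    private final long totalMemoryPool;     // e.g. max heap * parquet.memory.pool.ratio
    private final long minMemoryAllocation; // parquet.memory.min.chunk.size, 1 MiB by default
    private final Map<String, Long> requestedSizes = new LinkedHashMap<>();

    public MemoryManagerSketch(long totalMemoryPool, long minMemoryAllocation)
    {
        this.totalMemoryPool = totalMemoryPool;
        this.minMemoryAllocation = minMemoryAllocation;
    }

    public void addWriter(String writerId, long requestedRowGroupSize)
    {
        requestedSizes.put(writerId, requestedRowGroupSize);
        updateAllocations();
    }

    private void updateAllocations()
    {
        long totalRequested = requestedSizes.values().stream().mapToLong(Long::longValue).sum();
        // If the writers collectively request more than the pool, shrink each one proportionally.
        double scale = totalRequested <= totalMemoryPool ? 1.0 : (double) totalMemoryPool / totalRequested;
        for (long requested : requestedSizes.values()) {
            long newSize = (long) Math.floor(requested * scale);
            if (scale < 1.0 && newSize < minMemoryAllocation) {
                // The failure mode seen in these tests: enough concurrent writers
                // (or a small test heap) push the per-writer share under 1 MiB.
                throw new RuntimeException("New Memory allocation " + newSize
                        + " bytes is smaller than the minimum allocation size of "
                        + minMemoryAllocation + " bytes.");
            }
        }
    }

    public static void main(String[] args)
    {
        // 64 MiB pool, 1 MiB minimum, 128 MiB requested per writer: the 65th
        // writer drives each writer's share below 1 MiB and triggers the error.
        MemoryManagerSketch manager = new MemoryManagerSketch(64L << 20, 1L << 20);
        for (int i = 0; i < 100; i++) {
            manager.addWriter("writer-" + i, 128L << 20);
        }
    }
}
```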


findepi commented Aug 12, 2022

I tentatively added release-blocker, as it looks like a regression, but please @trinodb/maintainers take a look. Maybe I am over-cautious here.


findepi commented Aug 24, 2022

electrum commented:

This error is strange. It’s inside the Hadoop Parquet code. Did we upgrade that recently? I can’t see why this would suddenly start failing.

electrum commented:

The last upgrade was several releases ago: 1aa7c98


martint commented Aug 25, 2022


ebyhr commented Aug 25, 2022

The first failure was on Aug 3rd. Upgrading to Java 17 might be related: 4473226


martint commented Aug 25, 2022

I don't see how updating the language level should have any effect. The tests had been running with Java 17 before that change went in.


phd3 commented Sep 7, 2022


findepi commented Sep 20, 2022


findepi commented Oct 10, 2022

https://github.com/trinodb/trino/actions/runs/3218319569/jobs/5262761921

Error:  io.trino.plugin.hive.TestHiveConnectorTest.testScaleWriters  Time elapsed: 1.294 s  <<< FAILURE!
java.lang.AssertionError: Failure for format PARQUET with properties {hive={parquet_optimized_writer_enabled=false}}
	at org.testng.Assert.fail(Assert.java:83)
	at io.trino.plugin.hive.BaseHiveConnectorTest.testWithStorageFormat(BaseHiveConnectorTest.java:8410)
	at io.trino.plugin.hive.BaseHiveConnectorTest.testWithAllStorageFormats(BaseHiveConnectorTest.java:8397)
	at io.trino.plugin.hive.BaseHiveConnectorTest.testScaleWriters(BaseHiveConnectorTest.java:3910)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:568)
	at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:104)
	at org.testng.internal.Invoker.invokeMethod(Invoker.java:645)
	at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:851)
	at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1177)
	at org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:129)
	at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:112)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
	at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: io.trino.testing.QueryFailedException: New Memory allocation 119532 bytes is smaller than the minimum allocation size of 1048576 bytes.
	at io.trino.testing.AbstractTestingTrinoClient.execute(AbstractTestingTrinoClient.java:123)
	at io.trino.testing.DistributedQueryRunner.execute(DistributedQueryRunner.java:480)
	at io.trino.testing.QueryAssertions.assertUpdate(QueryAssertions.java:71)
	at io.trino.testing.AbstractTestQueryFramework.assertUpdate(AbstractTestQueryFramework.java:373)
	at io.trino.plugin.hive.BaseHiveConnectorTest.testMultipleWriters(BaseHiveConnectorTest.java:3945)
	at io.trino.plugin.hive.BaseHiveConnectorTest.testWithStorageFormat(BaseHiveConnectorTest.java:8407)
	... 15 more
	Suppressed: java.lang.Exception: SQL: CREATE TABLE scale_writers_large WITH (format = 'PARQUET') AS SELECT * FROM tpch.sf1.orders
		at io.trino.testing.DistributedQueryRunner.execute(DistributedQueryRunner.java:483)
		... 19 more
Caused by: org.apache.parquet.hadoop.ParquetMemoryManagerRuntimeException: New Memory allocation 119532 bytes is smaller than the minimum allocation size of 1048576 bytes.
	at org.apache.parquet.hadoop.MemoryManager.updateAllocation(MemoryManager.java:132)
	at org.apache.parquet.hadoop.MemoryManager.addWriter(MemoryManager.java:86)
	at org.apache.parquet.hadoop.ParquetRecordWriter.<init>(ParquetRecordWriter.java:155)
	at org.apache.parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:501)
	at org.apache.parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:430)
	at org.apache.parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:425)
	at org.apache.hadoop.hive.ql.io.parquet.write.ParquetRecordWriterWrapper.<init>(ParquetRecordWriterWrapper.java:70)
	at org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat.getParquerRecordWriterWrapper(MapredParquetOutputFormat.java:137)
	at org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat.getHiveRecordWriter(MapredParquetOutputFormat.java:126)
	at io.trino.plugin.hive.parquet.ParquetRecordWriter.create(ParquetRecordWriter.java:66)
	at io.trino.plugin.hive.util.HiveWriteUtils.createRecordWriter(HiveWriteUtils.java:185)
	at io.trino.plugin.hive.RecordFileWriter.<init>(RecordFileWriter.java:114)
	at io.trino.plugin.hive.HiveWriterFactory.createWriter(HiveWriterFactory.java:536)
	at io.trino.plugin.hive.HivePageSink.getWriterIndexes(HivePageSink.java:403)
	at io.trino.plugin.hive.HivePageSink.writePage(HivePageSink.java:302)
	at io.trino.plugin.hive.HivePageSink.doAppend(HivePageSink.java:297)
	at io.trino.plugin.hive.HivePageSink.lambda$appendPage$2(HivePageSink.java:283)
	at io.trino.hdfs.authentication.HdfsAuthentication.lambda$doAs$0(HdfsAuthentication.java:26)
	at io.trino.hdfs.authentication.NoHdfsAuthentication.doAs(NoHdfsAuthentication.java:25)
	at io.trino.hdfs.authentication.HdfsAuthentication.doAs(HdfsAuthentication.java:25)
	at io.trino.hdfs.HdfsEnvironment.doAs(HdfsEnvironment.java:98)
	at io.trino.plugin.hive.HivePageSink.appendPage(HivePageSink.java:283)
	at io.trino.plugin.base.classloader.ClassLoaderSafeConnectorPageSink.appendPage(ClassLoaderSafeConnectorPageSink.java:69)
	at io.trino.operator.TableWriterOperator.addInput(TableWriterOperator.java:257)
	at io.trino.operator.Driver.processInternal(Driver.java:416)
	at io.trino.operator.Driver.lambda$process$10(Driver.java:314)
	at io.trino.operator.Driver.tryWithLock(Driver.java:706)
	at io.trino.operator.Driver.process(Driver.java:306)
	at io.trino.operator.Driver.processForDuration(Driver.java:277)
	at io.trino.execution.SqlTaskExecution$DriverSplitRunner.processFor(SqlTaskExecution.java:736)
	at io.trino.execution.executor.PrioritizedSplitRunner.process(PrioritizedSplitRunner.java:164)
	at io.trino.execution.executor.TaskExecutor$TaskRunner.run(TaskExecutor.java:515)
	at io.trino.$gen.Trino_testversion____20221010_115841_12924.run(Unknown Source)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
	at java.base/java.lang.Thread.run(Thread.java:833)

Error:  io.trino.plugin.hive.TestParquetPageSkipping.testPageSkipping[custkey, smallint, [[Ljava.lang.Object;@32c3bfd5](11)  Time elapsed: 0.103 s  <<< FAILURE!
io.trino.testing.QueryFailedException: New Memory allocation 19922 bytes is smaller than the minimum allocation size of 1048576 bytes.
	at io.trino.testing.AbstractTestingTrinoClient.execute(AbstractTestingTrinoClient.java:123)
	at io.trino.testing.DistributedQueryRunner.execute(DistributedQueryRunner.java:480)
	at io.trino.testing.QueryAssertions.assertUpdate(QueryAssertions.java:71)
	at io.trino.testing.AbstractTestQueryFramework.assertUpdate(AbstractTestQueryFramework.java:373)
	at io.trino.plugin.hive.TestParquetPageSkipping.buildSortedTables(TestParquetPageSkipping.java:85)
	at io.trino.plugin.hive.TestParquetPageSkipping.testPageSkipping(TestParquetPageSkipping.java:150)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:568)
	at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:104)
	at org.testng.internal.Invoker.invokeMethod(Invoker.java:645)
	at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:851)
	at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1177)
	at org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:129)
	at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:112)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
	at java.base/java.lang.Thread.run(Thread.java:833)
	Suppressed: java.lang.Exception: SQL: INSERT INTO test_page_skipping_p67gnszlyf SELECT *, ARRAY[rand(), rand(), rand()] FROM tpch.tiny.orders
		at io.trino.testing.DistributedQueryRunner.execute(DistributedQueryRunner.java:483)
		... 17 more
Caused by: org.apache.parquet.hadoop.ParquetMemoryManagerRuntimeException: New Memory allocation 19922 bytes is smaller than the minimum allocation size of 1048576 bytes.
	at org.apache.parquet.hadoop.MemoryManager.updateAllocation(MemoryManager.java:132)
	at org.apache.parquet.hadoop.MemoryManager.addWriter(MemoryManager.java:86)
	at org.apache.parquet.hadoop.ParquetRecordWriter.<init>(ParquetRecordWriter.java:155)
	at org.apache.parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:501)
	at org.apache.parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:430)
	at org.apache.parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:425)
	at org.apache.hadoop.hive.ql.io.parquet.write.ParquetRecordWriterWrapper.<init>(ParquetRecordWriterWrapper.java:70)
	at org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat.getParquerRecordWriterWrapper(MapredParquetOutputFormat.java:137)
	at org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat.getHiveRecordWriter(MapredParquetOutputFormat.java:126)
	at io.trino.plugin.hive.parquet.ParquetRecordWriter.create(ParquetRecordWriter.java:66)
	at io.trino.plugin.hive.util.HiveWriteUtils.createRecordWriter(HiveWriteUtils.java:185)
	at io.trino.plugin.hive.RecordFileWriter.<init>(RecordFileWriter.java:114)
	at io.trino.plugin.hive.HiveWriterFactory.createWriter(HiveWriterFactory.java:536)
	at io.trino.plugin.hive.HivePageSink.getWriterIndexes(HivePageSink.java:403)
	at io.trino.plugin.hive.HivePageSink.writePage(HivePageSink.java:302)
	at io.trino.plugin.hive.HivePageSink.doAppend(HivePageSink.java:297)
	at io.trino.plugin.hive.HivePageSink.lambda$appendPage$2(HivePageSink.java:283)
	at io.trino.hdfs.authentication.HdfsAuthentication.lambda$doAs$0(HdfsAuthentication.java:26)
	at io.trino.hdfs.authentication.NoHdfsAuthentication.doAs(NoHdfsAuthentication.java:25)
	at io.trino.hdfs.authentication.HdfsAuthentication.doAs(HdfsAuthentication.java:25)
	at io.trino.hdfs.HdfsEnvironment.doAs(HdfsEnvironment.java:98)
	at io.trino.plugin.hive.HivePageSink.appendPage(HivePageSink.java:283)
	at io.trino.plugin.base.classloader.ClassLoaderSafeConnectorPageSink.appendPage(ClassLoaderSafeConnectorPageSink.java:69)
	at io.trino.operator.TableWriterOperator.addInput(TableWriterOperator.java:257)
	at io.trino.operator.Driver.processInternal(Driver.java:416)
	at io.trino.operator.Driver.lambda$process$10(Driver.java:314)
	at io.trino.operator.Driver.tryWithLock(Driver.java:706)
	at io.trino.operator.Driver.process(Driver.java:306)
	at io.trino.operator.Driver.processForDuration(Driver.java:277)
	at io.trino.execution.SqlTaskExecution$DriverSplitRunner.processFor(SqlTaskExecution.java:736)
	at io.trino.execution.executor.PrioritizedSplitRunner.process(PrioritizedSplitRunner.java:164)
	at io.trino.execution.executor.TaskExecutor$TaskRunner.run(TaskExecutor.java:515)
	at io.trino.$gen.Trino_testversion____20221010_115838_12854.run(Unknown Source)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
	at java.base/java.lang.Thread.run(Thread.java:833)

(and more)



findepi commented Dec 5, 2022



phd3 commented Dec 27, 2022


findepi commented Jan 13, 2023

https://github.com/trinodb/trino/actions/runs/3911618479/jobs/6686351347

2023-01-13T16:02:13.7341098Z [ERROR]   TestOptimizedParquetReader>AbstractTestParquetReader.testParquetShortDecimalWriteToTrinoTinyintBlock:1021 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7342346Z [ERROR]   TestOptimizedParquetReader>AbstractTestParquetReader.testSchemaWithOptionalOptionalRequiredFields:1251 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7343584Z [ERROR]   TestOptimizedParquetReader>AbstractTestParquetReader.testSchemaWithOptionalRequiredOptionalFields:1277 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7344801Z [ERROR]   TestOptimizedParquetReader>AbstractTestParquetReader.testSchemaWithRepeatedOptionalRequiredFields:1219 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7346003Z [ERROR]   TestOptimizedParquetReader>AbstractTestParquetReader.testSchemaWithRequiredOptionalOptionalFields:1329 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7347234Z [ERROR]   TestOptimizedParquetReader>AbstractTestParquetReader.testSchemaWithRequiredOptionalRequired2Fields:1426 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7348456Z [ERROR]   TestOptimizedParquetReader>AbstractTestParquetReader.testSchemaWithRequiredOptionalRequiredFields:1355 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7349800Z [ERROR]   TestOptimizedParquetReader>AbstractTestParquetReader.testSchemaWithRequiredRequiredOptionalFields:1303 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7350922Z [ERROR]   TestOptimizedParquetReader>AbstractTestParquetReader.testSchemaWithRequiredStruct:1380 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7351987Z [ERROR]   TestOptimizedParquetReader>AbstractTestParquetReader.testSingleLevelArrayOfMapOfArray:475 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7353209Z [ERROR]   TestOptimizedParquetReader>AbstractTestParquetReader.testSingleLevelArrayOfMapOfStruct:407 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7354402Z [ERROR]   TestOptimizedParquetReader>AbstractTestParquetReader.testSingleLevelArrayOfStructOfSingleElement:422 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7355662Z [ERROR]   TestOptimizedParquetReader>AbstractTestParquetReader.testSingleLevelArrayOfStructOfStructOfSingleElement:442 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7356922Z [ERROR]   TestOptimizedParquetReader>AbstractTestParquetReader.testSingleLevelSchemaArrayOfArrayOfStructOfArray:291 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7358171Z [ERROR]   TestOptimizedParquetReader>AbstractTestParquetReader.testSingleLevelSchemaArrayOfMaps:377 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7359307Z [ERROR]   TestOptimizedParquetReader>AbstractTestParquetReader.testSingleLevelSchemaArrayOfStructOfArray:326 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7360445Z [ERROR]   TestOptimizedParquetReader>AbstractTestParquetReader.testSingleLevelSchemaArrayOfStructs:259 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7361542Z [ERROR]   TestOptimizedParquetReader>AbstractTestParquetReader.testSingleLevelSchemaNestedArrays:199 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7362599Z [ERROR]   TestOptimizedParquetReader>AbstractTestParquetReader.testStringDictionarySequence:1767 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7363617Z [ERROR]   TestOptimizedParquetReader>AbstractTestParquetReader.testStringDirectSequence:1756 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7364640Z [ERROR]   TestOptimizedParquetReader>AbstractTestParquetReader.testStringStrideDictionary:1778 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7365602Z [ERROR]   TestOptimizedParquetReader>AbstractTestParquetReader.testStringUnicode:1745 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7366489Z [ERROR]   TestOptimizedParquetReader>AbstractTestParquetReader.testStruct:556 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7367431Z [ERROR]   TestOptimizedParquetReader>AbstractTestParquetReader.testStructMaxReadBytes:1871 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7368451Z [ERROR]   TestOptimizedParquetReader>AbstractTestParquetReader.testStructOfArrayAndPrimitive:726 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7369420Z [ERROR]   TestOptimizedParquetReader>AbstractTestParquetReader.testStructOfMaps:671 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7370527Z [ERROR]   TestOptimizedParquetReader>AbstractTestParquetReader.testStructOfNullableArrayBetweenNonNullFields:709 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7371744Z [ERROR]   TestOptimizedParquetReader>AbstractTestParquetReader.testStructOfNullableMapBetweenNonNullFields:690 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7372852Z [ERROR]   TestOptimizedParquetReader>AbstractTestParquetReader.testStructOfPrimitiveAndArray:753 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7373975Z [ERROR]   TestOptimizedParquetReader>AbstractTestParquetReader.testStructOfPrimitiveAndSingleLevelArray:767 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7375134Z [ERROR]   TestOptimizedParquetReader>AbstractTestParquetReader.testStructOfSingleLevelArrayAndPrimitive:739 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7376173Z [ERROR]   TestOptimizedParquetReader>AbstractTestParquetReader.testStructOfTwoArrays:781 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7377173Z [ERROR]   TestOptimizedParquetReader>AbstractTestParquetReader.testStructOfTwoNestedArrays:794 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7378317Z [ERROR]   TestOptimizedParquetReader>AbstractTestParquetReader.testStructOfTwoNestedSingleLevelSchemaArrays:815 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7379334Z [ERROR]   TestOptimizedParquetReader>AbstractTestParquetReader.testTimestamp:1162 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7380228Z [ERROR]   TestOptimizedParquetReader>AbstractTestParquetReader.testTimestamp:1162 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7381208Z [ERROR]   TestOptimizedParquetReader>AbstractTestParquetReader.testTimestamp:1162 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7382365Z [ERROR]   TestParquetDecimalScaling.testParquetLongFixedLenByteArrayWithTrinoShortDecimal:349->createParquetFile:477 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7383818Z [ERROR]   TestParquetDecimalScaling.testParquetLongFixedLenByteArrayWithTrinoShortDecimal:349->createParquetFile:477 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7385094Z [ERROR]   TestParquetDecimalScaling.testParquetLongFixedLenByteArrayWithTrinoShortDecimal:349->createParquetFile:477 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7386461Z [ERROR]   TestParquetDecimalScaling.testParquetLongFixedLenByteArrayWithTrinoShortDecimal:349->createParquetFile:477 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7387739Z [ERROR]   TestParquetDecimalScaling.testParquetLongFixedLenByteArrayWithTrinoShortDecimal:349->createParquetFile:477 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7389198Z [ERROR]   TestParquetDecimalScaling.testParquetLongFixedLenByteArrayWithTrinoShortDecimal:349->createParquetFile:477 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7390479Z [ERROR]   TestParquetDecimalScaling.testParquetLongFixedLenByteArrayWithTrinoShortDecimal:349->createParquetFile:477 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7391750Z [ERROR]   TestParquetDecimalScaling.testParquetLongFixedLenByteArrayWithTrinoShortDecimal:349->createParquetFile:477 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7393013Z [ERROR]   TestParquetDecimalScaling.testParquetLongFixedLenByteArrayWithTrinoShortDecimal:349->createParquetFile:477 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7394292Z [ERROR]   TestParquetDecimalScaling.testParquetLongFixedLenByteArrayWithTrinoShortDecimal:349->createParquetFile:477 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7395571Z [ERROR]   TestParquetDecimalScaling.testParquetLongFixedLenByteArrayWithTrinoShortDecimal:349->createParquetFile:477 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7396830Z [ERROR]   TestParquetDecimalScaling.testParquetLongFixedLenByteArrayWithTrinoShortDecimal:349->createParquetFile:477 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7398096Z [ERROR]   TestParquetDecimalScaling.testParquetLongFixedLenByteArrayWithTrinoShortDecimal:349->createParquetFile:477 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7399365Z [ERROR]   TestParquetDecimalScaling.testParquetLongFixedLenByteArrayWithTrinoShortDecimal:349->createParquetFile:477 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7400639Z [ERROR]   TestParquetDecimalScaling.testParquetLongFixedLenByteArrayWithTrinoShortDecimal:349->createParquetFile:477 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7401899Z [ERROR]   TestParquetDecimalScaling.testParquetLongFixedLenByteArrayWithTrinoShortDecimal:349->createParquetFile:477 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7403159Z [ERROR]   TestParquetDecimalScaling.testReadingMatchingPrecision:116->writeParquetDecimalsRecord:515->createParquetFile:477 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7404349Z [ERROR]   TestParquetDecimalScaling.testReadingMatchingPrecision:116->writeParquetDecimalsRecord:515->createParquetFile:477 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7405530Z [ERROR]   TestParquetDecimalScaling.testReadingMatchingPrecision:116->writeParquetDecimalsRecord:515->createParquetFile:477 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7406718Z [ERROR]   TestParquetDecimalScaling.testReadingMatchingPrecision:116->writeParquetDecimalsRecord:515->createParquetFile:477 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7407899Z [ERROR]   TestParquetDecimalScaling.testReadingMatchingPrecision:116->writeParquetDecimalsRecord:515->createParquetFile:477 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7409061Z [ERROR]   TestParquetDecimalScaling.testReadingMatchingPrecision:116->writeParquetDecimalsRecord:515->createParquetFile:477 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7410361Z [ERROR]   TestParquetDecimalScaling.testReadingMatchingPrecision:116->writeParquetDecimalsRecord:515->createParquetFile:477 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7411534Z [ERROR]   TestParquetDecimalScaling.testReadingMatchingPrecision:116->writeParquetDecimalsRecord:515->createParquetFile:477 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7412709Z [ERROR]   TestParquetDecimalScaling.testReadingMatchingPrecision:116->writeParquetDecimalsRecord:515->createParquetFile:477 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7413885Z [ERROR]   TestParquetDecimalScaling.testReadingMatchingPrecision:116->writeParquetDecimalsRecord:515->createParquetFile:477 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7415151Z [ERROR]   TestParquetDecimalScaling.testReadingMatchingPrecision:116->writeParquetDecimalsRecord:515->createParquetFile:477 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7416327Z [ERROR]   TestParquetDecimalScaling.testReadingMatchingPrecision:116->writeParquetDecimalsRecord:515->createParquetFile:477 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7417498Z [ERROR]   TestParquetDecimalScaling.testReadingMatchingPrecision:116->writeParquetDecimalsRecord:515->createParquetFile:477 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7418668Z [ERROR]   TestParquetDecimalScaling.testReadingMatchingPrecision:116->writeParquetDecimalsRecord:515->createParquetFile:477 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7419834Z [ERROR]   TestParquetDecimalScaling.testReadingMatchingPrecision:116->writeParquetDecimalsRecord:515->createParquetFile:477 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7421003Z [ERROR]   TestParquetDecimalScaling.testReadingMatchingPrecision:116->writeParquetDecimalsRecord:515->createParquetFile:477 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7422206Z [ERROR]   TestParquetDecimalScaling.testReadingNonRescalableDecimals:300->writeParquetDecimalsRecord:515->createParquetFile:477 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7423431Z [ERROR]   TestParquetDecimalScaling.testReadingNonRescalableDecimals:300->writeParquetDecimalsRecord:515->createParquetFile:477 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7424647Z [ERROR]   TestParquetDecimalScaling.testReadingNonRescalableDecimals:300->writeParquetDecimalsRecord:515->createParquetFile:477 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7425868Z [ERROR]   TestParquetDecimalScaling.testReadingNonRescalableDecimals:300->writeParquetDecimalsRecord:515->createParquetFile:477 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7427119Z [ERROR]   TestParquetDecimalScaling.testReadingNonRescalableDecimals:300->writeParquetDecimalsRecord:515->createParquetFile:477 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7428345Z [ERROR]   TestParquetDecimalScaling.testReadingNonRescalableDecimals:300->writeParquetDecimalsRecord:515->createParquetFile:477 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7429751Z [ERROR]   TestParquetDecimalScaling.testReadingNonRescalableDecimals:300->writeParquetDecimalsRecord:515->createParquetFile:477 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7430965Z [ERROR]   TestParquetDecimalScaling.testReadingNonRescalableDecimals:300->writeParquetDecimalsRecord:515->createParquetFile:477 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7432177Z [ERROR]   TestParquetDecimalScaling.testReadingNonRescalableDecimals:300->writeParquetDecimalsRecord:515->createParquetFile:477 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7433378Z [ERROR]   TestParquetDecimalScaling.testReadingNonRescalableDecimals:300->writeParquetDecimalsRecord:515->createParquetFile:477 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7434573Z [ERROR]   TestParquetDecimalScaling.testReadingNonRescalableDecimals:300->writeParquetDecimalsRecord:515->createParquetFile:477 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7435781Z [ERROR]   TestParquetDecimalScaling.testReadingNonRescalableDecimals:300->writeParquetDecimalsRecord:515->createParquetFile:477 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7437124Z [ERROR]   TestParquetDecimalScaling.testReadingNonRescalableDecimals:300->writeParquetDecimalsRecord:515->createParquetFile:477 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7438354Z [ERROR]   TestParquetDecimalScaling.testReadingNonRescalableDecimals:300->writeParquetDecimalsRecord:515->createParquetFile:477 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7439532Z [ERROR]   TestParquetDecimalScaling.testReadingRescaledDecimals:176->writeParquetDecimalsRecord:515->createParquetFile:477 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7440710Z [ERROR]   TestParquetDecimalScaling.testReadingRescaledDecimals:176->writeParquetDecimalsRecord:515->createParquetFile:477 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7441972Z [ERROR]   TestParquetDecimalScaling.testReadingRescaledDecimals:176->writeParquetDecimalsRecord:515->createParquetFile:477 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7443140Z [ERROR]   TestParquetDecimalScaling.testReadingRescaledDecimals:176->writeParquetDecimalsRecord:515->createParquetFile:477 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7444308Z [ERROR]   TestParquetDecimalScaling.testReadingRescaledDecimals:176->writeParquetDecimalsRecord:515->createParquetFile:477 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7445468Z [ERROR]   TestParquetDecimalScaling.testReadingRescaledDecimals:176->writeParquetDecimalsRecord:515->createParquetFile:477 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7446618Z [ERROR]   TestParquetDecimalScaling.testReadingRescaledDecimals:176->writeParquetDecimalsRecord:515->createParquetFile:477 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7447788Z [ERROR]   TestParquetDecimalScaling.testReadingRescaledDecimals:176->writeParquetDecimalsRecord:515->createParquetFile:477 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7448949Z [ERROR]   TestParquetDecimalScaling.testReadingRescaledDecimals:176->writeParquetDecimalsRecord:515->createParquetFile:477 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7450109Z [ERROR]   TestParquetDecimalScaling.testReadingRescaledDecimals:176->writeParquetDecimalsRecord:515->createParquetFile:477 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7451259Z [ERROR]   TestParquetDecimalScaling.testReadingRescaledDecimals:176->writeParquetDecimalsRecord:515->createParquetFile:477 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7452408Z [ERROR]   TestParquetDecimalScaling.testReadingRescaledDecimals:176->writeParquetDecimalsRecord:515->createParquetFile:477 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7453566Z [ERROR]   TestParquetDecimalScaling.testReadingRescaledDecimals:176->writeParquetDecimalsRecord:515->createParquetFile:477 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7454724Z [ERROR]   TestParquetDecimalScaling.testReadingRescaledDecimals:176->writeParquetDecimalsRecord:515->createParquetFile:477 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7455874Z [ERROR]   TestParquetDecimalScaling.testReadingRescaledDecimals:176->writeParquetDecimalsRecord:515->createParquetFile:477 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7457027Z [ERROR]   TestParquetDecimalScaling.testReadingRescaledDecimals:176->writeParquetDecimalsRecord:515->createParquetFile:477 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7458174Z [ERROR]   TestParquetDecimalScaling.testReadingRescaledDecimals:176->writeParquetDecimalsRecord:515->createParquetFile:477 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7459328Z [ERROR]   TestParquetDecimalScaling.testReadingRescaledDecimals:176->writeParquetDecimalsRecord:515->createParquetFile:477 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7460479Z [ERROR]   TestParquetDecimalScaling.testReadingRescaledDecimals:176->writeParquetDecimalsRecord:515->createParquetFile:477 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7461641Z [ERROR]   TestParquetDecimalScaling.testReadingRescaledDecimals:176->writeParquetDecimalsRecord:515->createParquetFile:477 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7462875Z [ERROR]   TestParquetDecimalScaling.testReadingRescaledDecimals:176->writeParquetDecimalsRecord:515->createParquetFile:477 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7464048Z [ERROR]   TestParquetDecimalScaling.testReadingRescaledDecimals:176->writeParquetDecimalsRecord:515->createParquetFile:477 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7465190Z [ERROR]   TestParquetDecimalScaling.testReadingRescaledDecimals:176->writeParquetDecimalsRecord:515->createParquetFile:477 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7466350Z [ERROR]   TestParquetDecimalScaling.testReadingRescaledDecimals:176->writeParquetDecimalsRecord:515->createParquetFile:477 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7467597Z [ERROR]   TestParquetDecimalScaling.testReadingRescaledDecimals:176->writeParquetDecimalsRecord:515->createParquetFile:477 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7468893Z [ERROR]   TestParquetDecimalScaling.testReadingRescaledDecimals:176->writeParquetDecimalsRecord:515->createParquetFile:477 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7470048Z [ERROR]   TestParquetDecimalScaling.testReadingRoundedDecimals:252->writeParquetDecimalsRecord:515->createParquetFile:477 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7471211Z [ERROR]   TestParquetDecimalScaling.testReadingRoundedDecimals:252->writeParquetDecimalsRecord:515->createParquetFile:477 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7472368Z [ERROR]   TestParquetDecimalScaling.testReadingRoundedDecimals:252->writeParquetDecimalsRecord:515->createParquetFile:477 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7473528Z [ERROR]   TestParquetDecimalScaling.testReadingRoundedDecimals:252->writeParquetDecimalsRecord:515->createParquetFile:477 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7474693Z [ERROR]   TestParquetDecimalScaling.testReadingRoundedDecimals:252->writeParquetDecimalsRecord:515->createParquetFile:477 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7475848Z [ERROR]   TestParquetDecimalScaling.testReadingRoundedDecimals:252->writeParquetDecimalsRecord:515->createParquetFile:477 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7477007Z [ERROR]   TestParquetDecimalScaling.testReadingRoundedDecimals:252->writeParquetDecimalsRecord:515->createParquetFile:477 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7478154Z [ERROR]   TestParquetDecimalScaling.testReadingRoundedDecimals:252->writeParquetDecimalsRecord:515->createParquetFile:477 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7479087Z [ERROR]   TestTimestamp.testTimestampBackedByInt64:87->testRoundTrip:120 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7479903Z [ERROR]   TestTimestamp.testTimestampBackedByInt64:87->testRoundTrip:120 » ParquetMemoryManagerRuntime
2023-01-13T16:02:13.7480733Z [ERROR]   TestTimestamp.testTimestampBackedByInt64:87->testRoundTrip:120 » ParquetMemoryManagerRuntime


findepi commented Jan 16, 2023

raunaqmorarka commented:

#15742 disables the Hadoop Parquet MemoryManager; it should fix this problem.
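For reference, one way that check can be neutralized through parquet-hadoop's standard configuration keys (a sketch; whether #15742 uses exactly this mechanism is not shown here): with `parquet.memory.min.chunk.size` (`ParquetOutputFormat.MIN_MEMORY_ALLOCATION`) set to 0, the MemoryManager still scales writer allocations down under pressure, but its minimum-size check can never fire. The helper below is hypothetical:

```java
import org.apache.hadoop.mapred.JobConf;

// Hypothetical helper: returns a JobConf in which parquet-hadoop's
// MemoryManager minimum-allocation check is disabled. With the minimum set
// to 0, a scaled-down writer allocation can never fall below it, so
// ParquetMemoryManagerRuntimeException is no longer thrown.
public final class DisableParquetMemoryManagerCheck
{
    private DisableParquetMemoryManagerCheck() {}

    public static JobConf withMinAllocationCheckDisabled(JobConf base)
    {
        JobConf conf = new JobConf(base);
        conf.setLong("parquet.memory.min.chunk.size", 0); // ParquetOutputFormat.MIN_MEMORY_ALLOCATION
        return conf;
    }
}
```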


findepi commented Jan 19, 2023

Thank you @raunaqmorarka for fixing this!
