What is the bug?
Spark application job doesn't stop when an exception is thrown from the underlying API layer (FlintSpark).
What is the expected behavior?
The Spark application job should exit. Instead of stopping the Spark context explicitly, FlintJob registers a shutdown hook, whereas FlintREPL cleans up resources explicitly.
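For context, a minimal sketch of the explicit-cleanup style (what FlintREPL does), assuming a plain SparkSession; the object name and SQL below are illustrative only, not the actual FlintJob/FlintREPL code:

```scala
import org.apache.spark.sql.SparkSession

object ExplicitCleanupSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("explicit-cleanup-sketch").getOrCreate()
    try {
      // Hypothetical stand-in for the statement that threw IllegalStateException.
      spark.sql("SELECT raise_error('Failed to recover Flint index')").collect()
    } finally {
      // Stop the Spark context even on the failure path so the application can
      // exit, rather than relying on a shutdown hook that only runs once the
      // JVM is already shutting down.
      spark.stop()
    }
  }
}
```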
Do you have any additional context?
Error log:
# Exception thrown out
24/02/29 06:49:52 ERROR JobOperator: Fail to run query, cause: Failed to recover Flint index
java.lang.IllegalStateException: Failed to recover Flint index
at org.opensearch.flint.spark.FlintSpark.recoverIndex(FlintSpark.scala:301)
at org.opensearch.flint.spark.sql.job.FlintSparkIndexJobAstBuilder.$anonfun$visitRecoverIndexJobStatement$1(FlintSparkIndexJobAstBuilder.scala:22)
at org.opensearch.flint.spark.sql.FlintSparkSqlCommand.run(FlintSparkSqlCommand.scala:27)
...
# FlintJob clean up resource
24/02/29 06:49:55 INFO JobOperator: shut down thread threadpool
# Shutdown hook not triggered. Job hung up for 29 mins
...
24/02/29 07:18:57 INFO BlockManagerInfo: Removed broadcast_0_piece0
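The long hang is consistent with general JVM behavior: a shutdown hook only runs once the JVM begins to exit, and the JVM will not exit on its own while any non-daemon thread is still alive. A minimal sketch of that failure mode (not Flint code; the thread pool is a hypothetical stand-in for leftover job resources):

```scala
import java.util.concurrent.Executors

object ShutdownHookHangSketch {
  def main(args: Array[String]): Unit = {
    // This hook is never invoked unless the JVM actually starts shutting down.
    sys.addShutdownHook(println("shutdown hook: cleaning up"))

    // A non-daemon thread pool keeps the process alive even after the main
    // thread dies with an exception.
    val pool = Executors.newFixedThreadPool(1)
    pool.submit(new Runnable {
      override def run(): Unit = Thread.sleep(Long.MaxValue)
    })

    // Without an explicit pool.shutdownNow() / spark.stop() on this failure
    // path, the JVM keeps running and the application appears hung.
    throw new IllegalStateException("Failed to recover Flint index")
  }
}
```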