
[ERROR] Spark job fails due to scala.NotImplementedError: cannot convert partitioning to native #359

Open
rouxiaomin opened this issue Jan 11, 2024 · 2 comments
Labels: bug (Something isn't working)

rouxiaomin commented Jan 11, 2024

Describe the bug
The Spark job fails because of this error. Other unimplemented operators only log a [WARN] message, but this one logs an [ERROR] and aborts the job.

Environment
Spark version: 3.3.3
Blaze version: v2.0.7
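
For context, a minimal, hypothetical reproduction sketch (not taken from the original job): any query that plans a shuffle with range partitioning, such as a global sort over several columns, should hit the same conversion path in Blaze's native shuffle exchange. The column names below are borrowed from the error message.

```scala
// Hypothetical reproduction sketch; a global sort plans a ShuffleExchange with
// RangePartitioning, the partitioning Blaze cannot yet convert to its native shuffle.
import org.apache.spark.sql.SparkSession

object RangePartitioningRepro {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("range-partitioning-repro").getOrCreate()
    import spark.implicits._

    val df = Seq((1, 2, 3), (4, 5, 6)).toDF("inDegree", "outDegree", "shortPath")

    // orderBy over multiple columns produces rangepartitioning(... ASC NULLS FIRST, ...)
    // in the physical plan, matching the partitioning shown in the stack trace.
    df.orderBy($"inDegree", $"outDegree", $"shortPath").show()

    spark.stop()
  }
}
```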

The full log is below:

24/01/10 18:33:09 ERROR ApplicationMaster: User class threw exception: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 268.0 failed 4 times, most recent failure: Lost task 0.3 in stage 268.0 (TID 20389) (yuntu-qiye-e-010058011010.hz.td executor 3): scala.NotImplementedError: cannot convert partitioning to native: rangepartitioning(inDegree#2307 ASC NULLS FIRST, outDegree#2321 ASC NULLS FIRST, shortPath#2291 ASC NULLS FIRST, 1)
at org.apache.spark.sql.execution.blaze.plan.NativeShuffleExchangeBase.$anonfun$prepareNativeShuffleDependency$2(NativeShuffleExchangeBase.scala:205)
at org.apache.spark.sql.execution.blaze.shuffle.BlazeShuffleWriterBase.nativeShuffleWrite(BlazeShuffleWriterBase.scala:71)
at org.apache.spark.sql.execution.blaze.plan.NativeShuffleExchangeExec$$anon$1.write(NativeShuffleExchangeExec.scala:154)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:52)
at org.apache.spark.scheduler.Task.run(Task.scala:136)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:548)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1504)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:551)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)

Driver stacktrace:
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 268.0 failed 4 times, most recent failure: Lost task 0.3 in stage 268.0 (TID 20389) (yuntu-qiye-e-010058011010.hz.td executor 3): scala.NotImplementedError: cannot convert partitioning to native: rangepartitioning(inDegree#2307 ASC NULLS FIRST, outDegree#2321 ASC NULLS FIRST, shortPath#2291 ASC NULLS FIRST, 1)
at org.apache.spark.sql.execution.blaze.plan.NativeShuffleExchangeBase.$anonfun$prepareNativeShuffleDependency$2(NativeShuffleExchangeBase.scala:205)
at org.apache.spark.sql.execution.blaze.shuffle.BlazeShuffleWriterBase.nativeShuffleWrite(BlazeShuffleWriterBase.scala:71)
at org.apache.spark.sql.execution.blaze.plan.NativeShuffleExchangeExec$$anon$1.write(NativeShuffleExchangeExec.scala:154)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:52)
at org.apache.spark.scheduler.Task.run(Task.scala:136)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:548)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1504)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:551)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)

rouxiaomin changed the title from "[ERROR] scala.NotImplementedError: cannot convert partitioning to native" to "[ERROR] Spark job fails due to scala.NotImplementedError: cannot convert partitioning to native" on Jan 11, 2024

richox (Collaborator) commented Jan 12, 2024

Range partitioning is not supported at the moment; we should fall back this exchange to Spark. It seems the fall-back logic is somehow not working.
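
For illustration, here is a minimal sketch of the kind of guard such a fall-back would need, assuming a hypothetical toNativeShuffle conversion function (this is not Blaze's actual code): an exchange whose output partitioning is RangePartitioning is simply left as the vanilla Spark ShuffleExchangeExec instead of being converted to the native plan.

```scala
// Minimal fall-back sketch; toNativeShuffle and the surrounding strategy are assumptions,
// not Blaze's real API. Only the partitioning check mirrors the reported failure.
import org.apache.spark.sql.catalyst.plans.physical.{HashPartitioning, RangePartitioning, SinglePartition}
import org.apache.spark.sql.execution.SparkPlan
import org.apache.spark.sql.execution.exchange.ShuffleExchangeExec

object NativeShuffleFallbackSketch {
  def convertExchange(
      exchange: ShuffleExchangeExec,
      toNativeShuffle: ShuffleExchangeExec => SparkPlan): SparkPlan =
    exchange.outputPartitioning match {
      case _: HashPartitioning  => toNativeShuffle(exchange) // supported by the native shuffle
      case SinglePartition      => toNativeShuffle(exchange) // supported by the native shuffle
      case _: RangePartitioning => exchange                  // unsupported: keep Spark's own exchange
      case _                    => exchange                  // anything else: also fall back
    }
}
```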

richox added the bug label on Jan 12, 2024

rouxiaomin (Author) commented Jan 15, 2024

> Range partitioning is not supported at the moment; we should fall back this exchange to Spark. It seems the fall-back logic is somehow not working.

Hi, could you please tell me when and in which version this bug will be fixed? @richox
