[CH] allow all parquet write jobs (especially for hive ETL jobs) to use native parquet writer #1925

Closed · 4 of 7 tasks
binmahone opened this issue Jun 13, 2023 · 0 comments
Labels: enhancement (New feature or request), stale

binmahone commented Jun 13, 2023

Is your feature request related to a problem or challenge? Please describe what you are trying to do.

This is a follow-up to #1595 (a basic prototype). The aim of this issue is to complete the native parquet writer and make it the default choice.

  • support different types of insert, e.g. insert into directory, insert into table, DataFrame saveAsTable, CTAS, etc. (see the SQL sketch after this list)
  • support bucketed and partitioned Hive tables (note: partitioning/bucketing on complex types is not supported)
  • support complex types
  • support HDFS and local FS
  • support S3 and Azure FS (just needs testing)
  • support ANSI mode
  • support reporting write metrics
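
For reference, a minimal sketch of the insert paths listed above. Table names and output paths are hypothetical; this is plain Spark with Hive support, which the native writer is expected to intercept once enabled:

```scala
import org.apache.spark.sql.SparkSession

object InsertPathsDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("parquet-insert-paths")
      .enableHiveSupport()
      .getOrCreate()

    val df = spark.range(100).toDF("id")
    df.createOrReplaceTempView("src")

    // insert into directory
    spark.sql(
      "INSERT OVERWRITE DIRECTORY '/tmp/parquet_out' STORED AS PARQUET SELECT * FROM src")

    // insert into table
    spark.sql("CREATE TABLE IF NOT EXISTS t1 (id BIGINT) STORED AS PARQUET")
    spark.sql("INSERT INTO t1 SELECT * FROM src")

    // DataFrame saveAsTable
    df.write.format("parquet").mode("overwrite").saveAsTable("t2")

    // CTAS
    spark.sql("CREATE TABLE t3 STORED AS PARQUET AS SELECT * FROM src")

    spark.stop()
  }
}
```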

Supporting ORC is NOT in scope for this issue.

Describe the solution you'd like

  1. Add a new config, spark.gluten.sql.native.parquet.writer.enabled, to let Gluten use the native parquet writer where possible, e.g. (see the configuration sketch after this list):
    • spark.write.parquet(path)
    • spark.sql("insert into parquet_table")
    • spark.sql("insert into directory stored as parquet")
  2. By default, spark.gluten.sql.native.parquet.writer.enabled is false, because the native parquet writer is currently still incomplete.
  3. Copy and modify ParquetFileFormat, HiveFileFormat, FileFormatWriter, and FileFormatDataWriter. To minimize maintenance effort, all copied files are placed in the shim layer.
  4. The usage of spark.write.format("native_parquet") or spark.write.format("velox"), i.e. a special source name indicating native parquet, is no longer supported.
  5. Because of 3, we have to make sure the overridden Spark classes are loaded first (https://github.com/oap-project/gluten/blob/main/docs/developers/NewToGluten.md#how-to-prioritize-loading-gluten-jars-in-spark).
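
Below is a minimal usage sketch under the proposed design. The flag name comes from point 1 above; since the native writer is still incomplete and off by default, treat this as illustrative rather than a released API. Paths and table names are hypothetical:

```scala
import org.apache.spark.sql.SparkSession

object NativeParquetWriterDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("gluten-native-parquet-writer")
      // Proposed flag from this issue; false by default while the writer is incomplete.
      .config("spark.gluten.sql.native.parquet.writer.enabled", "true")
      .enableHiveSupport()
      .getOrCreate()

    val df = spark.range(1000).toDF("id")
    df.createOrReplaceTempView("src")

    // With the flag on, each of these writes would be routed to the native parquet writer.
    df.write.mode("overwrite").parquet("/tmp/native_parquet_demo")
    spark.sql("CREATE TABLE IF NOT EXISTS parquet_table (id BIGINT) STORED AS PARQUET")
    spark.sql("INSERT INTO parquet_table SELECT * FROM src")

    spark.stop()
  }
}
```

Per point 5, the jar containing the copied/overridden classes must be loaded before Spark's own, e.g. by placing the Gluten jar on spark.driver.extraClassPath and spark.executor.extraClassPath as described in the linked NewToGluten doc.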

