Coral is a SQL translation, analysis, and rewrite engine. It establishes a standard intermediate representation, Coral IR, which captures the semantics of relational algebraic expressions independently of any SQL dialect. Coral IR is defined in two forms: one at the abstract syntax tree (AST) layer and the other at the logical plan layer. The two forms are isomorphic and convertible to each other.
Coral exposes APIs for implementing conversions between SQL dialects and Coral IR in both directions. Currently, Coral supports converting HiveQL and Spark SQL to Coral IR, and converting Coral IR to HiveQL, Spark SQL, and Trino SQL. With multiple SQL dialects supported, Coral can be used to translate SQL statements and views defined in one dialect to equivalent ones in another dialect. It can also be used to interoperate between engines and SQL-powered data sources. For dialect conversion examples, see the modules coral-hive, coral-spark, and coral-trino.
Coral also exposes APIs for Coral IR rewrite and manipulation. This includes rewriting Coral IR expressions to produce semantically equivalent, but more performant expressions. For example, Coral automates incremental view maintenance by rewriting a view definition to an incremental one. See the module coral-incremental for more details. Other Coral rewrite applications include data governance and policy enforcement.
Coral can be used as a library in other projects, or as a service. See instructions below for more details.
- Join the discussion with the community on Slack!
Coral consists of the following modules:
- Coral-Hive: Converts HiveQL to Coral IR (can typically be used with Spark SQL as well).
- Coral-Trino: Converts Coral IR to Trino SQL. Converting Trino SQL to Coral IR is WIP.
- Coral-Spark: Converts Coral IR to Spark SQL (can typically be used with HiveQL as well).
- Coral-Dbt: Integrates Coral with DBT. It enables applying Coral transformations on DBT models.
- Coral-Incremental: Derives an incremental query from input SQL for incremental view maintenance.
- Coral-Schema: Derives the Avro schema of a view using the view's logical plan and the input Avro schemas of the base tables.
- Coral-Spark-Plan [WIP]: Converts Spark plan strings to equivalent logical plans.
- Coral-Visualization: Visualizes Coral SqlNode and RelNode trees and renders them to an output file.
- Coral-Service: Service that exposes REST APIs that allow users to interact with Coral (see Coral-as-a-Service for more details).
This project adheres to semantic versioning, where the format x.y.z represents major, minor, and patch version upgrades. Consider the potential changes required when integrating a different version of this project.
Major Version Upgrade
A major version upgrade represents a version change that introduces backward incompatibility by removal or renaming of classes.
Minor Version Upgrade
A minor version upgrade represents a version change that introduces backward incompatibility by removal or renaming of methods.
Please carefully review the release notes and documentation accompanying each version upgrade to understand the specific changes and the recommended steps for migration.
Clone the repository:
git clone https://github.com/linkedin/coral.git
Build:
Please note that this project requires Python 3 and Java 8 to run. Set JAVA_HOME
to the home of an appropriate version and then use:
./gradlew clean build
or set the org.gradle.java.home Gradle property to the Java home of an appropriate version, as below:
./gradlew -Dorg.gradle.java.home=/path/to/java/home clean build
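For example, assuming a JDK 8 installation (the path below is only illustrative; point it at your actual Java 8 home), the JAVA_HOME approach looks like:
export JAVA_HOME=/path/to/java8/home
./gradlew clean build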
The project is under active development and we welcome contributions of different forms. Please see the Contribution Agreement.
- Coral: A SQL translation, analysis, and rewrite engine for modern data lakehouses, LinkedIn Engineering Blog, 12/10/2020.
- Incremental View Maintenance with Coral, DBT, and Iceberg, Tech Talk, Iceberg Meetup, 5/11/2023.
- Coral & Transport UDFs: Building Blocks of a Postmodern Data Warehouse, Tech Talk, Facebook HQ, 2/28/2020.
- Transport: Towards Logical Independence Using Translatable Portable UDFs, LinkedIn Engineering Blog, 11/14/2018.
- Dali Views: Functions as a Service for Big Data, LinkedIn Engineering Blog, 11/9/2017.
Coral-as-a-Service, or simply Coral Service, is a service that exposes REST APIs that allow users to interact with Coral without necessarily coming from a compute engine. Currently, the service supports an API for query translation between different dialects and another for interacting with a local Hive Metastore to create example databases, tables, and views so they can be referenced in the translation API. The service can be used in two modes: remote Hive Metastore mode and local Hive Metastore mode. The remote mode uses an existing (already deployed) Hive Metastore to resolve tables and views, while the local one creates an empty embedded Hive Metastore so users can add their own table and view definitions.
/api/translations/translate: A POST API that takes a JSON request body containing the following parameters and returns the translated query:
- sourceLanguage: Input dialect (e.g., spark, trino, hive -- see below for supported inputs)
- targetLanguage: Output dialect (e.g., spark, trino, hive -- see below for supported outputs)
- query: SQL query to translate between the two dialects
- [Optional] rewriteType: Type of Coral IR rewrite (e.g., incremental)
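For example, a translation request that also applies the optional incremental rewrite could look like the following. This is a sketch assuming the service is running locally on its default port 8080 and that the db1.airport table from the example workflow below exists; the exact output depends on the rewrite applied.
curl --header "Content-Type: application/json" \
--request POST \
--data '{
"sourceLanguage":"hive",
"targetLanguage":"trino",
"query":"SELECT * FROM db1.airport",
"rewriteType":"incremental"
}' \
http://localhost:8080/api/translations/translate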
/api/catalog-ops/execute: A POST API that takes a SQL statement to create a database/table/view in the local metastore (note: this endpoint is only available with Coral Service in local metastore mode).
- Clone the Coral repo:
git clone https://github.com/linkedin/coral.git
- From the root directory of Coral, access the coral-service module:
cd coral-service
- Build:
../gradlew clean build
- Run (local metastore mode):
../gradlew bootRun --args='--spring.profiles.active=localMetastore'
To run in remote metastore mode instead:
- Add your kerberos client keytab file to coral-service/src/main/resources
- Appropriately replace all instances of SET_ME in coral-service/src/main/resources/hive.properties
- Run:
../gradlew bootRun
You can also specify a custom location of the hive.properties file through --hivePropsLocation, as follows:
./gradlew bootRun --args='--hivePropsLocation=/tmp/hive.properties'
Then you can interact with the service using your browser or the CLI.
After running ../gradlew bootRun --args='--spring.profiles.active=localMetastore' (for local metastore mode) or ../gradlew bootRun (for remote metastore mode) from the coral-service module, configure and start the UI.
Please note: The backend service runs on port 8080 (by default) and the web UI runs on port 3000 (by default).
- Create a .env.local file in the frontend project's root directory
- Copy over the template from .env.local.example into the new .env.local file
- Fill in the environment variable values in .env.local
- Install dependencies and start the development server:
npm install
npm run dev
Once compiled, the UI can be accessed from the browser at http://localhost:3000.
The UI provides three features:
- Creating a database, table, or view in the local metastore. This feature is only available with Coral Service in local metastore mode; it calls the /api/catalog-ops/execute API above. You can enter a SQL statement to create a database/table/view in the local metastore.
- Translating queries. This feature is available with Coral Service in both local and remote metastore modes; it calls the /api/translations/translate API above. You can enter a SQL query and specify the source and target language to use the Coral translation service. You can also specify the rewrite type to apply on the input query.
- Visualizing Coral IR. During translation, graphs of the Coral intermediate representations are generated and shown on screen, including any post-rewrite nodes.
To lint and format the UI code, run:
npm run lint:fix
npm run format
Apart from the UI above, you can also interact with the service using the CLI.
Example workflow for local metastore mode:
- Create a database called db1 in the local metastore using the /api/catalog-ops/execute endpoint:
curl --header "Content-Type: application/json" \
--request POST \
--data "CREATE DATABASE IF NOT EXISTS db1" \
http://localhost:8080/api/catalog-ops/execute
Creation successful
- Create a table called airport within db1 in the local metastore using the /api/catalog-ops/execute endpoint:
curl --header "Content-Type: application/json" \
--request POST \
--data "CREATE TABLE IF NOT EXISTS db1.airport(name string, country string, area_code int, code string, datepartition string)" \
http://localhost:8080/api/catalog-ops/execute
Creation successful
- Translate a query on db1.airport in the local metastore using the /api/translations/translate endpoint:
curl --header "Content-Type: application/json" \
--request POST \
--data '{
"sourceLanguage":"hive",
"targetLanguage":"trino",
"query":"SELECT * FROM db1.airport"
}' \
http://localhost:8080/api/translations/translate
The translation result is:
Original query in HiveQL:
SELECT * FROM db1.airport
Translated to Trino SQL:
SELECT "name", "country", "area_code", "code", "datepartition"
FROM "db1"."airport"
Currently supported translation paths:
- Hive to Trino
- Hive to Spark
- Trino to Spark
Note: During Trino to Spark translations, views referenced in queries are considered to be defined in HiveQL and hence cannot be used when translating a view from Trino. Currently, only referencing base tables is supported in Trino queries. This translation path is currently a POC and may need further improvements.
- Spark to Trino