Add support for case sensitive identifiers #17

Open
martint opened this issue Jan 21, 2019 · 15 comments
Labels: enhancement (New feature or request), roadmap (Top level issues for major efforts in the project)

Comments

@martint
Member

martint commented Jan 21, 2019

<delimited identifier> ::=
  <double quote> <delimited identifier body> <double quote>

<delimited identifier body> ::=  <delimited identifier part>...
<delimited identifier part> ::=
    <nondoublequote character>
  | <doublequote symbol>

<Unicode delimited identifier> ::=
  U <ampersand> <double quote> <Unicode delimiter body> <double quote>
      <Unicode escape specifier>
<Unicode escape specifier> ::=
  [ UESCAPE <quote> <Unicode escape character> <quote> ]
<Unicode delimiter body> ::=
  <Unicode identifier part>...
<Unicode identifier part> ::=
    <delimited identifier part>
  | <Unicode escape value>
24) For every <identifier body> IB there is exactly one corresponding case-normal form CNF. CNF is an <identifier body> derived from IB as follows:
Let n be the number of characters in IB. For i ranging from 1 (one) to n, the i-th character Mi of IB is transliterated into the corresponding character 
or characters of CNF as follows:
Case:
   a) If Mi is a lower case character or a title case character for which an equivalent upper case sequence U is defined by Unicode, then let j be the
      number of characters in U; the next j characters of CNF are U.
   b) Otherwise, the next character of CNF is Mi.
25) The case-normal form of the <identifier body> of a <regular identifier> is used for purposes such as and including determination of identifier 
      equivalence, representation in the Definition and Information Schemas, and representation in diagnostics areas.

...

27) Two <regular identifier>s are equivalent if the case-normal forms of their <identifier body>s, considered as the repetition of a <character string literal> 
that specifies a <character set specification> of SQL_IDENTIFIER and an implementation-defined collation IDC that is sensitive to case, compare equally 
according to the comparison rules in Subclause 8.2, “<comparison predicate>”.

28) A <regular identifier> and a <delimited identifier> are equivalent if the case-normal form of the <identifier body> of the <regular identifier> and the 
<delimited identifier body> of the <delimited identifier> (with all occurrences of <quote> replaced by <quote symbol> and all occurrences of 
<doublequote symbol> replaced by <double quote>), considered as the repetition of a <character string literal> that specifies a <character set specification>
 of SQL_IDENTIFIER and IDC, compare equally according to the comparison rules in Subclause 8.2, “<comparison predicate>”.


29) Two <delimited identifier>s are equivalent if their <delimited identifier body>s, considered as the repetition of a <character string literal> that specifies
 a <character set specification> of SQL_IDENTIFIER and an implementation-defined collation that is sensitive to case, compare equally according to the
 comparison rules in Subclause 8.2, “<comparison predicate>”.

30) Two <Unicode delimited identifier>s are equivalent if their <Unicode delimiter body>s, considered as the repetition of a <character string literal> that
 specifies a <character set specification> of SQL_IDENTIFIER and an implementation-defined collation that is sensitive to case, compare equally according
 to the comparison rules in Subclause 8.2, “<comparison predicate>”.

31) A <Unicode delimited identifier> and a <delimited identifier> are equivalent if their <Unicode delimiter body> and <delimited identifier body>, 
respectively, each considered as the repetition of a <character string literal> that specifies a <character set specification> of SQL_IDENTIFIER and 
an implementation-defined collation that is sensitive to case, compare equally according to the comparison rules in Subclause 8.2, “<comparison predicate>”.

32) A <regular identifier> and a <Unicode delimited identifier> are equivalent if the case-normal form of the <identifier body> of the <regular identifier> 
and the <Unicode delimiter body> of the <Unicode delimited identifier> considered as the repetition of a <character string literal>, each specifying a
 <character set specification> of SQL_IDENTIFIER and an implementation-defined collation that is sensitive to case, compare equally according to the 
comparison rules in Subclause 8.2, “<comparison predicate>”.
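
To make the rules above concrete, here is a minimal Java sketch (illustrative only; it substitutes a plain case-sensitive String comparison for the implementation-defined collation IDC and approximates the Unicode transliteration with String#toUpperCase):

import java.util.Locale;

// Illustrative approximation of rules 24) and 27)-29) quoted above.
public final class SqlIdentifierRules
{
    private SqlIdentifierRules() {}

    // Rule 24: lower-case and title-case characters are replaced by their
    // upper-case equivalents (full Unicode mapping, e.g. "ß" becomes "SS");
    // all other characters are kept as-is.
    public static String caseNormalForm(String identifierBody)
    {
        return identifierBody.toUpperCase(Locale.ROOT);
    }

    // Rule 27: two regular identifiers are equivalent if their case-normal forms compare equally
    public static boolean regularEqualsRegular(String left, String right)
    {
        return caseNormalForm(left).equals(caseNormalForm(right));
    }

    // Rule 28: a regular identifier is equivalent to a delimited identifier if the
    // regular identifier's case-normal form equals the delimited identifier body
    public static boolean regularEqualsDelimited(String regular, String delimitedBody)
    {
        return caseNormalForm(regular).equals(delimitedBody);
    }

    // Rule 29: two delimited identifiers are equivalent only if their bodies
    // compare equally, i.e. the comparison is case-sensitive
    public static boolean delimitedEqualsDelimited(String leftBody, String rightBody)
    {
        return leftBody.equals(rightBody);
    }
}

Under these rules, the regular identifier abc is equivalent to the delimited identifier "ABC" (rule 28), while the delimited identifiers "abc" and "ABC" are not equivalent (rule 29).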

The approach and design are being captured here: https://github.com/prestosql/presto/wiki/Delimited-Identifiers

@martint martint added the enhancement New feature or request label Jan 21, 2019
@martint
Member Author

martint commented Mar 4, 2019

We need to decide on what (if anything) we want to do for backwards compatibility around this change. There are some points for consideration:

  • The engine needs to communicate to connectors whether names are delimited or non-delimited. This requires changing how table names, schema names, user names, roles, etc., are represented from a String to a wrapper class (Name?) that contains the value and whether it's delimited or not.
  • There are some methods in the connector SPIs that return List<String> (e.g., List<String> listSchemaNames(ConnectorSession session))
  • In the long term, we want the "good" method names to be used for the implementation that supports delimited/non-delimited names. We don't want to get stuck with all the APIs having some long-winded or non-obvious method names because the current APIs already took them for what will eventually be the legacy implementation. For example, SchemaTable has a String getCatalog() method. The new method would need to be named differently, such as getOriginalCatalog() if we wanted to preserve backward compatibility.
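
Purely for illustration (class and method names here are hypothetical, not the actual SPI), the wrapper floated in the first bullet might look roughly like this:

import java.util.List;
import static java.util.Objects.requireNonNull;

// Hypothetical wrapper carrying a name together with whether it was delimited
public record Name(String value, boolean delimited)
{
    public Name
    {
        requireNonNull(value, "value is null");
    }
}

// A method such as List<String> listSchemaNames(ConnectorSession session)
// would then return List<Name> instead (session parameter elided here)
interface MetadataSketch
{
    List<Name> listSchemaNames();
}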

So, some options to approach this change:

  1. Add new parallel "legacy" APIs by copying the existing APIs and mark the current ones as deprecated. Connectors that don't want to support the new API can just do a simple rename to use the newly introduced "legacy" versions. After some grace period, replace the current APIs with the variant that understands delimited names. Eventually, remove all the legacy APIs.
  2. Add new APIs and mark the current ones as deprecated. Connectors can migrate to new APIs at will. Eventually, the current APIs get removed.
  3. Forgo backward compatibility and update all APIs to support delimited names.

Option 1) satisfies all the considerations listed above, but requires connectors to make a change at some point, even to keep the current behavior. Updating the connector implementations to understand delimited names is a mostly mechanical affair. So, updating connectors to work against the newly introduced legacy APIs is a waste of effort.

Option 2) leaves us with the poor choice of names for the long term.

Option 3), well, doesn't give us backward compatibility. On the other hand, since the change is mostly mechanical, especially if the connector wants to keep existing semantics, it seems like this wouldn't be too much of a burden.

The more I think about it, the more I lean towards option 3. Thoughts? Any other alternatives I haven't listed?

@findepi
Member

findepi commented Mar 5, 2019

Option 2) leaves us with the poor choice of names for the long term.

Note that you can extend option 2 -- after the transition period, when old APIs are removed, you can rename new APIs to have better method names.

Updating the connector implementations to understand delimited names is a mostly mechanical affair.

@martint I think JDBC connectors require some thought here. Consider the SHOW TABLES FROM schema example.
If schema is delimited, this is actually easy: java.sql.DatabaseMetaData#getTables should do the trick.
If schema is non-delimited, you need to find the actual schema name (case-insensitively) before calling java.sql.DatabaseMetaData#getTables. (We actually have something like that implemented, and can share this.)
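
A minimal sketch of that lookup, assuming a hypothetical helper (this is not the implementation mentioned above):

import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

// Hypothetical helper: resolve the actual remote schema name(s) for an
// identifier before calling DatabaseMetaData#getTables with the exact name
public final class RemoteSchemaResolver
{
    private RemoteSchemaResolver() {}

    public static List<String> resolve(Connection connection, String schemaName, boolean delimited)
            throws SQLException
    {
        if (delimited) {
            // Delimited: use the name exactly as written
            return List.of(schemaName);
        }
        // Non-delimited: match remote schemas case-insensitively
        List<String> matches = new ArrayList<>();
        DatabaseMetaData metaData = connection.getMetaData();
        try (ResultSet schemas = metaData.getSchemas()) {
            while (schemas.next()) {
                String remoteSchema = schemas.getString("TABLE_SCHEM");
                if (remoteSchema.equalsIgnoreCase(schemaName)) {
                    matches.add(remoteSchema);
                }
            }
        }
        return matches;
    }
}

Note that a non-delimited name can match more than one remote schema (e.g. FOO and foo), which a connector would also need to decide how to handle.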

For this reason, I think we need some kind of transition period. Although such a transition might be implemented in each connector individually (under option 3), I am leaning towards option 1.

@martint
Member Author

martint commented Mar 5, 2019

One downside of option 1 is that during the transition period we'd have an API that's marked as deprecated and one that's considered legacy. Which one should new connector writers use? Clearly, not the deprecated one. But using the legacy one feels silly ("why can't I use a non-legacy one?") and forces them to adjust their usages later.

On the other hand, as you pointed out, with option 3 every connector has the choice of migrating to support delimited identifiers or preserving the current semantics (treating everything as non-delimited, as today) and dealing with it later.

@dain dain added the roadmap Top level issues for major efforts in the project label Mar 13, 2019
@findepi
Member

findepi commented Apr 23, 2019

@Praveen2112 @martint @kokosing

I was thinking about this today and came to the conclusion that there are these two concepts:

  1. object name (where object is a catalog, schema, table, view, role, function, user, etc.)
  2. identifier name in a query (can be quoted or not)

We need to distinguish them, because they are different.

  • table name abc is just the three lower-case letters; a table name cannot decide whether it should be matched case-insensitively or not
  • identifier name in a query can be delimited (quoted) or not; query's "abc" matches table name abc, and query's abc matches table names abc, aBC, etc.

So far #354 apparently uses one class (Name) for both these concepts (symmetrically) and uses Name#equals to match them.
I would rather see these concepts distinguished in the code.

Proposed representation

  • use String (or simple String wrapper class Name(String name)) to represent object names
  • use NameSelector(String name, boolean delimited) to represent identifier name in a query
  • have NameSelector#matches(String objectName) (or NameSelector#matches(Name objectName)) to match identifier name in a query against object name

Examples:

  • ConnectorMetadata#schemaExists will take NameSelector
  • ConnectorMetadata#listSchemaNames will return List<String> (or List<Name>)
  • ConnectorMetadata#listTables will take String (or Name) -- i.e. resolved, actual schema name
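
A rough sketch of the two concepts (illustrative only; non-delimited matching is simplified to a case-insensitive comparison):

import static java.util.Objects.requireNonNull;

// Name of an existing object (catalog, schema, table, ...): just a value
record Name(String value)
{
    Name
    {
        requireNonNull(value, "value is null");
    }
}

// Identifier as written in a query: additionally remembers whether it was quoted
record NameSelector(String name, boolean delimited)
{
    // Delimited ("abc") matches exactly; non-delimited (abc) matches ignoring case
    boolean matches(Name objectName)
    {
        return delimited
                ? objectName.value().equals(name)
                : objectName.value().equalsIgnoreCase(name);
    }
}

With this split, new NameSelector("abc", false).matches(new Name("aBC")) is true, while new NameSelector("abc", true).matches(new Name("aBC")) is not.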

@martint
Member Author

martint commented Apr 23, 2019

The spec talks about object names being identifiers. This is pervasive across the spec document:

Let T be the table defined by the <table definition> TD. Let TN be the <table name> simply contained in TD.

A table descriptor TDS is created that describes T. TDS includes:
a) The table name TN.

A base table descriptor describes a base table. In addition to the components of every table descriptor, a base table descriptor includes:
- The name of the base table.

The grantee is <authorization identifier> A.

The row type RT of the table T defined by the <table definition> is the set of pairs (<field name>, <data type>) where <field name> is the name of a column C of T and <data type> is the declared type of C.

However, it then describes the domain of the columns that represent object names in the INFORMATION_SCHEMA and DEFINITION_SCHEMA tables using:

SQL_IDENTIFIER domain

Define a domain that contains all valid <identifier body>s and <delimited identifier body>s.

CREATE DOMAIN SQL_IDENTIFIER AS CHARACTER VARYING (L) CHARACTER SET SQL_IDENTIFIER

This domain specifies all variable-length character values that conform to the rules for formation and representation of an SQL <identifier body> or an SQL <delimited identifier body>

And:

The representation of an <identifier> in the base tables and views of the Information Schema is by a character string corresponding to its <identifier body> (in the case of a <regular identifier>) or its <delimited identifier body> (in the case of a <delimited identifier>). Within this character string, any lower-case letter appearing in a <regular identifier> is replaced by the equivalent upper-case letter, and any <doublequote symbol> appearing in a <delimited identifier body> is replaced by a <double quote>.

As an example, this is how the SQL_IDENTIFIER domain is used in the definition of the CATALOG_NAME base table in DEFINITION_SCHEMA:

CREATE TABLE CATALOG_NAME (
    CATALOG_NAME INFORMATION_SCHEMA.SQL_IDENTIFIER,
    CONSTRAINT CATALOG_NAME_PRIMARY_KEY
    PRIMARY KEY ( CATALOG_NAME )
)

This means that when presented in the contents of INFORMATION_SCHEMA and DEFINITION_SCHEMA tables, identifiers lose any indicators of whether they were delimited or not when the objects were created. This is ok, though, since the upper-case normalization above guarantees that if you used such identifiers in a query, they'd match the corresponding object even if they are surrounded with quotes.

It also hints at the fact that only the body of the identifiers is stored, but it's not explicit about it. The following statement does nothing to clarify this:

Where an <actual identifier> has multiple forms that are equal according to the rules of Subclause 8.2, “<comparison predicate>”, in ISO/IEC 9075-2, the form stored is that encountered at definition time.

Unfortunately, it doesn't clarify whether the "form stored" includes the quotes, if present.

The book "SQL:1999: Understanding Relational Language Components" by Jim Melton, one of the editors of the SQL spec, says:

In effect, SQL:1999 changes all lowercase letters in regular identifiers to their uppercase-equivalent letters. This is especially important for identifiers that are stored as data values in the views of the Information Schema.
[...]
When a delimited identifier is stored into the views of the Information Schema, the double quotes are not stored, but all the other characters are, just as they appeared in the delimited identifier. Therefore, the regular identifier TITLES and the delimited identifier "TITLES" are stored identically and are therefore completely equivalent.

PostgreSQL behaves exactly like that, except that it normalizes to lower case before storing, which is the opposite of what the spec describes.

So, I think you're partly right in that, in theory, all we need when connectors present a name to the engine is a string. However, there are some things to consider:

  • NameSelector may not be an appropriate concept for all usages. When creating a table, how do we convey the table and column names to the connector? We could use a string and normalize the name according to the rules in the SQL spec before providing it to connectors, but that would break connectors that don't play by the SQL rules. We'd either have to 1) add "delimited" to the name, which brings us back to square one, or 2) add yet another entity to represent a "SQL identifier".
  • When connectors return a name to the engine, do they first need to normalize it according to SQL rules? This could be impossible for connectors like PostgreSQL, which normalizes names to the opposite case.

@findepi
Member

findepi commented Apr 23, 2019

@martint thanks for looking into this.

When connectors return a name to the engine, do they first need to normalize it according to SQL rules?

For existing objects, names should be returned as-is.
If the remote storage is case-insensitive (like Hive is, right?), we could add some normalization here.
But for JDBC connectors we generally shouldn't.
Table names are case-sensitive in e.g. Postgres, MySQL, SQL Server or Oracle. It is only "less convenient" (requires "-delimiting) to create tables with a case different from the default.

NameSelector may not be an appropriate concept for all usages. When creating a table, how do we convey the table and column names to the connector?

Good point. When creating a table, we want to pass the "identifier name from a query" to the connector.
We can pass the value normalized to lower (or upper) case unless it was "-delimited.
(Even if we normalize to upper case when talking to the Postgres connector, it will still normalize to lower case because the name is not delimited.)
Plus, we need to retain the information of whether it was delimited.

My envisioned NameSelector(String name, boolean delimited) fits here perfectly... except for its name.

So:

  • let's call this concept SqlIdentifier(String name, boolean delimited)
  • let's use it when looking for a table, or creating a new table
  • let's use a case-sensitive String (or Name, or ObjectName) for names of existing objects
  • let's use it when listing objects (e.g. listing schemas, listing tables)

@Praveen2112
Member

@findepi Thanks for your insights.

let's call this concept SqlIdentifier(String name, boolean delimited)

But currently the Name object also captures the same, right? I.e., it maintains the rawName and the information about whether it is delimited or not.

@kokosing
Member

kokosing commented Sep 2, 2020

Just a reminder that while fixing this issue we should also make sure that values returned from the system catalog and information_schema have proper data, and that predicate pushdown works for them. For example see: #4965

yuuteng added a commit to yuuteng/trino that referenced this issue Jun 30, 2023
yuuteng added a commit to yuuteng/trino that referenced this issue Dec 4, 2023
# This is the 1st commit message:

Add Snowflake JDBC Connector

# This is the commit message #2:

Update trino snapshot version to 372

# This is the commit message trinodb#3:

Update format of the doc of snowflake

# This is the commit message trinodb#4:

Update trino jdbc library import

# This is the commit message trinodb#5:

Fix date formatter from yyyy to uuuu

# This is the commit message trinodb#6:

Fix date test case

# This is the commit message trinodb#7:

Remove defunct property allow-drop-table

# This is the commit message trinodb#8:

Update trino version to 374

# This is the commit message trinodb#9:

Update snowflake config to adapt 374

# This is the commit message trinodb#10:

Update the range of the test of Date type

# This is the commit message trinodb#11:

Update to version 375

# This is the commit message trinodb#12:

Fix snowflake after updating to 375

# This is the commit message trinodb#13:

Update to 381

# This is the commit message trinodb#14:

Fix mvn pom import

# This is the commit message trinodb#15:

Format snowflake.rst

# This is the commit message trinodb#16:

Reorderd Data tests in type mapping

# This is the commit message trinodb#17:

Update function code

# This is the commit message trinodb#18:

Add product test

# This is the commit message trinodb#19:

Rename product test tablename

# This is the commit message trinodb#20:

Add Env, Suite and properties of Snowflake for production test

# This is the commit message trinodb#21:

Add trinoCreateAndInsert()

# This is the commit message trinodb#22:

Refactor snowflake from single node to multi node

# This is the commit message trinodb#23:

Pass product tests

# This is the commit message trinodb#24:

Removed snowflake.properties in trino server dev

# This is the commit message trinodb#25:

Resolved issues 19 05 2022 and fixed tests

# This is the commit message trinodb#26:

Remove Types.VARBINARY

# This is the commit message trinodb#27:

Add private static SliceWriteFunction charWriteFunction

# This is the commit message trinodb#28:

Update test case

# This is the commit message trinodb#29:

Update plugin/trino-snowflake/src/main/java/io/trino/plugin/snowflake/SnowflakeClient.java

Co-authored-by: Yuya Ebihara <ebyhry@gmail.com>
# This is the commit message trinodb#30:

Update docs/src/main/sphinx/connector/snowflake.rst

Co-authored-by: Yuya Ebihara <ebyhry@gmail.com>
# This is the commit message trinodb#31:

Update plugin/trino-snowflake/pom.xml

Co-authored-by: Yuya Ebihara <ebyhry@gmail.com>
# This is the commit message trinodb#32:

Update plugin/trino-snowflake/src/main/java/io/trino/plugin/snowflake/SnowflakeClient.java

Co-authored-by: Yuya Ebihara <ebyhry@gmail.com>
# This is the commit message trinodb#33:

Resolved review open issues

# This is the commit message trinodb#34:

Disabled JDBC_TREAT_DECIMAL_AS_INT and fixed test case

# This is the commit message trinodb#35:

Updated properties file

# This is the commit message trinodb#36:

Updated properties files

# This is the commit message trinodb#37:

Renamed properties in Testing

# This is the commit message trinodb#38:

Revert "Renamed properties in Testing"

This reverts commit 82f9eb3f3811e8d90a482f5359e98e7c729afa17.

# This is the commit message trinodb#39:

Renamed properties and fixed tests

# This is the commit message trinodb#40:

Update the way to pass ENV values for production test

# This is the commit message trinodb#41:

Update trino version to 388

# This is the commit message trinodb#42:

Update Trino version 391

# This is the commit message trinodb#43:

Update trino version to 394

# This is the commit message trinodb#44:

Update to 395

# This is the commit message trinodb#45:

Update to 411

# This is the commit message trinodb#46:

Update and fix errors

# This is the commit message trinodb#47:

Build successfully with 411

# This is the commit message trinodb#48:

Adding Bloomberg Snowflake connector.

Fully tested with Trino 406. Untested with 410.

Fix version number problem with build.

Adding varchar type to mapping Snowflake.

Adding --add-opens to enable support for Apache Arrow via shared memory buffers.

Fixing some tests.

Fix TestSnowflakeConnectorTest.

TODO: testDataMappingSmokeTest: time and timestamp
testSetColumnTypes: time and timestamp

Fix type mapper

Fix testconnector

Remove unused argument from DeltaLakeMetastore.getTableLocation

Extract removeS3Directory into DeltaLakeTestUtils

Additionally, replace toUnmodifiableList with toImmutableList.

Extract method in HiveMetastoreBackedDeltaLakeMetastore

Don't install flush_metadata_cache procedure in Iceberg

The procedure was unusable because Iceberg connector always
disables caching metastore.

Flush transaction log cache in Delta flush_metadata_cache procedure

Co-Authored-By: Marius Grama <findinpath@gmail.com>

Remove extra digest copy during Digest merge

Reduce number of TDigestHistogram allocations

On coordinator operators stats from all tasks
will be merged. It does make sense to perform
merging as bulk operation.

Tune JDBC fetch-size automatically based on column count

PostgreSQL, Redshift and Oracle connectors had hard-coded fetch-size
value of 1000. The value was found not to be optimal when server is far
(high latency) or when number of columns selected is low. This commit
improves in the latter case by picking fetch size automatically based on
number of columns projected. After the change, the fetch size will be
automatically picked in the range 1000 to 100,000.

Remove redundant LongDoubleState interface

Fix import of wrong Preconditions class

Test Iceberg cost-based plans with small files on TPC-DS

Test against unpartitioned small Parquet files

Upgrade Pinot libraries to 0.12.1

Simplify MaxDataSizeForStats and SumDataSizeForStats

Block#getEstimatedDataSizeForStats is well defined for null positions.
We can use this to replace NullableLongState with LongState.

Fix formatting and simplify condition in HiveMetadata

Skip listing Glue tables with invalid column types

Exclude all synthetic columns in applyProjection validation

DefaultJdbcMetadata#applyProjection already excludes the delete row id
from validation. Same is now done for the merge row id column as well.

Fail fast on unexpected case

Fail, instead of returning, on an impossible case that was supposed to
be handled earlier in a method.

Remove redundant accessor calls

Leverage information already available within the method.

Remove Iceberg, Delta $data system table

It was not intentional to expose a table's data as `a_table$data`
"system" table. This commit removes support for these tables.

Encapsulate table name class constructor

Encapsulate constructors of IcebergTableName and DeltaLakeTableName. The
classes are used primarily as utility classes. Constructor encapsulation
is preparation to convert them into proper utility classes.

Remove table name/type `from` parsing method

After recent changes it is used only in tests. This also converts
IcebergTableName and DeltaLakeTableName into utility classes.

Add more mirrors

Future-proof against Ubuntu codename update

This approach works as long as the following assumptions hold:
- the location of the `/etc/os-release` file does NOT change
- the name of the UBUNTU_CODENAME environment variable does NOT change
- `eclipse-temurin` still uses Ubuntu as its base

Bring Docker build.sh help message in-line with reality

Add retries in TestImpersonation

Decrease number of old JDBC drivers tested

Test parquet column mapping using name based and index based mapping

Change t_char in AbstractTestHiveFileFormats to unpartitioned column

Set thread name for DDL queries

Before the change, the DDL tasks were being executed with a thread name
of `dispatch-query-%d`.

Fix thread names in TestEventDrivenTaskSource

Remove unused constant in environment definition

Capture column names in LocalQueryRunner

Remove unnecessary projected call with assertThat in Redshift connector

Introduce new methods projected & exceptColumns taking string varargs

For better readability, replace projected(int... columns) with
projected(String... columnNamesToInclude) and introduce
exceptColumns(String... columnNamesToExclude) leveraging
MaterializedResult.getColumnNames

Access fields directly in DeltaLakeTableHandle.withProjectedColumns

This makes it consistent with other methods like that, in particular
with `DeltaLakeTableHandle.forOptimize`.

Provide schema name in exception when Delta table lacks metadata

Test Delta connector behavior for a corrupted table

Handle corrupted Delta Lake tables with explicit table handle

Previously, a corrupted table handle was represented with a
`DeltaLakeTableHandle` missing a `MetadataEntry`.  When drop support for
corrupted tables was implemented, a connector could have only one table
handle class.  This commit improves the distinction by introducing a
dedicated table handle class to represent corrupted tables. As a
necessity, this class is explicitly handled in multiple
`DeltaLakeMetadata` methods. This sets a viable example to follow for
implementing drop support for corrupted Iceberg tables as a follow-up.

Note: `testGetInsertLayoutTableNotFound` test was removed, instead of
being updated, since `ConnectorMetadata.getInsertLayout` cannot be
reached for a corrupted table, as getting column list will fail earlier.

Remove unnecessary override method in phoenix metadata

Make queryModifier final

Refactor testRenameTableToLongTableName to remove Iceberg override

Refactor `testRenameTableToLongTableName` test so that Iceberg tests do
not have to override the test method.

Improve Iceberg test testCreateTableLikeForFormat code formatting

Improve retries in AbstractTestHiveViews

Previously we were retrying on "Execution Error, return code 2 from
org.apache.hadoop.hive.ql.exec.mr.MapRedTask". As CI has shown, the return
code may vary; sometimes it is e.g. 1.

In the meantime we introduced broader retry patterns for Hive query
failures, so let's use these.

Fix TestIcebergInsert.testIcebergConcurrentInsert timeout on CI

The test time seems dominated by `finishInsert`. Since
bf04a72, `finishInsert` is slower, as we
commit twice (first committing data, then statistics).

Reduce lock contention in SqlStage

Threads in the application were blocked on locks for a total of 4 h 19 min before this patch and 2 h 59 min after, in a
concurrent benchmark with 40 nodes and 64 queries in parallel.

Reduce lock contention in Query

Use keySet in execution-query-purger

Make setup of bucketed tables in HiveQueryRunner optional

Bucketed tables are unnecessary in many tests

Remove superfluous accumulator add function

 - Enhance the test case as well

Add test for partitioned by non-lowercase column in Delta

Fix failure when partition column contains uppercase in Iceberg

Remove duplicate getParquetType method from ParquetPageSourceFactory

Remove redundant boolean state from LongDecimalWithOverflowAndLongState

Reduce synchronization on PipelinedStageExecution

Removes synchronization from beginScheduling() and
transitionToSchedulingSplits(), both of which only perform state machine
updates (if necessary) and do not require accessing synchronized state.

Add NullablePosition to SumDataSizeForStats

Avoids an extra null check as block.getEstimatedDataSizeForStats
will also check for null

Remove unused HiveHudiPartitionInfo.getTable method

Co-Authored-By: Will Zhang <56012782+willzgw@users.noreply.github.com>

Support DELETE statement in Ignite connector

Add an example JDBC connector plugin

Fix typo

Split createDriverRunner method for partitioned and unpartitioned cases

Use checkArgument formatting in StatementUtils

Avoids an eager and unnecessary String.format call by letting
checkArgument perform the required formatting only when the check
fails.
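
A minimal sketch of the pattern (the method and message below are hypothetical, not the actual StatementUtils code), using Guava's Preconditions.checkArgument:

import static com.google.common.base.Preconditions.checkArgument;

class CheckArgumentExample
{
    static void checkSupported(boolean supported, Object statement)
    {
        // Before: String.format ran on every call, even when the check passed.
        // checkArgument(supported, String.format("Unsupported statement type: %s", statement));

        // After: checkArgument formats the template only when the check fails.
        checkArgument(supported, "Unsupported statement type: %s", statement);
    }
}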

Avoid String.format in ExpressionFormatter

Also replaces unnecessary usages of Guava's Joiner in favor of
Collectors.joining where appropriate.

Replace String.format with String concat

Replaces simple String.format usages in non-exceptional code paths
with simple string concatenations where applicable.
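
For illustration (hypothetical names, not the actual call sites), the change is of this shape:

class StringConcatExample
{
    static String qualifiedName(String schemaName, String tableName)
    {
        // Before: return String.format("%s.%s", schemaName, tableName);
        // After: plain concatenation is cheaper on a hot, non-exceptional path.
        return schemaName + "." + tableName;
    }
}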

Fix bad parameter count in code generator when column uses two slots

Document hive.max-outstanding-splits-size property

Add missing groups to testMigrateHiveBucketedOnMultipleColumns

Remove unused updatedColumns from IcebergTableHandle

Remove OkHttp as a runtime dependency for the engine

Remove unused dependencies from discovery-server

These are not used in embedded mode.

Update to ASM 9.4

Update object storage definition in glossary

Improve size accounting of SingleLongDecimalWithOverflowState

Make classes in LongDecimalWithOverflowAndLongStateFactory private final

Improve size accounting of SingleLongDecimalWithOverflowAndLongState

Simplify DecimalAverageAggregation#inputShortDecimal

Remove unused NullableBooleanState

Fix Kerberos ticket refresh

The Hadoop UGI class handles ticket refresh only if the Subject is not
provided externally. For an external Subject, UGI expects the refresh to be
handled by the creator of the Subject, which in our case we did not do.

Because of this, before this change any Trino query which ran longer than
the ticket_lifetime failed with errors like

    GSS initiate failed [Caused by GSSException: No valid credentials
    provided (Mechanism level: Failed to find any Kerberos tgt)].

In Hadoop code the UGI instance also gets re-used in some places (e.g.
DFSClient), which means we cannot just create a new UGI with refreshed
credentials and return it, since other parts of the code would keep using
the old UGI with expired credentials. So the fix is to create a new UGI,
extract the credentials from it, and update the existing UGI's credentials
with them, so that all users of the existing UGI also observe the new,
valid credentials.
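
A rough sketch of the described approach using the Hadoop UGI API (the principal/keytab parameters are placeholders; the actual Trino fix hooks into its existing Hadoop authentication code rather than a standalone helper like this):

import java.io.IOException;
import org.apache.hadoop.security.Credentials;
import org.apache.hadoop.security.UserGroupInformation;

class KerberosCredentialRefresher
{
    // Sketch: refresh credentials without replacing the shared UGI instance.
    static void refreshCredentials(UserGroupInformation existingUgi, String principal, String keytab)
            throws IOException
    {
        // Log in again to obtain a UGI holding fresh Kerberos credentials ...
        UserGroupInformation freshUgi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(principal, keytab);
        Credentials freshCredentials = freshUgi.getCredentials();
        // ... and copy them into the existing UGI, so every holder of the old
        // instance (e.g. DFSClient) observes the new, valid credentials.
        existingUgi.addCredentials(freshCredentials);
    }
}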

Extend the list of status codes retried in client

This commit extends the list of status codes on which the client retries to include:
 * HTTP_BAD_GATEWAY (502)
 * HTTP_UNAVAILABLE (503)
 * HTTP_GATEWAY_TIMEOUT (504)

Allow listening for single container events

Make environment listener always required

Use enhanced switch

Remove redundant local variable

Remove redundant throws

Use StringBuilder instead of string concatenation

Fix typo in hive parquet doc

Add test for trailing space in location in hive metadata

Remove unnecessary and brittle tests

Column names coming out of the query are not necessarily
related to the column names in the table function. These
tests are testing behavior that is not necessarily expected
or guaranteed, so they are brittle and can break at any time.

A couple of reasons why it's problematic:
* Trino doesn't (yet) follow standard SQL identifier semantics. The
  column names might change between the output of the table function
  and the query output
* At the query output all columns have names. Within the query they
  might not. A table function can produce an anonymous column, but
  the test will see "_col0".

Upgrade Confluent version to 7.3.1

Updates transitive dependencies for Avro and ZooKeeper.
Wire 4.x is required for Confluent 7.3.1 and is updated
in the modules that need it, while Wire stays at 3.x for
the remaining modules.

Fix potential Kerberos failure with SymlinkTextInputFormat

Add benchmark for array filter object

Optimize filter function performance with copyPositions

Before the change:
Benchmark                             (name)  Mode  Cnt   Score   Error  Units
BenchmarkArrayFilter.benchmark        filter  avgt   20  22.543 ± 0.979  ns/op
BenchmarkArrayFilter.benchmarkObject  filter  avgt   20  42.045 ± 2.088  ns/op

After the change:
Benchmark                             (name)  Mode  Cnt   Score   Error  Units
BenchmarkArrayFilter.benchmark        filter  avgt   20  13.327 ± 0.359  ns/op
BenchmarkArrayFilter.benchmarkObject  filter  avgt   20  34.443 ± 1.943  ns/op

Add quantile_at_value function

Co-authored-by: Peizhen Guo <pguo@fb.com>

Use parent partitioning for aggregations

If the parent partitioning provides enough parallelism
and is a subset of the current node's preferred
partitioning (the grouping keys for the aggregation), we
can use the parent partitioning to skip the data shuffle
required by the parent.

Extract MappedPageSource and MappedRecordSet to toolkit

Introduce BaseJdbcConnectorTableHandle

Extract methods in BaseJdbcClient

These methods can be reused for the Procedures PTF:
- Extract building columns from ResultSetMetaData into a separate method.
- Extract creating a connection based on the session.

Add table function to execute stored procedure in SQLServer

Use URI path for Glue location in tests

Glue started throwing "InvalidInputException: One or more inputs failed validation"
when getting a table if the table location doesn't have a "file:" prefix
on the local file system.

Test trino-main with JDK 20

Clarify comment in BigQuery ReadSessionCreator

Consistently handle table types across BigQuery connector

This also fixes a bug where createEmptyProjection failed for non-TABLE
and non-VIEW even though those could be supported.

Combine some redundant tests in BigQuery

Remove duplicate test case

Disable CSV quoting when quote character is zero

Disable CSV escaping when escape character is zero

Fix race condition in the hive table stats cache

The putIfAbsent method is not implemented in EvictableCache because of a
race condition with invalidation. To avoid that race condition, we use an
AtomicReference that in some cases can be thrown away, but it keeps the
cached value fresh even if invalidation happens during the value load.
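
A simplified sketch of the pattern (using a plain Guava cache for illustration, not the actual EvictableCache code): the cache stores a mutable holder, so even if invalidation throws the holder away while a load is in flight, the freshly loaded value is still visible to callers that already obtained that holder.

import com.google.common.cache.Cache;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Function;

class AtomicReferenceCaching
{
    static <K, V> V getCached(Cache<K, AtomicReference<V>> cache, K key, Function<K, V> loader)
            throws ExecutionException
    {
        // The holder may already be evicted from the cache by the time the
        // load finishes; the loaded value is still published through it.
        AtomicReference<V> holder = cache.get(key, AtomicReference::new);
        V value = holder.get();
        if (value == null) {
            value = loader.apply(key);
            holder.compareAndSet(null, value);
            value = holder.get();
        }
        return value;
    }
}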

Provide convenience overload to get MV storage table in test

Setup global state before test methods

`storageSchemaName` is defined at the class level, so the storage schema
should be created in `@BeforeClass`, not within a test.

Allow Iceberg MV with partitioning transforms on timestamptz

Allow creation of Iceberg Materialized Views partitioned with a
temporal partitioning function on a `timestamp with time zone` column.

In MVs, the `timestamp with time zone` columns are generally stored as
text to preserve time zone information. However, this prevents use of
temporal partitioning functions on these columns. The commit keeps
`timestamp with time zone` columns with partitioning applied on them as
`timestamp with time zone` in the storage table.

An obvious downside to this approach is that the time zone information
is erased, and it is not known whether this aligns with the user's intention or
not. A better solution would be to introduce a point-in-time type
(trinodb#2273) to distinguish the
cases where time zone information is important (like Java's
`ZonedDateTime`) from cases where only the point in time matters (like
Java's `Instant`).

Remove backticks from backtick-unrelated test cases

They were probably copied over from the preceding backtick test case.

Reuse TrackingFileSystemFactory between connectors

Move TrackingFileSystemFactory out of Iceberg tests to allow reuse e.g.
with Delta Lake tests.

Refactor Delta file operations tests to encapsulate checked code

Pair the tested operation and the expected filesystem access counts in a
single assertion call, similar to how it's done in
`TestIcebergMetadataFileOperations`.

Convert TestIcebergMetadataFileOperations helper to record

Add Trino 411 release notes

[maven-release-plugin] prepare release 411

[maven-release-plugin] prepare for next development iteration

Enhance test for managed and external delta table location validation

The purpose of deleting the transaction log directory is solely to confirm
that when the DROP TABLE command is used, the table location is also removed
when the table is a MANAGED TABLE.

Improve naming of methods and fields to match Trino concepts

Fix incorrect result when hidden directories exist in migrate procedure

Add support for ADD COLUMN in Ignite connector

Use a more specific name for all connectors smoke test suite

Document the all connectors smoke test suite

Verify pass-through specifications in tests

Add more detailed check for TableFunctionProcessorNode.passThroughSpecifications
in TableFunctionProcessorMatcher.

Prune unreferenced pass-through columns of table function

Verify required columns in tests

Add check for TableFunctionProcessorNode.requiredSymbols
in TableFunctionProcessorMatcher.

Verify hashSymbol in tests

Add check for TableFunctionProcessorNode.hashSymbol
in TableFunctionProcessorMatcher.

Prune unreferenced columns of table function source

Test table function column pruning in query plan

Remove table function with empty source

Adds an optimizer rule to remove a TableFunctionProcessorNode whose
source is an empty relation, based on the "prune when empty"
property.

Test pruning of redundant table function in query plan

Test table functions with column and node pruning optimizations

Fix typo in TestDeltaLakePerTransactionMetastoreCache

Extract assertMetastoreInvocations method

Use CountingAccessHiveMetastore in TestDeltaLakePerTransactionMetastoreCache

Make cleanup methods alwaysRun

This attribute says that the after-method will get executed even if the
previously executed methods failed or were skipped. Without it, the
after-method also gets skipped for skipped test methods, so if the tests
were skipped for some reason, the cleanup won't run. This attribute ensures
that the cleanup runs even in this case.

Failure to run cleanup may cause secondary effects, especially from our
resource leak detector; failures there will in turn mask the errors
which caused the tests to be skipped in the first place.

Add a check to enforce alwaysRun = true on test after-methods

See the previous commit for details. This check will enforce that the
`alwaysRun = true` is present.

Remove redundant toString() call

Remove use of deprecated isEqualToComparingFieldByFieldRecursively

Use usingRecursiveComparison instead of the deprecated isEqualToComparingFieldByFieldRecursively

Remove unused helper methods in delta-lake connector

Add an explicit config to define standardSplitSizeInBytes in FTE

Implement adaptive task sizing for arbitrary distribution in FTE

Improve task sizing for hash distribution in FTE

Round up targetPartitionSizeInBytes to a multiple of minTargetPartitionSizeInBytes

For adaptive task sizing in ArbitraryDistributionSplitAssigner
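
The rounding itself is simple integer arithmetic; a minimal sketch with shortened, illustrative names (for example, a 130 MB target with a 64 MB minimum rounds up to 192 MB):

class RoundUp
{
    static long roundUpToMultiple(long targetSizeInBytes, long minTargetSizeInBytes)
    {
        long multiples = (targetSizeInBytes + minTargetSizeInBytes - 1) / minTargetSizeInBytes;
        return multiples * minTargetSizeInBytes;
    }
}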

Adjust retry policy for dropping delta tables backed by AWS Glue on Databricks

Fix output rendering issue in docs

Alphabetize glossary entries

Add use_cost_based_partitioning

Use use_cost_based_partitioning instead of use_exact_partitioning to
control the cost based optimization to prefer parent partitioning.
The motivation is to be able to disable the optimization if the NDV
statistics are overestimated and the optimization would hurt parallelism.

Provide injection mechanism for the file system factory

Reorder instance and static fields in FlushMetadataCacheProcedure

Flush extended statistics in Delta's flush_metadata_cache()

Clean up Delta code a bit

Test Delta Lake query file system accesses

Ensure that TestingHydraIdentityProvider is releasing resources

Update maven to 3.9.1

Expose rule stats in QueryStats

Expose optimizer rule statistics per query in QueryInfo JSON. The number of rules exposed can be
adjusted using the `query.reported-rule-stats-limit` configuration parameter.
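
For example, setting `query.reported-rule-stats-limit=20` in the coordinator's config.properties would cap the number of reported rules at 20 (the value here is only illustrative).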

Cleanup BigintGroupByHash instanceof checks

Include QueryId in assertDistrubutedQuery failure message

Remove TestClickHouseConnectorTest

There are 4 smoke tests and 2 connector tests.
Remove TestClickHouseConnectorTest as a redundant test.

Remove base class for ClickHouse connector test

Run smoke test for BigQuery arrow serialization

We want to verify SELECT behavior for Arrow serialization in BigQuery.
Smoke test and the existing type mapping test should be enough.

Make construction parameters final

Support arithmetic predicate pushdown for Phoenix

Make LeafTableFunctionOperator handle EmptySplit correctly

Remove CatalogHandle from TableFunctionProcessorNode

Exclude snakeyaml from elasticsearch dependencies

It's a transitive dependency of elasticsearch-x-content,
which we use in ElasticsearchLoader to load TPC-H data
into Elasticsearch with JSON encoding. YAML support is not
needed at all.

Pass partition values as optional to DeltaLakePageSource

The partition values list is filled only when the row ID column is
projected, so it is conditional information. When the row ID is not
present, pass it as an empty optional, rather than a list that happens to
be empty.

Add cleaner FixedPageSource constructors

Previously, the only constructor would take `Iterable`, which is nice,
but it would also materialize it twice (once in the constructor to
calculate memory usage).

The commit adds a constructor taking a `List` (so double iteration is not
a problem) and one taking `Iterator` and delivering on the promise to
iterate once.

The old constructor is kept but deprecated; apparently all usages already use
the new, list-based constructor.

Project a data column in MinIO access test

Read a data column to ensure the data file gets read.
This increases the number of accesses to the file, because both the footer
and the data are read.

Accelerate Delta when reading partition columns only

Regenerate expected test plans with one-click

Traverse into JsonObject members in AST

Before this change, JsonObject members were not visited
in AstVisitor. As a result, aggregations or parameters
inside the  members were not supported.

Traverse into JsonArray elements in AST

Before this change, JsonArray elements were not visited
in AstVisitor. As a result, aggregations or parameters
inside the elements were not supported.

Document avro.schema.literal property use for interpreting table data

Update Oracle JDBC driver version to 21.9.0.0

Document predicate pushdown support for string-type columns in SQL Server

Enable oracle.remarks-reporting.enabled in connector test

Remove unnecessary wrapping of IOException in TransactionLogTail

Translate `The specified key does not exist` to FileNotFoundException

Relax test assertion

It is possible for more than one task to fail due to injected failure.

Remove obsolete assertion

Lowercase bucketing and sort column names

In the metastore, the bucketing and sorting column names can differ
in case from their corresponding table column names.
This change makes certain that, even though a table can be
delivered by the metastore with such inconsistencies, Trino lowercases
the bucketing and sort column names to ensure they correspond to the
data column names.

Add test for RenameColumnTask

Migrate assertStatement in TestSqlParser.testRenameColumn

Allow configuring a custom DNS resolver for the JDBC driver

Reorganize Hudi connector documentation

Migrate some assertExpression in TestSqlParser

Look for a non-Trino protocol without using X-User-*

Update ASM to 9.5

Reorganize Iceberg connector documentation

Reorganize Delta Lake connector documentation

Fix handling of Hive ACID tables with hidden directories

Test CREATE TABLE AS SELECT in Ignite type mapping

Additionally, check time zones in setUp method.

Override equals and hashCode in Delta Identifier

Change Object to Row in testCheckConstraintCompatibility

Support arithmetic binary in Delta check constraints

Introduce EmptyTableFunctionHandle as default handle

Before this change, if a table function did not pass
a ConnectorTableFunctionHandle in the TableFunctionAnalysis,
the default handle was used, which was an anonymous
implementation of ConnectorTableFunctionHandle.

It did not work with table functions executed by an operator,
due to lack of serialization.

This change introduces EmptyTableFunctionHandle and sets
it as the default.

Support returning anonymous columns by table functions

Per SQL standard, all columns must be named. In Trino,
we support unnamed columns.
This change adjusts table functions so that they can return
anonymous columns.
It is achieved by changing the Descriptor structure so that
the field name is optional. This optionality can only be used
for the returned type of table functions. Descriptor arguments
passed by the user, as well as default values for descriptor
arguments, have mandatory field names.

Add table function `exclude_columns`

Bump spotbugs-annotations version

Airbase already has `4.7.3`

Remove ValidateLimitWithPresortedInput

It's not powerful enough to validate properties of plans
that get modified by predicate pushdown after AddExchanges
runs, resulting in false positives such as trinodb#16768

Use OrcReader#MAX_BATCH_SIZE = 8 * 1024

The previous value of 8196 was bigger than PageProcessor#MAX_BATCH_SIZE,
causing PageProcessor to create small, 4-position pages every other page.
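
For context (assuming PageProcessor#MAX_BATCH_SIZE is 4096): a batch of 8196 rows splits into pages of 4096, 4096, and 4 positions, whereas 8 * 1024 = 8192 divides evenly into full 4096-position pages.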

Bring DistinguishedNameParser from okhttp3

Inline OkHostnameVerifier and Util.verifyAsIpAddress

They were removed in OkHttp 4.x, and we still rely on the legacy SSL hostname verification.

Update okhttp to 4.10.0

Fix testTableWithNonNullableColumns to update NOT NULL column in Delta

Add support for creating tables with a comment for more JDBC-based connectors (trinodb#16135)

Allow creating PostgreSQL tables with a comment

Also allow setting a comment for PostgreSQL tables

Support sum(distinct) for JDBC connectors

Add Trino 412 release notes

[maven-release-plugin] prepare release 412

[maven-release-plugin] prepare for next development iteration

Add doc for ignite join pushdown

Add missing config properties to Hive docs

Co-Authored-By: Marius Grama <findinpath@gmail.com>

Fix layout in Iceberg documentation

Add docs for property to skip glue archive

Fix typo

Support nested timestamp with time zone in Delta Lake

Add test for duplicated partition statistics on Thrift metastore

Support table comments for the Oracle connector

Allow creating Oracle tables with a comment

Also allow setting a table comment for Oracle tables

Support MERGE for Phoenix connector

Remove unused class FallbackToFullNodePartitionMemoryEstimator.

Remove unnecessary dependency management in trino-pinot

The `protobuf-java` one was overriding a corresponding declaration in
the parent POM, and was effectively downgrading it. The other two were
not used at all.

Bump Protobuf version

Add Snowflake JDBC Connector

Update trino snapshot version to 372

Update format of the Snowflake doc

Update trino jdbc library import

Fix date formatter from yyyy to uuuu

Fix date test case

Remove defunct property allow-drop-table

Update trino version to 374

Update snowflake config to adapt 374

Update the range of the test of Date type

Update to version 375

Fix snowflake after updating to 375

Update to 381

Fix mvn pom import

Format snowflake.rst

Reordered data tests in type mapping

Update function code

Add product test

Rename product test tablename

Add Env, Suite and properties of Snowflake for production test

Add trinoCreateAndInsert()

Refactor snowflake from single node to multi node

Pass product tests

Removed snowflake.properties in trino server dev

Resolved issues 19 05 2022 and fixed tests

Remove Types.VARBINARY

Add private static SliceWriteFunction charWriteFunction

Update test case

Update plugin/trino-snowflake/src/main/java/io/trino/plugin/snowflake/SnowflakeClient.java

Co-authored-by: Yuya Ebihara <ebyhry@gmail.com>

Update docs/src/main/sphinx/connector/snowflake.rst

Co-authored-by: Yuya Ebihara <ebyhry@gmail.com>

Update plugin/trino-snowflake/pom.xml

Co-authored-by: Yuya Ebihara <ebyhry@gmail.com>

Update plugin/trino-snowflake/src/main/java/io/trino/plugin/snowflake/SnowflakeClient.java

Co-authored-by: Yuya Ebihara <ebyhry@gmail.com>

Resolved open review issues

Disabled JDBC_TREAT_DECIMAL_AS_INT and fixed test case

Updated properties file

Updated properties files

Renamed properties in Testing

Revert "Renamed properties in Testing"

This reverts commit 82f9eb3f3811e8d90a482f5359e98e7c729afa17.

Renamed properties and fixed tests

Update the way to pass ENV values for production test

Update trino version to 388

Update Trino version 391

Update trino version to 394

Update to 395

Update to 411

Update and fix errors

Build successfully with 411

Adding Bloomberg Snowflake connector.

Fully tested with Trino 406. Untested with 410.

Adding varchar type to the Snowflake type mapping.

Adding --add-opens to enable support for Apache Arrow via shared memory buffers.

Fixing some tests.

Fix TestSnowflakeConnectorTest.

TODO: testDataMappingSmokeTest: time and timestamp
testSetColumnTypes: time and timestamp

Fix type mapper

Fix testconnector

Update version to 413-SNAPSHOT

Added support for HTTP_PROXY testing.

Connector doesn't support setColumnType. This causes lots of problems with the Snowflake server.

Disabled the testSetColumnTypes test.

Fixed and skipped error tests
yuuteng added a commit to yuuteng/trino that referenced this issue Jan 11, 2024
Co-authored-by: Martin Traverso <mtraverso@gmail.com>
ebyhr pushed a commit that referenced this issue Jan 14, 2024
Co-authored-by: Martin Traverso <mtraverso@gmail.com>
ebyhr pushed a commit that referenced this issue Feb 15, 2024
Various style fixes and cleanup (#15) (#17)

Co-authored-by: Martin Traverso <mtraverso@gmail.com>
Various style fixes and cleanup (#15)

Update the github CI (#12)

* Add Snowflake JDBC Connector

* Add snowflake in the ci
Add Snowflake JDBC Connector (#11)

Had to redo the connector because all the rebases caused havoc
yuuteng added a commit to yuuteng/trino that referenced this issue Feb 22, 2024
Co-authored-by: Martin Traverso <mtraverso@gmail.com>
ebyhr pushed a commit that referenced this issue Mar 1, 2024
ebyhr pushed a commit that referenced this issue Mar 4, 2024
*****

Update CI and delete unused lines (#20)

Approved
Update according to reviews 11/01/2024

Various style fixes and cleanup (#15) (#17)

Co-authored-by: Martin Traverso <mtraverso@gmail.com>
Various style fixes and cleanup (#15)

Update the github CI (#12)

* Add Snowflake JDBC Connector

* Add snowflake in the ci
Add Snowflake JDBC Connector (#11)

Had to redo the connector because all the rebases caused havoc

(cherry picked from commit dafaa5d)
yuuteng added a commit to yuuteng/trino that referenced this issue Mar 5, 2024
ebyhr pushed a commit that referenced this issue Mar 6, 2024
@prrvchr
Member

prrvchr commented May 31, 2024

The workaround is to only create tables in lower case, but do you have anything planned on your side to resolve this problem?

@shantanu-dahiya

Does the contributing team have plans to pick this up anytime soon?
