Add SQL Support for ADBC Drivers #53869

Merged: 78 commits, Nov 22, 2023
Changes from 29 commits
4f2b760
close to complete implementation
WillAyd Jun 26, 2023
a4ebbb5
working implementation for postgres
WillAyd Jun 26, 2023
b2cd149
sqlite implementation
WillAyd Jun 26, 2023
512bd00
Added ADBC to CI
WillAyd Jun 26, 2023
f49115c
Doc updates
WillAyd Jun 26, 2023
a8512b5
Whatsnew update
WillAyd Jun 26, 2023
c1c68ef
Better optional dependency import
WillAyd Jun 26, 2023
3d7fb15
min versions fix
WillAyd Jun 26, 2023
1093bc8
import updates
WillAyd Jun 27, 2023
926e567
docstring fix
WillAyd Jun 27, 2023
093dd86
Merge remote-tracking branch 'upstream/main' into adbc-integration
WillAyd Jun 27, 2023
fcc21a8
doc fixup
WillAyd Jun 27, 2023
88642f7
Merge remote-tracking branch 'upstream/main' into adbc-integration
WillAyd Jul 14, 2023
156096d
Updates for 0.6.0
WillAyd Jul 14, 2023
dd26edb
fix sqlite name escaping
WillAyd Jul 20, 2023
4d8a233
more cleanups
WillAyd Jul 20, 2023
5238e69
more 0.6.0 updates
WillAyd Aug 2, 2023
51c6c98
typo
WillAyd Aug 2, 2023
39b462b
Merge remote-tracking branch 'upstream/main' into adbc-integration
WillAyd Aug 28, 2023
428c4f7
remove warning
WillAyd Aug 28, 2023
84d95bb
test_sql expectations
WillAyd Aug 28, 2023
a4d5b31
revert whatsnew issues
WillAyd Aug 28, 2023
21b35f6
pip deps
WillAyd Aug 28, 2023
e709d52
Suppress pyarrow warning
WillAyd Aug 28, 2023
6077fa9
Updated docs
WillAyd Aug 28, 2023
5bba566
mypy fixes
WillAyd Aug 28, 2023
236e12b
Remove stacklevel check from test
WillAyd Aug 29, 2023
b35374c
typo fix
WillAyd Aug 29, 2023
8d814e1
compat
WillAyd Aug 30, 2023
cfac2c7
Joris feedback
WillAyd Aug 31, 2023
47caaf1
Merge remote-tracking branch 'upstream/main' into adbc-integration
WillAyd Aug 31, 2023
a22e5d1
Better test coverage with ADBC
WillAyd Aug 31, 2023
c51b7f4
cleanups
WillAyd Aug 31, 2023
7f5e6ac
feedback
WillAyd Sep 1, 2023
9ee6255
Merge remote-tracking branch 'upstream/main' into adbc-integration
WillAyd Sep 19, 2023
a8b645f
checkpoint
WillAyd Sep 19, 2023
902df4f
more checkpoint
WillAyd Sep 19, 2023
90ca2cb
more skips
WillAyd Sep 20, 2023
d753c3c
updates
WillAyd Sep 20, 2023
d469e24
implement more
WillAyd Sep 21, 2023
2bc11a1
bump to 0.7.0
WillAyd Sep 24, 2023
f205f90
fixups
WillAyd Oct 2, 2023
2755100
Merge remote-tracking branch 'upstream/main' into adbc-integration
WillAyd Oct 2, 2023
3577a59
cleanups
WillAyd Oct 2, 2023
c5bf7f8
sqlite fixups
WillAyd Oct 2, 2023
98d22ce
pyarrow compat
WillAyd Oct 2, 2023
4f72010
revert to using pip instead of conda
WillAyd Oct 2, 2023
7223e63
documentation cleanups
WillAyd Oct 2, 2023
c2cd90a
compat fixups
WillAyd Oct 3, 2023
de65ec0
Fix stacklevel
WillAyd Oct 3, 2023
7645727
remove unneeded code
WillAyd Oct 3, 2023
3dc914c
Merge remote-tracking branch 'upstream/main' into adbc-integration
WillAyd Oct 16, 2023
6dbaae5
commit after drop in fixtures
WillAyd Oct 16, 2023
3bf550c
close cursor
WillAyd Oct 17, 2023
492301f
Merge branch 'main' into adbc-integration
WillAyd Oct 23, 2023
fc463a4
Merge remote-tracking branch 'upstream/main' into adbc-integration
WillAyd Oct 23, 2023
cc72ecd
Merge branch 'main' into adbc-integration
WillAyd Oct 25, 2023
f5fd529
Merge branch 'main' into adbc-integration
WillAyd Oct 30, 2023
1207bc4
fix table dropping
WillAyd Oct 30, 2023
e8d93c7
Merge branch 'main' into adbc-integration
WillAyd Nov 10, 2023
3eed897
Bumped ADBC min to 0.8.0
WillAyd Nov 10, 2023
adef2f2
Merge remote-tracking branch 'upstream/main' into adbc-integration
WillAyd Nov 10, 2023
67101fd
documentation
WillAyd Nov 10, 2023
ea5dcb9
doc updates
WillAyd Nov 10, 2023
fb38411
more fixups
WillAyd Nov 10, 2023
a0bed67
documentation fixups
WillAyd Nov 11, 2023
150e267
Merge branch 'main' into adbc-integration
WillAyd Nov 13, 2023
1e77f2b
fixes
WillAyd Nov 13, 2023
97ed24f
more documentation
WillAyd Nov 13, 2023
7dc07da
doc spacing
WillAyd Nov 13, 2023
52ee8d3
doc target fix
WillAyd Nov 14, 2023
1de8488
pyarrow warning compat
WillAyd Nov 14, 2023
21edaea
Merge branch 'main' into adbc-integration
WillAyd Nov 17, 2023
2d077e9
feedback
WillAyd Nov 17, 2023
accbd49
updated io documentation
WillAyd Nov 17, 2023
64b63bd
Merge branch 'main' into adbc-integration
WillAyd Nov 17, 2023
f84f63a
install updates
WillAyd Nov 18, 2023
391d045
Merge remote-tracking branch 'upstream/main' into adbc-integration
WillAyd Nov 21, 2023
2 changes: 2 additions & 0 deletions ci/deps/actions-310.yaml
@@ -57,5 +57,7 @@ dependencies:
- zstandard>=0.17.0

- pip:
- adbc_driver_postgresql>=0.6.0
- adbc_driver_sqlite>=0.6.0
- pyqt5>=5.15.6
- tzdata>=2022.1
2 changes: 2 additions & 0 deletions ci/deps/actions-311-downstream_compat.yaml
@@ -71,6 +71,8 @@ dependencies:
- pyyaml
- py
- pip:
- adbc_driver_postgresql>=0.6.0
- adbc_driver_sqlite>=0.6.0
- dataframe-api-compat>=0.1.7
- pyqt5>=5.15.6
- tzdata>=2022.1
2 changes: 2 additions & 0 deletions ci/deps/actions-311.yaml
@@ -57,5 +57,7 @@ dependencies:
- zstandard>=0.17.0

- pip:
- adbc_driver_postgresql>=0.6.0
- adbc_driver_sqlite>=0.6.0
- pyqt5>=5.15.6
- tzdata>=2022.1
2 changes: 2 additions & 0 deletions ci/deps/actions-39-minimum_versions.yaml
@@ -59,6 +59,8 @@ dependencies:
- zstandard=0.17.0

- pip:
- adbc_driver_postgresql==0.6.0
- adbc_driver_sqlite==0.6.0
- dataframe-api-compat==0.1.7
- pyqt5==5.15.6
- tzdata==2022.1
2 changes: 2 additions & 0 deletions ci/deps/actions-39.yaml
@@ -57,5 +57,7 @@ dependencies:
- zstandard>=0.17.0

- pip:
- adbc_driver_postgresql>=0.6.0
- adbc_driver_sqlite>=0.6.0
- pyqt5>=5.15.6
- tzdata>=2022.1
4 changes: 4 additions & 0 deletions ci/deps/circle-310-arm64.yaml
@@ -56,3 +56,7 @@ dependencies:
- xlrd>=2.0.1
- xlsxwriter>=3.0.3
- zstandard>=0.17.0

- pip:
- adbc_driver_postgresql>=0.6.0
- adbc_driver_sqlite>=0.6.0
2 changes: 2 additions & 0 deletions doc/source/getting_started/install.rst
@@ -344,6 +344,8 @@ SQLAlchemy 1.4.36 postgresql, SQL support for dat
sql-other
psycopg2 2.9.3 postgresql PostgreSQL engine for sqlalchemy
pymysql 1.0.2 mysql MySQL engine for sqlalchemy
adbc-driver-postgresql 0.6.0 ADBC Driver for PostgreSQL
adbc-driver-sqlite 0.6.0 ADBC Driver for SQLite
========================= ================== =============== =============================================================

Other data sources
2 changes: 2 additions & 0 deletions environment.yml
@@ -113,6 +113,8 @@ dependencies:
- pygments # Code highlighting

- pip:
- adbc_driver_postgresql>=0.6.0
- adbc_driver_sqlite>=0.6.0
Member:
The conda packages should be available nowadays, so I think you can move it to the normal packages list

Member Author:
Not yet for Windows, so I think we can leave it to a follow-up after apache/arrow-adbc#1149

- dataframe-api-compat>=0.1.7
- sphinx-toggleprompt # conda-forge version has stricter pins on jinja2
- typing_extensions; python_version<"3.11"
2 changes: 2 additions & 0 deletions pandas/compat/_optional.py
@@ -15,6 +15,8 @@
# Update install.rst & setup.cfg when updating versions!

VERSIONS = {
"adbc_driver_postgresql": "0.6.0",
"adbc_driver_sqlite": "0.6.0",
"bs4": "4.11.1",
"blosc": "1.21.0",
"bottleneck": "1.3.4",
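
For context, a minimal sketch (not part of the diff) of how pandas consults this VERSIONS table: import_optional_dependency looks up the recorded minimum version for a module and raises, warns, or returns None depending on the errors keyword. The errors="ignore" probe below is the same call pandasSQL_builder makes later in this PR.

from pandas.compat._optional import import_optional_dependency

# Returns the ADBC dbapi module if it is installed; with errors="ignore"
# a missing install yields None instead of raising ImportError (an
# outdated version is returned as-is, left for the caller to validate).
adbc = import_optional_dependency("adbc_driver_manager.dbapi", errors="ignore")
if adbc is None:
    print("ADBC driver manager unavailable; SQLAlchemy paths still work")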
269 changes: 268 additions & 1 deletion pandas/io/sql.py
@@ -45,7 +45,10 @@
is_dict_like,
is_list_like,
)
from pandas.core.dtypes.dtypes import DatetimeTZDtype
from pandas.core.dtypes.dtypes import (
ArrowDtype,
DatetimeTZDtype,
)
from pandas.core.dtypes.missing import isna

from pandas import get_option
@@ -642,6 +645,17 @@ def read_sql(
int_column date_column
0 0 2012-11-10
1 1 2010-11-12

.. versionadded:: 2.1.0

pandas now supports reading via ADBC drivers

>>> from adbc_driver_postgresql import dbapi
>>> with dbapi.connect('postgres:///db_name') as conn: # doctest:+SKIP
... pd.read_sql('SELECT int_column FROM test_data', conn)
int_column
0 0
1 1
"""

check_dtype_backend(dtype_backend)
@@ -850,6 +864,10 @@ def pandasSQL_builder(
if sqlalchemy is not None and isinstance(con, (str, sqlalchemy.engine.Connectable)):
return SQLDatabase(con, schema, need_transaction)

adbc = import_optional_dependency("adbc_driver_manager.dbapi", errors="ignore")
if adbc and isinstance(con, adbc.Connection):
return ADBCDatabase(con)

warnings.warn(
"pandas only supports SQLAlchemy connectable (engine/connection) or "
"database string URI or sqlite3 DBAPI2 connection. Other DBAPI2 "
@@ -2024,6 +2042,255 @@


# ---- SQL without SQLAlchemy ---


class ADBCDatabase(PandasSQL):
"""
This class enables conversion between DataFrame and SQL databases
using ADBC to handle database abstraction.

Parameters
----------
con : adbc_driver_manager.dbapi.Connection
"""

def __init__(self, con) -> None:
self.con = con

def execute(self, sql: str | Select | TextClause, params=None):
with self.con.cursor() as cur:
return cur.execute(sql)

def read_table(
self,
table_name: str,
index_col: str | list[str] | None = None,
coerce_float: bool = True,
parse_dates=None,
columns=None,
schema: str | None = None,
chunksize: int | None = None,
dtype_backend: DtypeBackend | Literal["numpy"] = "numpy",
) -> DataFrame | Iterator[DataFrame]:
"""
Read SQL database table into a DataFrame. Only keyword arguments used
are table_name and schema. The rest are silently discarded.

Member: They raise an error now, I think


Parameters
----------
table_name : str
Name of SQL table in database.
schema : string, default None
Name of SQL schema in database to read from

Returns
-------
DataFrame

See Also
--------
pandas.read_sql_table
SQLDatabase.read_query

"""
if index_col:
raise NotImplementedError("'index_col' is not implemented for ADBC drivers")
if coerce_float is not True:
raise NotImplementedError(
"'coerce_float' is not implemented for ADBC drivers"
)
if parse_dates:
raise NotImplementedError(
"'parse_dates' is not implemented for ADBC drivers"
)
if columns:
raise NotImplementedError("'columns' is not implemented for ADBC drivers")
Member:
Doesn't necessarily need to happen for this initial PR, but since we are generating the SQL query string below, it should be relatively straightforward to support selecting a subset of columns, instead of selecting *?

Member Author:
Yea that's a good point. I can't remember why I didn't do this in the first place. Should be straightforward to add here or in a follow up though, especially since ADBC should handle sanitizing
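
A rough sketch of that follow-up (hypothetical, not in this PR; identifier quoting is left to the driver or a proper escaping helper), folding a column list into the generated statement instead of raising:

# Hypothetical follow-up: build the SELECT list from `columns`
# rather than rejecting the keyword outright.
select_list = ", ".join(columns) if columns else "*"
stmt = f"SELECT {select_list} FROM {table_name}"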

if chunksize:
raise NotImplementedError("'chunksize' is not implemented for ADBC drivers")
Comment on lines +2189 to +2190
Member:
Again not for this PR, but something to note for future improvements: I think it should be possible to support chunksize? Because we can get an RecordBatchReader from ADBC, and then read from that iterator and convert to pandas in chunks?

Member Author:
It's a good question. I'm not sure I see anything in the ADBC specification around batch / chunk handling. Might be overlooking the general approach to that. @lidavidm always knows best

Member:
Not necessarily in ADBC itself, but you should be able to get a RecordBatchReader instead of a materialized Table. And then pyarrow provides APIs to consume that reader chunk by chunk. It might not exactly support the user-specified chunksize, but it does give you a similar result: a generator of pandas DataFrames.

(a RecordBatchReader allows you to read_next_batch() at a time, this returns a RecordBatch, and then this could be further split if necessary based on chunksize, and then those chunks can be converted to pandas.DataFrame. The main issue is that if the native batch size you get from the database is much larger than the specified chunksize, you get a larger memory usage than expected)

Contributor:
Oops, I missed this.

Drivers have various parameters to request a batch size but perhaps we should standardize on one.
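
A sketch of the chunked-read idea from this thread (hypothetical, not in this PR; it assumes the DBAPI cursor exposes fetch_record_batch() returning a pyarrow.RecordBatchReader, as the adbc_driver_manager dbapi layer provides):

def read_in_batches(con, stmt, mapping=None):
    # Hypothetical: stream record batches from the driver and convert
    # each one to a DataFrame as it arrives. The batch size is whatever
    # the driver produces, not a user-specified chunksize.
    with con.cursor() as cur:
        cur.execute(stmt)
        reader = cur.fetch_record_batch()  # pyarrow.RecordBatchReader
        for batch in reader:
            yield batch.to_pandas(types_mapper=mapping)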

if schema:
stmt = f"SELECT * FROM {schema}.{table_name}"
else:
stmt = f"SELECT * FROM {table_name}"

mapping: type[ArrowDtype] | None | Callable
if dtype_backend == "pyarrow":
mapping = ArrowDtype
elif dtype_backend == "numpy_nullable":
from pandas.io._util import _arrow_dtype_mapping

mapping = _arrow_dtype_mapping().get
else:
Member:
Suggested change
else:
elif using_pyarrow_string_dtype():
from pandas.io._util import arrow_string_types_mapper
mapping = arrow_string_types_mapper()
else:

along with: from pandas._config import using_pyarrow_string_dtype

For another PR, but we should probably factor out the above in a helper to give you the mapping based on the dtype_backend keyword + options.

mapping = None

with self.con.cursor() as cur:
return cur(stmt).fetch_arrow_table().to_pandas(types_mapper=mapping)
Member:
Suggested change
return cur(stmt).fetch_arrow_table().to_pandas(types_mapper=mapping)
return cur.execute(stmt).fetch_arrow_table().to_pandas(types_mapper=mapping)

?

(I mentioned it before, but I don't fully understand how this PR is working, because if I try that locally with an adbc dbapi connection, I get "TypeError: 'Cursor' object is not callable")

Member:
Ah, I think because you are basically only testing to_sql... All the read tests seem to rely on additional fixtures that have a connection + iris table set up, and those are not added for adbc.

With a quick hacky patch I can get the tests to fail with the "'Cursor' object is not callable" error:

--- a/pandas/tests/io/test_sql.py
+++ b/pandas/tests/io/test_sql.py
@@ -146,7 +146,9 @@ def create_and_load_iris_sqlite3(conn: sqlite3.Connection, iris_file: Path):
         reader = csv.reader(csvfile)
         next(reader)
         stmt = "INSERT INTO iris VALUES(?, ?, ?, ?, ?)"
-        cur.executemany(stmt, reader)
+        cur.executemany(stmt, list(reader))
+    conn.commit()
+    cur.close()
 
 
 def create_and_load_iris(conn, iris_file: Path, dialect: str):
@@ -532,6 +534,23 @@ def sqlite_iris_conn(sqlite_iris_engine):
         yield conn
 
 
+@pytest.fixture
+def sqlite_iris_adbc_conn(iris_path):
+    if pa_version_under8p0:
+        pytest.skip("ADBC requires pyarrow >= 8.0.0")
+    pytest.importorskip("adbc_driver_sqlite")
+    from adbc_driver_sqlite import dbapi
+
+    with tm.ensure_clean() as name:
+        uri = f"file:{name}"
+        with dbapi.connect(uri) as conn:
+            create_and_load_iris_sqlite3(conn, iris_path)
+            
+            yield conn
+            with conn.cursor() as cur:
+                cur.execute("DROP TABLE IF EXISTS test_frame")
+
+
 @pytest.fixture
 def sqlite_buildin():
     with contextlib.closing(sqlite3.connect(":memory:")) as closing_conn:
@@ -566,6 +585,7 @@ sqlite_iris_connectable = [
     "sqlite_iris_engine",
     "sqlite_iris_conn",
     "sqlite_iris_str",
+    "sqlite_iris_adbc_conn",
 ]
 
 sqlalchemy_connectable = mysql_connectable + postgresql_connectable + sqlite_connectable

Member Author:
Ah cool - nice find


def read_query(
self,
sql: str,
index_col: str | list[str] | None = None,
coerce_float: bool = True,
parse_dates=None,
params=None,
chunksize: int | None = None,
dtype: DtypeArg | None = None,
dtype_backend: DtypeBackend | Literal["numpy"] = "numpy",
) -> DataFrame | Iterator[DataFrame]:
"""
Read SQL query into a DataFrame. Keyword arguments are discarded.

Parameters
----------
sql : str
SQL query to be executed.

Returns
-------
DataFrame

See Also
--------
read_sql_table : Read SQL database table into a DataFrame.
read_sql

"""
if index_col:
raise NotImplementedError("'index_col' is not implemented for ADBC drivers")
if coerce_float is not True:
raise NotImplementedError(
"'coerce_float' is not implemented for ADBC drivers"
)
if parse_dates:
raise NotImplementedError(
"'parse_dates' is not implemented for ADBC drivers"
)
if params:
raise NotImplementedError("'params' is not implemented for ADBC drivers")
if chunksize:
raise NotImplementedError("'chunksize' is not implemented for ADBC drivers")
if dtype:
raise NotImplementedError("'dtype' is not implemented for ADBC drivers")

mapping: type[ArrowDtype] | None | Callable
if dtype_backend == "pyarrow":
mapping = ArrowDtype
elif dtype_backend == "numpy_nullable":
from pandas.io._util import _arrow_dtype_mapping

mapping = _arrow_dtype_mapping().get
else:
mapping = None

with self.con.cursor() as cur:
return cur(sql).fetch_arrow_table().to_pandas(types_mapper=mapping)

read_sql = read_query
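
As a usage sketch for the mapping logic above (hedged: it assumes the SQLite ADBC driver is installed, with an in-memory database purely for illustration), dtype_backend="pyarrow" routes through types_mapper=ArrowDtype so columns come back Arrow-backed:

import pandas as pd
from adbc_driver_sqlite import dbapi

with dbapi.connect(":memory:") as conn:
    pd.DataFrame({"a": [1, 2]}).to_sql("t", conn, index=False)
    df = pd.read_sql("SELECT a FROM t", conn, dtype_backend="pyarrow")
    print(df.dtypes)  # a    int64[pyarrow]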

def to_sql(
self,
frame,
name: str,
if_exists: Literal["fail", "replace", "append"] = "fail",
index: bool = True,
index_label=None,
schema: str | None = None,
chunksize: int | None = None,
dtype: DtypeArg | None = None,
method: Literal["multi"] | Callable | None = None,
engine: str = "auto",
**engine_kwargs,
) -> int | None:
"""
Write records stored in a DataFrame to a SQL database.
Only frame, name, if_exists and schema are valid arguments.
Member:
index is now supported as well?


Parameters
----------
frame : DataFrame
name : string
Name of SQL table.
if_exists : {'fail', 'replace', 'append'}, default 'fail'
- fail: If table exists, do nothing.
- replace: If table exists, drop it, recreate it, and insert data.
- append: If table exists, insert data. Create if does not exist.
schema : string, default None
Name of SQL schema in database to write to (if database flavor
supports this). If specified, this overwrites the default
schema of the SQLDatabase object.
"""
if index_label:
raise NotImplementedError(
"'index_label' is not implemented for ADBC drivers"
)
if schema:
raise NotImplementedError("'schema' is not implemented for ADBC drivers")
if chunksize:
raise NotImplementedError("'chunksize' is not implemented for ADBC drivers")
if dtype:
raise NotImplementedError("'dtype' is not implemented for ADBC drivers")
if method:
raise NotImplementedError("'method' is not implemented for ADBC drivers")
if engine != "auto":
raise NotImplementedError("'auto' is not implemented for ADBC drivers")

if schema:
table_name = f"{schema}.{name}"
else:
table_name = name

# TODO: pandas if_exists="append" will still create the
# table if it does not exist; ADBC has append/create
# as applicable modes, so the semantics get blurred across
# the libraries
mode = "create"
if self.has_table(name, schema):
if if_exists == "fail":
raise ValueError(f"Table '{table_name}' already exists.")
elif if_exists == "replace":
with self.con.cursor() as cur:
cur.execute(f"DROP TABLE {table_name}")
elif if_exists == "append":
mode = "append"

import pyarrow as pa

tbl = pa.Table.from_pandas(frame, preserve_index=index)
with self.con.cursor() as cur:
total_inserted = cur.adbc_ingest(table_name, tbl, mode=mode)

self.con.commit()
return total_inserted
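
For illustration (a hedged sketch with a hypothetical table name, again on the in-memory SQLite driver), the if_exists keyword maps onto the two adbc_ingest modes handled above:

import pandas as pd
from adbc_driver_sqlite import dbapi

df = pd.DataFrame({"a": [1, 2]})
with dbapi.connect(":memory:") as conn:
    df.to_sql("t", conn, index=False)                      # mode="create"
    df.to_sql("t", conn, index=False, if_exists="append")  # mode="append"
    n = pd.read_sql("SELECT COUNT(*) AS n FROM t", conn)["n"].iloc[0]  # 4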

def has_table(self, name: str, schema: str | None = None) -> bool:
meta = self.con.adbc_get_objects(
db_schema_filter=schema, table_name_filter=name
).read_all()

# adbc_get_objects returns nested metadata (catalogs -> db_schemas ->
# db_schema_tables); walk the structure for an exact table-name match
for catalog_schema in meta["catalog_db_schemas"].to_pylist():
if not catalog_schema:
continue
for schema_record in catalog_schema:
if not schema_record:
continue

for table_record in schema_record["db_schema_tables"]:
if table_record["table_name"] == name:
return True

return False

def _create_sql_schema(
self,
frame: DataFrame,
table_name: str,
keys: list[str] | None = None,
dtype: DtypeArg | None = None,
schema: str | None = None,
):
raise NotImplementedError("not implemented for adbc")


# sqlite-specific sql strings and handler class
# dictionary used for readability purposes
_SQL_TYPES = {
Expand Down