- Fix column-level `persist_docs` on Delta tables, add tests (#180)
- Allow user to specify `use_ssl` (#169); profile sketch below
- Allow setting table `OPTIONS` using `config` (#171); model config sketch below
- Add support for column-level `persist_docs` on Delta tables (#84, #170); model config sketch below
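Hedged sketches of the configs above, assuming a dbt-spark `thrift` profile and a project named `my_project`; hostnames and schema names are placeholders, and the model-level `options` key is an assumption about how #171 exposes table `OPTIONS`.

```yaml
# profiles.yml (sketch): thrift connection with SSL enabled (#169).
spark:
  target: dev
  outputs:
    dev:
      type: spark
      method: thrift
      host: spark.example.com   # placeholder host
      port: 10001
      schema: analytics         # placeholder schema
      use_ssl: true             # new flag from #169
```

The table-level settings would then live in `dbt_project.yml`:

```yaml
# dbt_project.yml (sketch): table OPTIONS and column-level persist_docs.
models:
  my_project:                   # placeholder project name
    +file_format: delta
    +options:                   # assumed config key for the OPTIONS clause (#171)
      compression: snappy       # illustrative table option
    +persist_docs:
      relation: true
      columns: true             # column-level comments on Delta tables (#84, #170)
```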
- Cast `table_owner` to string to avoid errors generating docs (#158, #159)
- Explicitly cast column types when inserting seeds (#139, #166)
- Parse information returned by `list_relations_without_caching` macro to speed up catalog generation (#93, #160)
- More flexible host passing, `https://` can be omitted (#153)
- @friendofasquid (#159)
- @franloza (#160)
- @Fokko (#165)
- @rahulgoyal2987 (#169)
- @JCZuurmond (#171)
- @cristianoperez (#170)
- Update serialization calls to use new API in dbt-core `0.19.1b2` (#150)
- Incremental models have `incremental_strategy: append` by default. This strategy adds new records without updating or overwriting existing records. For that, use `merge` or `insert_overwrite` instead, depending on the file format, connection method, and attributes of your underlying data. dbt will try to raise a helpful error if you configure a strategy that is not supported for a given file format or connection. (#140, #141) A config sketch follows below.
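A minimal sketch of opting in to `merge` instead of the `append` default, assuming a Delta file format; the project name, model path, and key column are placeholders.

```yaml
# dbt_project.yml (sketch): explicit merge strategy for incremental models.
# merge requires a transactional file format such as delta.
models:
  my_project:                   # placeholder project name
    events:                     # placeholder model subfolder
      +materialized: incremental
      +incremental_strategy: merge
      +file_format: delta
      +unique_key: event_id     # illustrative merge key
```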
- Capture hard-deleted records in snapshot merge, when the `invalidate_hard_deletes` config is set (#109, #126); snapshot config sketch below
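A hedged sketch of enabling the new flag on a timestamp-strategy snapshot, configured from `dbt_project.yml`; the schema, key, and timestamp column names are placeholders.

```yaml
# dbt_project.yml (sketch): snapshot that also closes out records
# hard-deleted from the source (#109, #126).
snapshots:
  my_project:                   # placeholder project name
    +target_schema: snapshots   # placeholder schema
    +strategy: timestamp
    +unique_key: id             # placeholder key column
    +updated_at: updated_at     # placeholder timestamp column
    +invalidate_hard_deletes: true
```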
- Users of the `http` and `thrift` connection methods need to install extra requirements: `pip install dbt-spark[PyHive]` (#109, #126)
- Enable `CREATE OR REPLACE` support when using Delta. Instead of dropping and recreating the table, dbt keeps the existing table and adds a new version, as supported by Delta. This ensures the table stays available while the pipeline runs, and lets you track its history; Delta config sketch below.
- Add changelog, issue templates (#119, #120)
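A hedged sketch of a config that would exercise this path, assuming the atomic replace applies to table-materialized Delta models; the project and folder names are placeholders.

```yaml
# dbt_project.yml (sketch): Delta-backed table models, which dbt can now
# rebuild in place via CREATE OR REPLACE instead of drop-and-recreate.
models:
  my_project:                   # placeholder project name
    marts:                      # placeholder model subfolder
      +materialized: table
      +file_format: delta
```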
- Handle case of 0 retries better for HTTP Spark Connections (#132)
- @danielvdende (#132)
- @Fokko (#125)
- Allow users to specify `auth` and `kerberos_service_name` (#107); profile sketch below
- Add support for ODBC driver connections to Databricks clusters and endpoints (#116)
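Hedged profile sketches for both additions; the hostnames, driver path, and cluster ID are placeholders, and the ODBC keys assume the Simba Spark driver setup that #116 targets.

```yaml
# profiles.yml (sketch): one Kerberized thrift target, one Databricks ODBC target.
spark:
  target: thrift_kerberos
  outputs:
    thrift_kerberos:
      type: spark
      method: thrift
      host: spark.example.com   # placeholder host
      port: 10000
      schema: analytics         # placeholder schema
      auth: KERBEROS            # new in #107
      kerberos_service_name: hive   # new in #107
    databricks_odbc:
      type: spark
      method: odbc              # new in #116
      driver: /opt/simba/spark/lib/64/libsparkodbc_sb64.so   # placeholder driver path
      host: dbc-abc123.cloud.databricks.com                  # placeholder workspace host
      port: 443
      token: "{{ env_var('DATABRICKS_TOKEN') }}"             # placeholder token source
      cluster: 1234-567890-abcde123   # placeholder cluster id; SQL endpoints use `endpoint:`
      schema: analytics         # placeholder schema
```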