Add merge_update_columns config #184

Merged · 1 commit · Jun 23, 2021

Changes from all commits
4 changes: 4 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,9 @@
## dbt-spark 0.20.0 (Release TBD)

### Features

- Add support for `merge_update_columns` config in `merge`-strategy incremental models ([#183](https://github.com/fishtown-analytics/dbt-spark/pull/183), [#184](https://github.com/fishtown-analytics/dbt-spark/pull/184))

### Fixes

- Fix column-level `persist_docs` on Delta tables, add tests ([#180](https://github.com/fishtown-analytics/dbt-spark/pull/180))
1 change: 1 addition & 0 deletions dbt/adapters/spark/impl.py
@@ -37,6 +37,7 @@ class SparkConfig(AdapterConfig):
clustered_by: Optional[Union[List[str], str]] = None
buckets: Optional[int] = None
options: Optional[Dict[str, str]] = None
merge_update_columns: Optional[str] = None


class SparkAdapter(SQLAdapter):
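Registering `merge_update_columns` on `SparkConfig` is what lets a model pass the key through its `config()` block. A minimal sketch of the user-facing side (the `select` body and `ref()` target are hypothetical; the functional-test model further down does the same thing):

```sql
{{ config(
    materialized = 'incremental',
    incremental_strategy = 'merge',
    file_format = 'delta',
    unique_key = 'id',
    merge_update_columns = ['msg'],
) }}

-- with merge_update_columns set, matched rows only have `msg` rewritten;
-- every other column keeps its current value in the target table
select id, msg, color from {{ ref('staging_events') }}
```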
@@ -20,7 +20,8 @@


{% macro spark__get_merge_sql(target, source, unique_key, dest_columns, predicates=none) %}
{# ignore dest_columns - we will just use `*` #}
{# skip dest_columns, use merge_update_columns config if provided, otherwise use "*" #}
{%- set update_columns = config.get("merge_update_columns") -%}

{% set merge_condition %}
{% if unique_key %}
@@ -32,8 +33,16 @@

merge into {{ target }} as DBT_INTERNAL_DEST
using {{ source.include(schema=false) }} as DBT_INTERNAL_SOURCE

{{ merge_condition }}
when matched then update set *

when matched then update set
{% if update_columns -%}{%- for column_name in update_columns %}
{{ column_name }} = DBT_INTERNAL_SOURCE.{{ column_name }}
{%- if not loop.last %}, {%- endif %}
{%- endfor %}
{%- else %} * {% endif %}

when not matched then insert *
{% endmacro %}
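To make the template concrete: with `unique_key = 'id'` and `merge_update_columns = ['msg']`, the macro renders roughly the following (relation names are illustrative; dbt fills in the real target table and temp view at runtime):

```sql
merge into analytics.my_model as DBT_INTERNAL_DEST
using my_model__dbt_tmp as DBT_INTERNAL_SOURCE
on DBT_INTERNAL_SOURCE.id = DBT_INTERNAL_DEST.id

when matched then update set
    msg = DBT_INTERNAL_SOURCE.msg

when not matched then insert *
```

With no `merge_update_columns` configured, the `{% else %}` branch keeps the old behavior, `when matched then update set *`.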

@@ -0,0 +1,4 @@
id,msg,color
1,hello,blue
2,yo,red
3,anyway,purple
@@ -0,0 +1,22 @@
{{ config(
materialized = 'incremental',
incremental_strategy = 'merge',
file_format = 'delta',
unique_key = 'id',
merge_update_columns = ['msg'],
) }}

{% if not is_incremental() %}

select cast(1 as bigint) as id, 'hello' as msg, 'blue' as color
union all
select cast(2 as bigint) as id, 'goodbye' as msg, 'red' as color

{% else %}

-- msg will be updated, color will be ignored
select cast(2 as bigint) as id, 'yo' as msg, 'green' as color
union all
select cast(3 as bigint) as id, 'anyway' as msg, 'purple' as color

{% endif %}
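The first run seeds ids 1 and 2; on the incremental run, id 2 matches and only `msg` is merged, while id 3 is inserted whole. A sanity query from inside the project (a sketch) should line up with the `expected_partial_upsert` seed above:

```sql
select id, msg, color
from {{ ref('merge_update_columns') }}
order by id
-- 1, hello, blue     (untouched by the second run)
-- 2, yo, red         (msg updated; the source row's color 'green' is ignored)
-- 3, anyway, purple  (new row, inserted in full)
```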
@@ -71,6 +71,7 @@ def run_and_test(self):
self.assertTablesEqual("append_delta", "expected_append")
self.assertTablesEqual("merge_no_key", "expected_append")
self.assertTablesEqual("merge_unique_key", "expected_upsert")
self.assertTablesEqual("merge_update_columns", "expected_partial_upsert")

@use_profile("databricks_cluster")
def test_delta_strategies_databricks_cluster(self):