
Unable to add columns to Table resource #69

Closed
mattfysh opened this issue Feb 25, 2023 · 3 comments
Labels
awaiting-feedback Blocked on input from the author
kind/bug Some behavior is incorrect or out of spec

Comments

@mattfysh

What happened?

Attempting to define a table with the nodejs SDK as follows:

import * as databricks from '@pulumi/databricks'

new databricks.Table('name', {
  // ...
  columns: [{
    name: 'x',
    position: 0,
    typeName: 'string',
    typeText: 'varchar(64)',
  }],
})

This results in the following error:

cannot update table: UpdateTable Missing required field: UpdateTable Missing required field: column_0.type_name

Expected Behavior

The plugin should map typeName to type_name before sending the request to the Databricks API.
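
For illustration, the column_0.type_name path in the error suggests the API expects snake_case column keys. A minimal sketch of the assumed mapping, inferred from the error message rather than from the Databricks API docs:

// What the nodejs SDK accepts (camelCase input)
const sdkColumn = {
  name: 'x',
  position: 0,
  typeName: 'string',
  typeText: 'varchar(64)',
}

// The snake_case shape the error implies the plugin should send;
// this payload shape is an assumption inferred from column_0.type_name.
const apiColumn = {
  name: sdkColumn.name,
  position: sdkColumn.position,
  type_name: sdkColumn.typeName,
  type_text: sdkColumn.typeText,
}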

Steps to reproduce

Create any databricks.Table resource with the nodejs Pulumi SDK.
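
A minimal self-contained repro sketch, assuming an existing Unity Catalog catalog and schema; the catalog and schema names, table type, and data source format below are illustrative placeholders, not values from the original report:

import * as databricks from '@pulumi/databricks'

// Placeholder catalog/schema names; any existing pair should do.
new databricks.Table('repro', {
  name: 'repro_table',
  catalogName: 'main',        // assumption: an existing catalog
  schemaName: 'default',      // assumption: an existing schema
  tableType: 'MANAGED',
  dataSourceFormat: 'DELTA',
  columns: [{
    name: 'x',
    position: 0,
    typeName: 'string',
    typeText: 'varchar(64)',
  }],
})

Running pulumi up against a program like this should reproduce the Missing required field: column_0.type_name error above.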

Output of pulumi about

CLI
Version 3.55.0
Go Version go1.19.5
Go Compiler gc

Plugins
NAME VERSION
nodejs unknown

Host
OS darwin
Version 13.2.1
Arch x86_64

This project is written in nodejs: executable='/usr/local/bin/node' version='v19.7.0'

Additional context

No response

Contributing

Vote on this issue by adding a 👍 reaction.
To contribute a fix for this issue, leave a comment (and link to your pull request, if you've opened one already).

@mattfysh added the kind/bug and needs-triage labels on Feb 25, 2023
@mattfysh (Author)

Perhaps this is not possible? I am migrating from AWS Glue Catalog, which allows you to create a Table without any compute resources, since a catalog should be just metadata anyway (see the sketch after this comment).

But after playing with the Databricks console a little more, it appears that you need to use compute resources to create the table so that a new Delta log can be written to S3.

It's a shame it works this way: Spark jobs don't require any existing files at the table location when they first run, and this seems like a step away from making catalogs "metadata only", which they should be. In theory, metadata catalogs should not require any compute resources to create new schemas, tables, and views.
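
For contrast, a sketch of the Glue-side pattern described above, using hypothetical database and bucket names; with @pulumi/aws, the table is created as catalog metadata only, with no compute attached:

import * as aws from '@pulumi/aws'

// A Glue catalog table is pure metadata: no cluster or warehouse is
// provisioned. The database name and S3 location are hypothetical.
new aws.glue.CatalogTable('example', {
  databaseName: 'my_database',
  name: 'x_table',
  tableType: 'EXTERNAL_TABLE',
  storageDescriptor: {
    location: 's3://my-bucket/tables/x_table/',
    columns: [{ name: 'x', type: 'varchar(64)' }],
  },
})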

@iwahbe (Member)

iwahbe commented Feb 27, 2023

Hey @mattfysh, thanks for opening an issue! I'm not sure I understand how your comment relates to this issue. Can you clarify what might be impossible?

@iwahbe added the awaiting-feedback label and removed the needs-triage label on Feb 27, 2023
@mjeffryes (Member)

Closing due to inactivity.

@mjeffryes closed this as not planned on Aug 28, 2023