Fix describe throwing errors when listing #238

Merged
merged 5 commits into main from fix_describe_table_throwing_errors
Sep 20, 2023

Conversation

william-conti
Contributor

Fixes #229
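
For readers landing here from #229: below is a minimal sketch (not the actual ucx diff) of the error-handling pattern this fix describes, i.e. catching a `DESCRIBE TABLE` failure for a single table and skipping it so the overall listing of Legacy Table ACLs keeps going. The function and parameter names are hypothetical.

```python
import logging
from collections.abc import Iterator

logger = logging.getLogger(__name__)


def list_table_details(spark, catalog: str, database: str, tables: list[str]) -> Iterator[dict]:
    """Yield DESCRIBE TABLE output per table, skipping tables that fail
    instead of aborting the whole scan."""
    for table in tables:
        full_name = f"{catalog}.{database}.{table}"
        try:
            rows = spark.sql(f"DESCRIBE TABLE EXTENDED {full_name}").collect()
        except Exception as e:  # e.g. a corrupt table or one dropped while listing
            logger.warning(f"Couldn't fetch information for table {full_name}: {e}")
            continue
        yield {row["col_name"]: row["data_type"] for row in rows}
```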

@codecov

codecov bot commented Sep 20, 2023

Codecov Report

Merging #238 (af1a314) into main (9ef7ffe) will increase coverage by 0.32%.
Report is 1 commit behind head on main.
The diff coverage is 84.00%.

```diff
@@            Coverage Diff             @@
##             main     #238      +/-   ##
==========================================
+ Coverage   82.47%   82.80%   +0.32%     
==========================================
  Files          29       29              
  Lines        1872     1884      +12     
  Branches      303      305       +2     
==========================================
+ Hits         1544     1560      +16     
+ Misses        272      269       -3     
+ Partials       56       55       -1     
```
| Files Changed | Coverage | Δ |
| --- | --- | --- |
| ...rc/databricks/labs/ucx/workspace_access/listing.py | 91.54% <66.66%> | -5.38% ⬇️ |
| src/databricks/labs/ucx/hive_metastore/tables.py | 96.00% <100.00%> | +11.71% ⬆️ |
| src/databricks/labs/ucx/install.py | 82.05% <100.00%> | +0.05% ⬆️ |

Contributor

@nfx nfx left a comment


Thank you!

@nfx nfx added this pull request to the merge queue Sep 20, 2023
Merged via the queue into main with commit fbb7dd6 Sep 20, 2023
4 checks passed
FastLee pushed a commit that referenced this pull request Sep 20, 2023
@nfx nfx mentioned this pull request Sep 21, 2023
nfx added a commit that referenced this pull request Sep 21, 2023
* Added batched iteration for `INSERT INTO` queries in
`StatementExecutionBackend` with default `max_records_per_batch=1000`
([#237](#237)); see the sketch after this list.
* Added crawler for mount points
([#209](#209)).
* Added crawlers for compatibility of jobs and clusters, along with
basic recommendations for external locations
([#244](#244)).
* Added safe return on grants
([#246](#246)).
* Added ability to specify empty group filter in the installer script
([#216](#216))
([#217](#217)).
* Added ability for multiple users to install the application on
the same workspace ([#235](#235)).
* Added dashboard creation on installation and a requirement for
`warehouse_id` in config, so that the assessment dashboards are
refreshed automatically after job runs
([#214](#214)).
* Added reliance on rate limiting from Databricks SDK for listing
workspace ([#258](#258)).
* Fixed errors in corner cases where Azure Service Principal Credentials
were not available in Spark context
([#254](#254)).
* Fixed `DESCRIBE TABLE` throwing errors when listing Legacy Table ACLs
([#238](#238)).
* Fixed `file already exists` error in the installer script
([#219](#219))
([#222](#222)).
* Fixed `guess_external_locations` failure with `AttributeError:
as_dict` and added an integration test
([#259](#259)).
* Fixed error handling edge cases in `crawl_tables` task
([#243](#243))
([#251](#251)).
* Fixed `crawl_permissions` task failure on folder names containing a
forward slash ([#234](#234)).
* Improved `README` notebook documentation
([#260](#260),
[#228](#228),
[#252](#252),
[#223](#223),
[#225](#225)).
* Removed redundant `.python-version` file
([#221](#221)).
* Removed discovery of account groups from `crawl_permissions` task
([#240](#240)).
* Updated databricks-sdk requirement from ~=0.8.0 to ~=0.9.0
([#245](#245)).
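
To make the first bullet above concrete, here is a rough sketch of batched `INSERT INTO` generation with a `max_records_per_batch=1000` default. It illustrates the idea only and is not the actual `StatementExecutionBackend` code; the helper names and the naive value rendering are assumptions.

```python
from collections.abc import Iterable, Iterator


def _batched(rows: Iterable[tuple], max_records_per_batch: int = 1000) -> Iterator[list[tuple]]:
    """Group rows into chunks so each INSERT INTO statement stays a bounded size."""
    batch: list[tuple] = []
    for row in rows:
        batch.append(row)
        if len(batch) >= max_records_per_batch:
            yield batch
            batch = []
    if batch:
        yield batch


def save_rows(execute_sql, table: str, rows: Iterable[tuple]) -> None:
    """Emit one INSERT INTO ... VALUES statement per batch instead of one giant statement."""
    for batch in _batched(rows):
        values = ", ".join(str(row) for row in batch)  # naive literal rendering, for illustration only
        execute_sql(f"INSERT INTO {table} VALUES {values}")
```

Emitting one statement per batch bounds the size of each generated query while keeping the number of round-trips to roughly one per thousand rows.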
@nfx nfx deleted the fix_describe_table_throwing_errors branch September 21, 2023 20:58
larsgeorge-db pushed a commit that referenced this pull request Sep 23, 2023