
Update databricks-labs-blueprint requirement from <0.6.0,>=0.4.3 to >=0.4.3,<0.7.0 #1688

Merged

Conversation

@dependabot dependabot bot commented on behalf of github May 13, 2024

Updates the requirements on databricks-labs-blueprint to permit the latest version.

Release notes

Sourced from databricks-labs-blueprint's releases.

v0.6.0

  • Added upstream wheel uploads for Databricks Workspaces without Public Internet access (#99). This commit introduces a new feature for uploading upstream wheel dependencies to Databricks Workspaces without Public Internet access. A new flag has been added to upload functions, allowing users to include or exclude dependencies in the download list. The WheelsV2 class has been updated with a new method, upload_wheel_dependencies(prefixes), which checks if each wheel's name starts with any of the provided prefixes before uploading it to the Workspace File System (WSFS). This feature also includes two new tests to verify the functionality of uploading the main wheel package and dependent wheel packages, optimizing downloads based on specific use cases. This enables users to more easily use the package in offline environments with restricted internet access, particularly for Databricks Workspaces with extra layers of network security.
  • Fixed bug for double-uploading of unreleased wheels in air-gapped setups (#103). In this release, we have addressed a bug in the upload_wheel_dependencies method of the WheelsV2 class that caused double-uploading of unreleased wheels in air-gapped setups. The issue occurred because the condition `if wheel.name == self._local_wheel.name` was never met, resulting in undefined behavior. We introduced a cached property _current_version to handle unreleased versions uploaded to air-gapped workspaces, and added a new method, upload_to_wsfs(), that uploads files to the workspace file system (WSFS); it is exercised in the integration test. This release also includes new tests to ensure that only the Databricks SDK is uploaded and that the number of installation files is correct. With these changes, the double-uploading issue is resolved, and the installation files, Databricks SDK, Blueprint, and version.json metadata are now uploaded to WSFS correctly.

Contributors: @aminmovahed-db, @nfx

Changelog

Sourced from databricks-labs-blueprint's changelog.

0.6.0

  • Added upstream wheel uploads for Databricks Workspaces without Public Internet access (#99). This commit introduces a new feature for uploading upstream wheel dependencies to Databricks Workspaces without Public Internet access. A new flag has been added to upload functions, allowing users to include or exclude dependencies in the download list. The WheelsV2 class has been updated with a new method, upload_wheel_dependencies(prefixes), which checks if each wheel's name starts with any of the provided prefixes before uploading it to the Workspace File System (WSFS). This feature also includes two new tests to verify the functionality of uploading the main wheel package and dependent wheel packages, optimizing downloads based on specific use cases. This enables users to more easily use the package in offline environments with restricted internet access, particularly for Databricks Workspaces with extra layers of network security.
  • Fixed bug for double-uploading of unreleased wheels in air-gapped setups (#103). In this release, we have addressed a bug in the upload_wheel_dependencies method of the WheelsV2 class that caused double-uploading of unreleased wheels in air-gapped setups. The issue occurred because the condition `if wheel.name == self._local_wheel.name` was never met, resulting in undefined behavior. We introduced a cached property _current_version to handle unreleased versions uploaded to air-gapped workspaces, and added a new method, upload_to_wsfs(), that uploads files to the workspace file system (WSFS); it is exercised in the integration test. This release also includes new tests to ensure that only the Databricks SDK is uploaded and that the number of installation files is correct. With these changes, the double-uploading issue is resolved, and the installation files, Databricks SDK, Blueprint, and version.json metadata are now uploaded to WSFS correctly. A usage sketch follows this list.
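
For illustration, here is a minimal sketch of the upload flow described above, using the `ProductInfo`/`Installation` helpers from blueprint's documented API. The `"databricks"` prefix and the user-home installation are illustrative assumptions, and exact signatures may differ between releases:

```python
from databricks.sdk import WorkspaceClient
from databricks.labs.blueprint.installation import Installation
from databricks.labs.blueprint.wheels import ProductInfo, WheelsV2

ws = WorkspaceClient()
product_info = ProductInfo(__file__)
installation = Installation.assume_user_home(ws, product_info.product_name())

with WheelsV2(installation, product_info) as wheels:
    # upload the locally built wheel to the Workspace File System (WSFS)
    wheels.upload_to_wsfs()
    # also upload dependency wheels whose names start with any of the
    # given prefixes, useful for workspaces without public internet access
    wheels.upload_wheel_dependencies(["databricks"])
```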

0.5.0

  • Added content assertion for assert_file_uploaded and assert_file_dbfs_uploaded in MockInstallation (#101). This commit introduces a content assertion feature to the MockInstallation class, enhancing its testing capabilities. This is achieved by adding an optional expected parameter of type bytes to the assert_file_uploaded and assert_file_dbfs_uploaded methods, allowing users to verify the uploaded content's correctness. The _assert_upload method has also been updated to accept this new parameter, ensuring the actual uploaded content matches the expected content. Furthermore, the commit includes informative docstrings for the new and updated methods, providing clear explanations of their functionality and usage. To support these improvements, new test cases test_assert_file_uploaded and test_load_empty_data_class have been added to the tests/unit/test_installation.py file, enabling more rigorous testing of the MockInstallation class and ensuring that the expected content is uploaded correctly. A usage sketch of the content assertion follows this list.
  • Added handling for partial functions in parallel.Threads (#93). In this release, we have enhanced the parallel.Threads module with the ability to handle partial functions, addressing issue #93. This improvement includes the addition of a new static method, _get_result_function_signature, to obtain the signature of a function or a string representation of its arguments and keywords if it is a partial function. The _wrap_result class method has also been updated to log an error message with the function's signature if an exception occurs. Furthermore, we have added a new test case, test_odd_partial_failed, to the unit tests, ensuring that the gather function handles partial functions that raise errors correctly. The Python version required for this project remains at 3.10, and the pyproject.toml file has been updated to include "isort", "mypy", "types-PyYAML", and "types-requests" in the list of dependencies. These adjustments are aimed at improving the functionality and type checking in the parallel.Threads module.
  • Align configurations with UCX project (#96). This commit brings project configurations in line with the UCX project through various fixes and updates, enhancing compatibility and streamlining collaboration. It addresses pylint configuration warnings, adjusts GitHub Actions workflows, and refines the pyproject.toml file. Additionally, the NiceFormatter class in logger.py has been improved for better code readability, and the versioning scheme has been updated to ensure SemVer and PEP440 compliance, making it easier to manage and understand the project's versioning. Developers adopting the project will benefit from these alignments, as they promote adherence to the project's standards and up-to-date best practices.
  • Check backwards compatibility with UCX, Remorph, and LSQL (#84). This release includes an update to the dependabot configuration to check for daily updates in both the pip and github-actions package ecosystems, with a new directory parameter added for the pip ecosystem for more precise update management. Additionally, a new GitHub Actions workflow, "downstreams", has been added to ensure backwards compatibility with UCX, Remorph, and LSQL by running automated downstream checks on pull requests, merge groups, and pushes to the main branch. The workflow has appropriate permissions for writing id-tokens, reading contents, and writing pull-requests, and runs the downstreams action from the databrickslabs/sandbox repository using GITHUB_TOKEN for authentication. These changes improve the security and maintainability of the project by ensuring compatibility with downstream projects and staying up-to-date with the latest package versions, reducing the risk of potential security vulnerabilities and bugs.
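
A minimal sketch of the new content assertion, assuming the `MockInstallation` API described above:

```python
from databricks.labs.blueprint.installation import MockInstallation

installation = MockInstallation()
installation.upload("config.yml", b"version: 1\n")

# passes only if the file was uploaded *and* its bytes match `expected`
installation.assert_file_uploaded("config.yml", expected=b"version: 1\n")
```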

Dependency updates:

  • Bump actions/setup-python from 4 to 5 (#89).
  • Bump softprops/action-gh-release from 1 to 2 (#87).
  • Bump actions/checkout from 2.5.0 to 4.1.2 (#88).
  • Bump codecov/codecov-action from 1 to 4 (#85).
  • Bump actions/checkout from 4.1.2 to 4.1.3 (#95).
  • Bump actions/checkout from 4.1.3 to 4.1.5 (#100).

0.4.4

  • If Threads.strict() raises just one error, don't wrap it with ManyError (#79). The strict method in the gather function of the parallel.py module in the databricks/labs/blueprint package has been updated to change how it handles errors. Previously, if any task in the tasks sequence failed, the strict method would raise a ManyError exception containing all the errors. With this change, if only one error occurs, that error is raised directly without being wrapped in a ManyError exception, which simplifies error handling and avoids unnecessary nesting of exceptions. Additionally, the __tracebackhide__ dunder variable has been added to the method, hiding its frame from tracebacks to improve their readability. This update aims to provide a more streamlined and user-friendly experience for handling errors in parallel processing tasks. A sketch of the new behavior follows.
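
A sketch of the error-handling change, assuming the documented `Threads.strict(name, tasks)` entry point; the task callables are invented for illustration:

```python
from databricks.labs.blueprint.parallel import Threads

def works() -> str:
    return "ok"

def fails() -> str:
    raise ValueError("boom")

# with exactly one failing task, strict() re-raises the underlying
# ValueError directly; multiple failures still raise ManyError
try:
    Threads.strict("sample tasks", [works, fails])
except ValueError as err:
    print(f"single failure surfaced directly: {err}")
```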

0.4.3

  • Fixed marshalling & unmarshalling edge cases (#76). The serialization and deserialization methods in the code have been updated to improve handling of edge cases during marshalling and unmarshalling of data. When encountering certain edge cases, the _marshal_list method will now return an empty list instead of None, and both the _unmarshal and _unmarshal_dict methods will return None as is if the input is None. Additionally, the _unmarshal method has been updated to call _unmarshal_generic instead of checking if the type reference is a dictionary or list when it is a generic alias. The _unmarshal_generic method has also been updated to handle cases where the input is None. A new test case, test_load_empty_data_class(), has been added to the tests/unit/test_installation.py file to verify that these edge cases are handled correctly during marshalling and unmarshalling. These changes increase the reliability of the serialization and deserialization processes. A round-trip sketch follows this entry.
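
A hypothetical round-trip illustrating the optional-field edge cases, assuming `save`/`load` behave as described; the `SampleConfig` class is invented for this sketch:

```python
from dataclasses import dataclass
from databricks.labs.blueprint.installation import MockInstallation

@dataclass
class SampleConfig:  # hypothetical class, only for illustration
    name: str
    tags: list[str] | None = None  # optional field exercising the edge cases

installation = MockInstallation()
installation.save(SampleConfig(name="demo"), filename="sample.json")

# a None-valued optional field should round-trip as None rather than fail
loaded = installation.load(SampleConfig, filename="sample.json")
print(loaded)  # SampleConfig(name='demo', tags=None)
```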

0.4.2

  • Fixed edge cases when loading typing.Dict, typing.List and typing.ClassVar (#74). In this release, we have implemented changes to improve the handling of edge cases related to the Python typing.Dict, typing.List, and typing.ClassVar during serialization and deserialization of dataclasses and generic types. Specifically, we have modified the _marshal and _unmarshal functions to check for the __origin__ attribute to determine whether the type is a ClassVar and skip it if it is. The _marshal_dataclass and _unmarshal_dataclass functions now check for the __dataclass_fields__ attribute to ensure that only dataclass fields are marshaled and unmarshaled. We have also added a new unit test for loading a complex data class using the MockInstallation class, which contains various attributes such as a string, a nested dictionary, a list of Policy objects, and a dictionary mapping string keys to Policy objects. This test case checks that the installation object correctly serializes and deserializes the ComplexClass instance to and from JSON format according to the specified attribute types, including handling of the typing.Dict, typing.List, and typing.ClassVar types. These changes improve the reliability and robustness of our library in handling complex data types defined in the typing module.
  • MockPrompts.extend() now returns a copy (#72). In the latest release, the extend() method in the MockPrompts class of the tui.py module has been enhanced. Previously, extend() would modify the original MockPrompts object, which could lead to issues when reusing the same object in multiple places within the same test, as its state would be altered each time extend() was called. This has been addressed by updating the extend() method to return a copy of the MockPrompts object with the updated patterns and answers, instead of modifying the original object. This change ensures that the original MockPrompts object can be safely reused in multiple test scenarios without unintended side effects, preserving the integrity of the original state. Furthermore, additional tests have been incorporated to verify the correct behavior of both the new and original prompts. A short sketch follows this list.
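
A short sketch of the copy semantics, assuming the `MockPrompts` API described above; patterns and answers are illustrative:

```python
from databricks.labs.blueprint.tui import MockPrompts

base = MockPrompts({r"Proceed.*": "yes"})

# extend() returns a copy with the extra pattern; `base` is unchanged
# and can be reused across tests without side effects
extended = base.extend({r"Pick a number.*": "42"})

print(extended.question("Pick a number"))  # -> 42
print(base.confirm("Proceed?"))            # -> True
```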

0.4.1

  • Fixed MockInstallation to emulate workspace-global setup (#69). In this release, the MockInstallation class in the installation module has been updated to better replicate a workspace-global setup, enhancing testing and development accuracy. The is_global method now utilizes the product method instead of _product, and a new instance variable _is_global with a default value of True is introduced in the __init__ method. Moreover, a new product method is included, which consistently returns the string "mock". These enhancements resolve issue #69, "Fixed MockInstallation to emulate workspace-global setup", ensuring the MockInstallation instance behaves as a global installation, facilitating precise and reliable testing and development for our software engineering team.
  • Improved MockPrompts with extend() method (#68). In this release, we've added an extend() method to the MockPrompts class in our library's TUI module. This new method allows developers to add new patterns and corresponding answers to the existing list of questions and answers in a MockPrompts object. The added patterns are compiled as regular expressions and the questions and answers list is sorted by the length of the regular expression patterns in descending order. This feature is particularly useful for writing tests where prompt answers need to be changed, as it enables better control and customization of prompt responses during testing. By extending the list of questions and answers, you can handle additional prompts without modifying the existing ones, resulting in more organized and maintainable test code. If a prompt hasn't been mocked, attempting to ask a question with it will raise a ValueError with an appropriate error message.
  • Use Hatch v1.9.4 as build machine requirement (#70). The Hatch package version for the build machine requirement has been updated from 1.7.0 to 1.9.4 in this change. This update streamlines the Hatch setup and version management, removing the specific installation step and listing hatch directly in the required field. The pre-setup command now only includes "hatch env create". Additionally, the acceptance tool version has been updated to ensure consistent project building and testing with the specified Hatch version. This change is implemented in the acceptance workflow file and the version of the acceptance tool used by the sandbox. This update ensures that the project can utilize the latest features and bug fixes available in Hatch 1.9.4, improving the reliability and efficiency of the build process. This change is part of the resolution of issue #70.

0.4.0

  • Added commands with interactive prompts (#66). This commit introduces a new feature in the Databricks Labs project to support interactive prompts in the command-line interface (CLI) for enhanced user interactivity. The Prompts argument, imported from databricks.labs.blueprint.tui, is now integrated into the @app.command decorator, enabling the creation of commands with user interaction like confirmation prompts. An example of this is the me command, which confirms whether the user wants to proceed before displaying the current username. The commit also refactored the code to make it more efficient and maintainable, removing redundancy in creating client instances. The AccountClient and WorkspaceClient instances can now be provided automatically with the product name and version. These changes improve the CLI by making it more interactive, user-friendly, and adaptable to various use cases while also optimizing the codebase for better efficiency and maintainability. A minimal command sketch follows this list.
  • Added more code documentation (#64). This release introduces new features and updates to various files in the open-source library. The cli.py file in the src/databricks/labs/blueprint directory has been updated with a new decorator, command, which registers a function as a command. The entrypoint.py file in the databricks.labs.blueprint module now includes a module-level docstring describing its purpose, as well as documentation for the various standard libraries it imports. The Installation class in the installers.py file has new methods for handling files, such as load, load_or_default, upload, load_local, and files. The installers.py file also includes a new InstallationState dataclass, which is used to track installations. The limiter.py file now includes code documentation for the RateLimiter class and the rate_limited decorator, which are used to limit the rate of requests. The logger.py file includes a new NiceFormatter class, which provides a nicer format for logging messages with colors and bold text if the console supports it. The parallel.py file has been updated with new methods for running tasks in parallel and returning results and errors. The tui.py file has been documented and includes imports for logging, regular expressions, and the collections abstract base classes. Lastly, the upgrades.py file has been updated with additional code documentation and new methods for loading and applying upgrade scripts. Overall, these changes improve the functionality, maintainability, and usability of the open-source library.
  • Fixed init-project command (#65). In this release, the init-project command has been improved with several bug fixes and new functionalities. A new import statement for the sys module has been added, and a docs directory is now included in the copied directories and files during initialization. The init_project function has been updated to open files using the default system encoding, ensuring proper reading and writing of file contents. The relative_paths function in the entrypoint.py file now returns absolute paths if the common path is the root directory, addressing issue #41. Additionally, several test functions have been added to tests/unit/test_entrypoint.py, enhancing the reliability and robustness of the init-project command by providing comprehensive tests for supporting functions. Overall, these changes significantly improve the functionality and reliability of the init-project command, ensuring a more consistent and accurate project initialization process.
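
A minimal command sketch in the style of blueprint's documented CLI examples; the `me` command mirrors the one described above, and exact helper names may vary by release:

```python
from databricks.sdk import WorkspaceClient
from databricks.labs.blueprint.cli import App
from databricks.labs.blueprint.entrypoint import get_logger
from databricks.labs.blueprint.tui import Prompts

app = App(__file__)
logger = get_logger(__file__)

@app.command
def me(w: WorkspaceClient, prompts: Prompts):
    """Shows the current username after an interactive confirmation"""
    if prompts.confirm("Are you sure?"):
        logger.info(f"Hello, {w.current_user.me().user_name}!")

if __name__ == "__main__":
    app()
```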

... (truncated)

Commits
  • 2b75d24 Release v0.6.0 (#104)
  • 41e4aab Fixed bug for double-uploading of unreleased wheels in air-gapped setups (#103)
  • 50b5474 Added upstream wheel uploads for Databricks Workspaces without Public Interne...
  • c959367 Release v0.5.0 (#102)
  • 47ab384 Bump actions/checkout from 4.1.3 to 4.1.5 (#100)
  • aa3bf8c Added content assertion for assert_file_uploaded and assert_file_dbfs_uplo...
  • a5a8563 Align configurations with UCX project (#96)
  • 43add0b Bump actions/checkout from 4.1.2 to 4.1.3 (#95)
  • d2ceef7 Handle partial functions in parallel.Threads (#93)
  • ea62287 Bump codecov/codecov-action from 1 to 4 (#85)
  • Additional commits viewable in compare view

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot show <dependency name> ignore conditions will show all of the ignore conditions of the specified dependency
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

Updates the requirements on [databricks-labs-blueprint](https://github.com/databrickslabs/blueprint) to permit the latest version.
- [Release notes](https://github.com/databrickslabs/blueprint/releases)
- [Changelog](https://github.com/databrickslabs/blueprint/blob/main/CHANGELOG.md)
- [Commits](databrickslabs/blueprint@v0.4.3...v0.6.0)

---
updated-dependencies:
- dependency-name: databricks-labs-blueprint
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
@dependabot dependabot bot requested a review from a team May 13, 2024 15:50
@dependabot dependabot bot added dependencies Pull requests that update a dependency file python Pull requests that update Python code labels May 13, 2024
@dependabot dependabot bot requested a review from william-conti May 13, 2024 15:50
@nfx nfx left a comment

lgtm

@nfx nfx merged commit 00434ee into main May 21, 2024
5 of 6 checks passed
@nfx nfx deleted the dependabot/pip/databricks-labs-blueprint-gte-0.4.3-and-lt-0.7.0 branch May 21, 2024 10:47
@nfx nfx mentioned this pull request May 27, 2024
nfx added a commit that referenced this pull request May 27, 2024
* Added `%pip` cell resolver ([#1697](#1697)). A newly developed pip resolver has been integrated into the ImportResolver for future use, addressing issue [#1642](#1642) and following up on [#1694](#1694). The resolver installs libraries and modifies the path lookup to make them available for import. This change affects existing workflows but does not introduce new CLI commands, tables, or files. The commit includes modifications to the build_dependency_graph method and the addition of unit tests to verify the new functionality. The resolver has been manually tested and passes the unit tests, ensuring better compatibility and accessibility for libraries used in the project.
* Added downloads of `requirements.txt` dependencies locally to register them in the dependency graph ([#1753](#1753)). This commit introduces support for linting job tasks that require a 'requirements.txt' file for specifying dependencies. It resolves issue [#1644](#1644) and is similar to [#1704](#1704). The changes include the addition of a new CLI command, modification of the existing 'databricks labs ucx ...' command, and modification of the `experimental-workflow-linter` workflow. The `lint_job` method has been updated to handle dependencies specified in a 'requirements.txt' file, checking for their presence in the job's libraries list and flagging any missing dependencies. The code changes include modifications to the 'jobs.py' file to register libraries specified in a 'requirements.txt' file to the dependency graph. Unit and integration tests have been added to verify the new functionality. The changes also include handling of jar libraries. The code includes TODO comments for future enhancements such as downloading the library wheel and adding it to the virtual system path, and handling references to other requirements files and constraints files.
* Added ability to install UCX on workspaces without Public Internet connectivity ([#1566](#1566)). A new flag, `upload_dependencies`, has been added to the WorkspaceConfig to enable users to upload dependencies to air-gapped workspaces without public internet connectivity. This flag is a boolean value that is set to False by default and can be set by the user through the installation prompt. This feature resolves issue [#573](#573) and was co-authored by hari-selvarajan_data. When this flag is set to True, it triggers the upload of specified dependencies during installation, which allows for the installation of UCX on workspaces without public internet access. This change also includes updating the version of `databricks-labs-blueprint` from `<0.7.0` to `>=0.6.0`, which may include changes to existing functionality. Additionally, new test functions have been added to test the functionality of uploading dependencies when the `upload_dependencies` flag is set to True.
* Added initial interface for data comparison framework ([#1695](#1695)). This commit introduces the initial interface for a data comparison framework, which includes classes and methods for managing metadata, profiling data, and comparing schema and data for tables. A new `StandardDataComparator` class has been implemented for comparing the data of two tables, and a `StandardSchemaComparator` class tests the comparison of table schemas. The framework also includes the `DatabricksTableMetadataRetriever` class for retrieving metadata about a given table using a SQL backend. Additional classes and methods will be implemented in future work to provide a robust data comparison framework, such as `StandardDataProfiler` for profiling data, `SchemaComparator` and `DataComparator` for comparing schema and data, and test fixtures and functions for testing the framework. This release lays the groundwork for enabling users to perform comprehensive data comparisons effectively, enhancing the project's capabilities and versatility.
* Added lint local code command ([#1710](#1710)). A new `lint local code` command has been added to the databricks labs ucx tool, allowing users to assess required migrations in a local directory or file. This command detects dependencies and analyzes them, currently supporting Python and SQL files, with an expected runtime of under a minute for code bases up to 50,000 lines of code. The command generates output that includes file links opening the file at the problematic line in modern IDEs, providing a quick and easy way to identify necessary migrations. The `lint-local-code` command is implemented in the `application.py` file, with supporting methods and classes added to the `workspace_cli.py` and `databricks.labs.ucx.source_code` packages, enhancing the linting process and providing valuable feedback for maintaining high code quality standards.
* Added table in mount migration ([#1225](#1225)). This commit introduces new functionality to migrate tables in mounts to the Unity Catalog, including creating a table in the Unity Catalog based on a table mapping CSV file, fixing an issue with include_paths_in_mount not being present in workflows.py, and adding the ability to set default ownership on each created table. A new method ScanTablesInMounts has been added to scan tables in mounts, and a TableMigration class creates tables in the Unity Catalog based on the table mapping. Two new methods, Rule and TableMapping, have been added to manage mappings of tables, and TableToMigrate is used to represent a table that needs to be migrated to Unity Catalog. The commit includes manual, unit, and integration testing to ensure the changes work as expected. The diff shows changes to the workflows.py file and the addition of several new methods, including Rule, TableMapping, TableToMigrate, create_autospec, and MockBackend.
* Added workflows to trigger table reconciliations ([#1721](#1721)). In this release, we've introduced several enhancements to our table migration workflow, focusing on data reconciliation and consistency. We've added a new post-migration data reconciliation task that validates migrated table integrity by comparing the schema, row count, and individual row content of the source and target tables. The new task stores and displays the number of missing rows in the Migration dashboard's `$inventory_database.reconciliation_results` view. Additionally, new workflows have been implemented to automatically trigger table reconciliations, ensuring consistency and integrity between different data sources. These workflows involve modifying relevant functions and modules, and may include new methods for data processing, scheduling, or monitoring based on the project's architecture. Furthermore, new configuration options for table reconciliation are now available in the WorkspaceConfig class, allowing for greater control and flexibility over migration processes. By incorporating these improvements, users can expect enhanced data consistency and more efficient table reconciliation management.
* Always refresh HMS stats when getting table size ([#1713](#1713)). A change has been implemented in the hive_metastore library to enhance the precision of table size calculations by ensuring that HMS stats are always refreshed before being retrieved. This has been achieved by calling the ANALYZE TABLE command with the COMPUTE STATISTICS NOSCAN option before computing the table size, thus preventing the use of stale stats. Specifically, the "backend.queries" list has been updated to include two ANALYZE statements for tables "db1.table1" and "db1.table2", ensuring that their statistics are updated and accurate. The test case `test_table_size_crawler` in the "test_table_size.py" file has been revised to validate the presence of the two ANALYZE statements in the "backend.queries" list and confirm the size of the results for both tables. This commit also includes manual testing, added unit tests, and verification on the staging environment to ensure the functionality.
* Automatically retrieve `aws_account_id` from aws profile instead of prompting ([#1715](#1715)). This commit introduces several improvements to the library's AWS integration, enhancing automation and user experience. It eliminates the need for manual input of `aws_account_id` by automatically retrieving it from the AWS profile. An optional `kms-key` flag has been documented for creating roles, providing more flexibility. The `create-missing-principals` command now accepts optional parameters such as KMS Key, Role Name, Policy Name, and allows creating a single role for all S3 locations, with a default behavior of creating one role per S3 location. These changes have been manually tested and verified in a staging environment, and resolve issue [#1714](#1714). Additionally, tests have been conducted to ensure the changes do not introduce regressions. A new method simulating a successful AWS CLI call has been added, replacing `aws_cli_run_command`, ensuring automated retrieval of `aws_account_id`. A test has also been added to raise an error when AWS CLI is not found in the system path.
* Detect dependencies of libraries installed via pip ([#1703](#1703)). This commit introduces a child dependency graph for libraries resolved via pip using DistInfo data, addressing issues [#1642](#1642) and [#1202](#1202). It modifies certain tests and reduces their execution time. The PipResolver class in `databricks.labs.ucx.source_code.graph` is used to detect and resolve library dependencies installed via pip, with methods to locate, install, and register libraries in a specified folder. A new Whitelist feature and updated DistInfoPackage class are also included. Although unit tests have been added, no new user documentation, CLI commands, workflows, or tables have been added or modified. The previous site_packages attribute has been removed from the GlobalContext class.
* Emit problems with code belonging to job ([#1730](#1730)). In this release, the jobs.py file has been updated with new functionality in the JobProblem class, enabling it to convert itself into a string message using the new as_message() method. The refresh_report() method has been modified to call a new _lint_job() method when provided with a job object, which returns a list of JobProblem instances. The lint_job() method has also been updated to call _lint_job() and return a list of JobProblem instances, with a new behavior to log warning messages when problems are found. The changes include the addition of a new method, `lint_job`, for linting a job and returning any problems found. The changes have been tested through the addition of a new integration test, `test_job_linter_some_notebook_graph_with_problems`, and are manually tested and covered with unit and integration tests. This release addresses issue [#1542](#1542) and improves the job linter functionality, specifically detecting and emitting problems related to code belonging to a job during job linting. The new `JobProblem` class has an `as_message()` method that returns a string representation of the problem, and a unit test for this method has been added. The `DependencyResolver` in the `DependencyGraph` constructor has also been modified.
* Fixed `create-catalogs-schemas` to allow more than 1 level nesting more than the external location ([#1701](#1701)). The `create-catalogs-schemas` library has been updated to allow for more than one level of nesting beyond the external location, addressing issue [#1700](#1700). This release includes a new CLI command, as well as modifications to the existing `databricks labs ucx ...` command. A new workflow has been added and existing functionality has been changed to support the additional nesting levels. The changes have been thoroughly tested through manual testing, unit tests, and integration tests using the `fnmatch.fnmatch` method for validating location patterns. Software engineers adopting this project will benefit from these enhancements.
* Fixed local file resolver logic with relative paths and site-packages ([#1685](#1685)). This commit addresses an issue ([#1685](#1685)) related to the local file resolver logic for relative paths and site-packages. The resolver's logic has been updated to look for `_package_/__init__.py` instead of relying on `dist-info` metadata, and the resolver has been wired back into the global resolver chain with updated calling code. No changes have been made to user documentation, CLI commands, workflows, or tables. New methods have not been added, but existing functionality has been modified to enhance local file resolution handling. Unit tests have been added and manually verified to ensure proper functionality.
* Fixed look up logic where instance profile name does not match role name ([#1716](#1716)). A fix has been implemented to improve the robustness of the instance profile lookup mechanism in the open-source library. Previously, the code relied on the role name being the same as the instance profile name, which resulted in issues when the names did not match ([#1716](#1716), [#1711](#1711)). This has been addressed by updating the `role_name` method in the `AWSRoleAction` class to use a new regex pattern 'AWSResources.ROLE_NAME_REGEX', and renaming the `get_instance_profile` method in the `AWSResources` class to `get_instance_profile_arn` to reflect the change in return type from a string to an ARN. A new method, 'get_instance_profile_role_arn', has also been added to the `AWSResources` class to retrieve the role ARN from the instance profile. Additionally, new methods `get_instance_profile_arn` and `instance_lookup` have been added to improve testing capabilities.
* Fixed pip install in a multiline cell ([#1728](#1728)). This release includes a fix for an issue where pip install commands with multiline code were not being handled correctly (issue [#1728](#1728), issue [#1642](#1642)). The `build_dependency_graph` function of the `PipCell` class has been updated to properly register the library specified in the pip install command, even if it is spread over multiple lines. The function now splits the original code by spaces or new lines, allowing it to extract the library name correctly. These changes have been thoroughly tested through manual testing and unit tests to ensure that pip install commands with multiline code are now handled correctly, resulting in the library being installed and registered properly. A tokenizing sketch follows this list.
* README update about Standard workspaces ([#1734](#1734)). In this release, the README file of our open-source library has been updated to provide additional user documentation on compatibility with Standard Workspaces on Databricks. The changes include an outlined incompatibility section, specifically designed for users of Standard Workspaces. It is important to note that these updates are purely informational and do not involve any changes to existing commands, workflows, tables, or functionality within the code. No new methods or modifications have been made to the existing functionality. The commit does not include any tests, as the changes are limited to updating user documentation. The changes have been manually tested to ensure accuracy. The target audience for this release includes software engineers who are adopting the project and may require additional guidance on compatibility with Standard Workspaces. Additionally, please note that a Databricks Premium or Enterprise workspace is now a prerequisite for using this library.
* Show code problems found by workflow linter in the migration dashboard ([#1741](#1741)). This commit introduces a new feature to the migration dashboard: an experimental workflow linter that identifies code compatibility problems for Unity Catalog integration. The feature includes a new CLI command, `migration_report`, which refreshes the migration dashboard after all previous tasks are completed, and an existing command, `databricks labs ucx ...`, has been modified. The `experimental-workflow-linter` workflow has also been changed, and new functionality has been added in the form of a new workflow. A new SQL query for displaying code compatibility problems is located in the file "02_1_code_compatibility_problems.sql". User documentation has been added, and the changes have been manually tested. This feature aims to improve the migration dashboard's functionality and provide a better experience for users. Targeted at software engineers, this feature will help in identifying and resolving code compatibility issues during the migration process.
* Support for s3a/s3n protocols when using mount points ([#1765](#1765)). In this release, we have added support for s3a and s3n protocols when using mount points in the metastore locations. A new static method, `_get_ext_location_definitions`, has been introduced, which generates a name for a resource defined by the location and now supports additional prefixes "s3a://" and "s3n://" for defining resources in S3. For Azure Blob Storage, the container name is extracted from the location and included in the resource name. If the location does not match the supported formats, a warning is logged, and the script is not generated. These changes offer more flexibility in defining resources and improve the system's ability to handle various cloud storage solutions. Additionally, the `test_save_external_location_mapping_missing_location` function in `test_locations.py` has been updated to include test cases for s3a and s3n protocols, enhancing the software's functionality.
* Support joining an existing collection when installing UCX ([#1675](#1675)). The AccountInstaller class has been updated to include a new functionality that allows users to join an existing collection during UCX installation. This is achieved by presenting the user with a list of workspaces they have access to, allowing them to select one, and then checking if there are existing workspace IDs present in the selected workspace. If so, the installation will join the corresponding collection; otherwise, a new collection will be created. This feature simplifies UCX migration for large organizations with multiple workspaces by allowing them to manage collections instead of individual workspaces. Relevant user documentation and CLI commands have been updated, along with new and modified tests to ensure proper functionality. The commit includes the addition of new methods, `join_collection` and `is_account_install`, as well as updates to the `install_on_account` method to call `join_collection` if specified. Unit tests and integration tests have been added to ensure the proper functioning of the new feature.
* Updated UCX job cluster policy AWS zone_id to `auto` ([#1735](#1735)). In this release, the UCX job cluster policy for AWS has been updated to use `auto` for the zone_id, allowing Databricks to choose the zone based on a default value in the region. This change, which resolves issue [#533](#533), affects the definition method in the policy.py file, where a check has been added to remove 'aws_attributes.zone_id' if an instance pool ID is provided. The tests for this change include manual testing and new unit tests, with modifications to existing workflows. The diff shows updates to the test_policy.py file, where the 'aws_attributes.zone_id' is set to `auto` in several functions. No new CLI commands or documentation have been provided as part of this update.
* Updated assessment.md - `spark.catalog.x` guidance needed updating ([#1708](#1708)). With the release of DBR 14+, the `spark.catalog.*` functions, which were previously not recommended for use on shared compute clusters due to security reasons, are now considered safe to use. This change in guidance is reflected in the updated assessment.md document, which also notes that `spark.sql("<sql command>")` may still be a more suitable alternative for certain common spark.catalog functions like tableExists, listTables, and setDefaultCatalog. The corresponding `spark._jsparkSession.catalog` methods are also mentioned as potential alternatives on DBR 14.1 and above. It is important to note that no new methods or functionality have been added, and no existing functionality has been changed - only the guidance in the documentation has been updated. This update has been manually tested and implemented in the documentation to ensure accuracy and reliability for software engineers.
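
As referenced in the multiline `%pip` fix above, a hypothetical sketch of the tokenizing approach: splitting on any whitespace so the library name is found even when the command spans several lines. The cell text is invented for illustration:

```python
cell = "%pip install \\\n    databricks-labs-blueprint==0.6.0"

# split on spaces *and* newlines, dropping line-continuation backslashes
tokens = [t for t in cell.split() if t != "\\"]
if tokens[:2] == ["%pip", "install"]:
    library = tokens[2]
    print(library)  # -> databricks-labs-blueprint==0.6.0
```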

Dependency updates:

 * Updated sqlglot requirement from <23.15,>=23.9 to >=23.9,<23.16 ([#1681](#1681)).
 * Updated databricks-labs-blueprint requirement from <0.6.0,>=0.4.3 to >=0.4.3,<0.7.0 ([#1688](#1688)).
 * Updated sqlglot requirement from <23.16,>=23.9 to >=23.9,<23.18 ([#1724](#1724)).
 * Updated sqlglot requirement from <23.18,>=23.9 to >=23.9,<24.1 ([#1745](#1745)).
 * Updated databricks-sdk requirement from ~=0.27.0 to >=0.27,<0.29 ([#1756](#1756)).
 * Bump databrickslabs/sandbox from acceptance/v0.2.1 to 0.2.2 ([#1769](#1769)).
nfx added a commit that referenced this pull request May 27, 2024
* Added `%pip` cell resolver
([#1697](#1697)). A newly
developed pip resolver has been integrated into the ImportResolver for
future use, addressing issue
[#1642](#1642) and following
up on [#1694](#1694). The
resolver installs libraries and modifies the path lookup to make them
available for import. This change affects existing workflows but does
not introduce new CLI commands, tables, or files. The commit includes
modifications to the build_dependency_graph method and the addition of
unit tests to verify the new functionality. The resolver has been
manually tested and passes the unit tests, ensuring better compatibility
and accessibility for libraries used in the project.
* Added downloads of `requirementst.txt` dependency locally to register
it to the dependency graph
([#1753](#1753)). This
commit introduces support for linting job tasks that require a
'requirements.txt' file for specifying dependencies. It resolves issue
[#1644](#1644) and is
similar to [#1704](#1704).
The changes include the addition of a new CLI command, modification of
the existing 'databricks labs ucx ...' command, and modification of the
`experimental-workflow-linter` workflow. The `lint_job` method has been
updated to handle dependencies specified in a 'requirements.txt' file,
checking for their presence in the job's libraries list and flagging any
missing dependencies. The code changes include modifications to the
'jobs.py' file to register libraries specified in a 'requirements.txt'
file to the dependency graph. Unit and integration tests have been added
to verify the new functionality. The changes also include handling of
jar libraries. The code includes TODO comments for future enhancements
such as downloading the library wheel and adding it to the virtual
system path, and handling references to other requirements files and
constraints files.
* Added ability to install UCX on workspaces without Public Internet
connectivity
([#1566](#1566)). A new
flag, `upload_dependencies`, has been added to the WorkspaceConfig to
enable users to upload dependencies to air-gapped workspaces without
public internet connectivity. This flag is a boolean value that is set
to False by default and can be set by the user through the installation
prompt. This feature resolves issue
[#573](#573) and was
co-authored by hari-selvarajan_data. When this flag is set to True, it
triggers the upload of specified dependencies during installation, which
allows for the installation of UCX on workspaces without public internet
access. This change also includes updating the version of
`databricks-labs-blueprint` from `<0.7.0` to `>=0.6.0`, which may
include changes to existing functionality. Additionally, new test
functions have been added to test the functionality of uploading
dependencies when the `upload_dependencies` flag is set to True.
* Added initial interface for data comparison framework
([#1695](#1695)). This
commit introduces the initial interface for a data comparison framework,
which includes classes and methods for managing metadata, profiling
data, and comparing schema and data for tables. A new
`StandardDataComparator` class has been implemented for comparing the
data of two tables, and a `StandardSchemaComparator` class tests the
comparison of table schemas. The framework also includes the
`DatabricksTableMetadataRetriever` class for retrieving metadata about a
given table using a SQL backend. Additional classes and methods will be
implemented in future work to provide a robust data comparison
framework, such as `StandardDataProfiler` for profiling data,
`SchemaComparator` and `DataComparator` for comparing schema and data,
and test fixtures and functions for testing the framework. This release
lays the groundwork for enabling users to perform comprehensive data
comparisons effectively, enhancing the project's capabilities and
versatility.
* Added lint local code command
([#1710](#1710)). A new
`lint local code` command has been added to the databricks labs ucx
tool, allowing users to assess required migrations in a local directory
or file. This command detects dependencies and analyzes them, currently
supporting Python and SQL files, with an expected runtime of under a
minute for code bases up to 50,000 lines of code. The command generates
output that includes file links opening the file at the problematic line
in modern IDEs, providing a quick and easy way to identify necessary
migrations. The `lint-local-code` command is implemented in the
`application.py` file, with supporting methods and classes added to the
`workspace_cli.py` and `databricks.labs.ucx.source_code` packages,
enhancing the linting process and providing valuable feedback for
maintaining high code quality standards.
* Added table in mount migration
([#1225](#1225)). This
commit introduces new functionality to migrate tables in mounts to the
Unity Catalog, including creating a table in the Unity Catalog based on
a table mapping CSV file, fixing an issue with include_paths_in_mount
not being present in workflows.py, and adding the ability to set default
ownership on each created table. A new method ScanTablesInMounts has
been added to scan tables in mounts, and a TableMigration class creates
tables in the Unity Catalog based on the table mapping. Two new methods,
Rule and TableMapping, have been added to manage mappings of tables, and
TableToMigrate is used to represent a table that needs to be migrated to
Unity Catalog. The commit includes manual, unit, and integration testing
to ensure the changes work as expected. The diff shows changes to the
workflows.py file and the addition of several new methods, including
Rule, TableMapping, TableToMigrate, create_autospec, and MockBackend.
* Added workflows to trigger table reconciliations
([#1721](#1721)). In this
release, we've introduced several enhancements to our table migration
workflow, focusing on data reconciliation and consistency. We've added a
new post-migration data reconciliation task that validates migrated
table integrity by comparing the schema, row count, and individual row
content of the source and target tables. The new task stores and
displays the number of missing rows in the Migration dashboard's
`$inventory_database.reconciliation_results` view. Additionally, new
workflows have been implemented to automatically trigger table
reconciliations, ensuring consistency and integrity between different
data sources. These workflows involve modifying relevant functions and
modules, and may include new methods for data processing, scheduling, or
monitoring based on the project's architecture. Furthermore, new
configuration options for table reconciliation are now available in the
WorkspaceConfig class, allowing for greater control and flexibility over
migration processes. By incorporating these improvements, users can
expect enhanced data consistency and more efficient table reconciliation
management.
* Always refresh HMS stats when getting table size
([#1713](#1713)). A change
has been implemented in the hive_metastore library to enhance the
precision of table size calculations by ensuring that HMS stats are
always refreshed before being retrieved. This has been achieved by
calling the ANALYZE TABLE command with the COMPUTE STATISTICS NOSCAN
option before computing the table size, thus preventing the use of stale
stats. Specifically, the "backend.queries" list has been updated to
include two ANALYZE statements for tables "db1.table1" and "db1.table2",
ensuring that their statistics are updated and accurate. The test case
`test_table_size_crawler` in the "test_table_size.py" file has been
revised to validate the presence of the two ANALYZE statements in the
"backend.queries" list and confirm the size of the results for both
tables. This commit also includes manual testing, added unit tests, and
verification on the staging environment to ensure the functionality.
* Automatically retrieve `aws_account_id` from aws profile instead of
prompting ([#1715](#1715)).
This commit introduces several improvements to the library's AWS
integration, enhancing automation and user experience. It eliminates the
need for manual input of `aws_account_id` by automatically retrieving it
from the AWS profile. An optional `kms-key` flag has been documented for
creating roles, providing more flexibility. The
`create-missing-principals` command now accepts optional parameters such
as KMS Key, Role Name, Policy Name, and allows creating a single role
for all S3 locations, with a default behavior of creating one role per
S3 location. These changes have been manually tested and verified in a
staging environment, and resolve issue
[#1714](#1714).
Additionally, tests have been conducted to ensure the changes do not
introduce regressions. A new method simulating a successful AWS CLI call
has been added, replacing `aws_cli_run_command`, ensuring automated
retrieval of `aws_account_id`. A test has also been added to raise an
error when AWS CLI is not found in the system path.
* Detect dependencies of libraries installed via pip
([#1703](#1703)). This
commit introduces a child dependency graph for libraries resolved via
pip using DistInfo data, addressing issues
[#1642](#1642) and
[#1202](#1202). It modifies
certain tests and reduces their execution time. The PipResolver class in
`databricks.labs.ucx.source_code.graph` is used to detect and resolve
library dependencies installed via pip, with methods to locate, install,
and register libraries in a specified folder. A new Whitelist feature
and updated DistInfoPackage class are also included. Although unit tests
have been added, no new user documentation, CLI commands, workflows, or
tables have been added or modified. The previous site_packages attribute
has been removed from the GlobalContext class.
* Emit problems with code belonging to job
([#1730](#1730)). In this
release, the jobs.py file has been updated with new functionality in the
JobProblem class, enabling it to convert itself into a string message
using the new as_message() method. The refresh_report() method has been
modified to call a new _lint_job() method when provided with a job
object, which returns a list of JobProblem instances. The lint_job()
method has also been updated to call _lint_job() and return a list of
JobProblem instances, with a new behavior to log warning messages when
problems are found. The changes include the addition of a new method,
`lint_job`, for linting a job and returning any problems found. The
changes have been tested through the addition of a new integration test,
`test_job_linter_some_notebook_graph_with_problems`, and are manually
tested and covered with unit and integration tests. This release
addresses issue
[#1542](#1542) and improves
the job linter functionality, specifically detecting and emitting
problems related to code belonging to a job during the lin job. The new
`JobProblem` class has an `as_message()` method that returns a string
representation of the problem, and a unit test for this method has been
added. The `DependencyResolver` in the `DependencyGraph` constructor has
also been modified.
* Fixed `create-catalogs-schemas` to allow more than 1 level nesting
more than the external location
([#1701](#1701)). The
`create-catalogs-schemas` library has been updated to allow for more
than one level of nesting beyond the external location, addressing issue
[#1700](#1700). This release
includes a new CLI command, as well as modifications to the existing
`databricks labs ucx ...` command. A new workflow has been added and
existing functionality has been changed to support the additional
nesting levels. The changes have been thoroughly tested through manual
testing, unit tests, and integration tests using the `fnmatch.fnmatch`
method for validating location patterns. Software engineers adopting
this project will benefit from these enhancements.
* Fixed local file resolver logic with relative paths and site-packages
([#1685](#1685)). This
commit addresses an issue
([#1685](#1685)) related to
the local file resolver logic for relative paths and site-packages. The
resolver's logic has been updated to look for `_package_/__init__.py`
instead of relying on `dist-info` metadata, and the resolver has been
wired back into the global resolver chain with updated calling code. No
changes have been made to user documentation, CLI commands, workflows,
or tables. New methods have not been added, but existing functionality
has been modified to enhance local file resolution handling. Unit tests
have been added and manually verified to ensure proper functionality.
* Fixed look up logic where instance profile name does not match role
name ([#1716](#1716)). A fix
has been implemented to improve the robustness of the instance profile
lookup mechanism in the open-source library. Previously, the code relied
on the role name being the same as the instance profile name, which
resulted in issues when the names did not match
([#1716](#1716),
[#1711](#1711)). This has
been addressed by updating the `role_name` method in the `AWSRoleAction`
class to use a new regex pattern 'AWSResources.ROLE_NAME_REGEX', and
renaming the `get_instance_profile` method in the `AWSResources` class
to `get_instance_profile_arn` to reflect the change in return type from
a string to an ARN. A new method, 'get_instance_profile_role_arn', has
also been added to the `AWSResources` class to retrieve the role ARN
from the instance profile. Additionally, new methods
`get_instance_profile_arn` and `instance_lookup` have been added to
improve testing capabilities.
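
  An illustrative sketch of the regex-based extraction; the pattern
  below is an assumption, not the exact `AWSResources.ROLE_NAME_REGEX`:

  ```python
  import re

  # Assumed pattern: accept both role and instance-profile ARNs.
  ROLE_NAME_REGEX = re.compile(
      r"arn:aws:iam::\d+:(?:role|instance-profile)/([\w+=,.@-]+)$"
  )

  def role_name(arn: str) -> str | None:
      # The trailing name is extracted from the ARN, so the lookup no
      # longer assumes the role name equals the instance profile name.
      match = ROLE_NAME_REGEX.match(arn)
      return match.group(1) if match else None

  print(role_name("arn:aws:iam::123456789012:instance-profile/my-profile"))
  print(role_name("arn:aws:iam::123456789012:role/a-differently-named-role"))
  ```
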
* Fixed pip install in a multiline cell
([#1728](#1728)). This
release includes a fix for an issue where pip install commands with
multiline code were not being handled correctly (issue
[#1728](#1728), issue
[#1642](#1642)). The
`build_dependency_graph` function of the `PipCell` class has been
updated to properly register the library specified in the pip install
command, even if it is spread over multiple lines. The function now
splits the original code on spaces and newlines, allowing it to extract
the library name correctly. These changes have been thoroughly tested
through manual testing and unit tests to ensure that pip install
commands with multiline code are now handled correctly, resulting in the
library being installed and registered properly.
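
  A hedged sketch of the splitting approach; the real
  `build_dependency_graph` logic in `PipCell` may differ:

  ```python
  def library_from_pip_cell(code: str) -> str | None:
      # Collapse line continuations, then split on any whitespace so a
      # multiline command yields the same tokens as a single-line one.
      tokens = code.replace("\\\n", " ").split()
      if "install" in tokens:
          index = tokens.index("install")
          if index + 1 < len(tokens):
              return tokens[index + 1]
      return None

  cell = "%pip install \\\n    databricks-sdk"
  print(library_from_pip_cell(cell))  # databricks-sdk
  ```
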
* README update about Standard workspaces
([#1734](#1734)). In this
release, the README file of our open-source library has been updated to
provide additional user documentation on compatibility with Standard
Workspaces on Databricks. The changes include a section outlining the
incompatibility, aimed specifically at users of Standard Workspaces. It
is important to note that these updates are purely
informational and do not involve any changes to existing commands,
workflows, tables, or functionality within the code. No new methods or
modifications have been made to the existing functionality. The commit
does not include any tests, as the changes are limited to updating user
documentation. The changes have been manually tested to ensure accuracy.
The target audience for this release includes software engineers who are
adopting the project and may require additional guidance on
compatibility with Standard Workspaces. Additionally, please note that a
Databricks Premium or Enterprise workspace is now a prerequisite for
using this library.
* Show code problems found by workflow linter in the migration dashboard
([#1741](#1741)). This
commit introduces a new feature to the migration dashboard: an
experimental workflow linter that identifies code compatibility problems
for Unity Catalog integration. The feature includes a new CLI command,
`migration_report`, which refreshes the migration dashboard after all
previous tasks are completed, and an existing command, `databricks labs
ucx ...`, has been modified. The `experimental-workflow-linter` workflow
has also been changed, and new functionality has been added in the form
of a new workflow. A new SQL query for displaying code compatibility
problems is located in the file "02_1_code_compatibility_problems.sql".
User documentation has been added, and the changes have been manually
tested. This feature aims to improve the migration dashboard's
functionality and provide a better experience for users. Targeted at
software engineers, this feature will help in identifying and resolving
code compatibility issues during the migration process.
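
  A minimal sketch of how the dashboard data could be queried from a
  Databricks notebook (where `spark` is predefined); the table and
  column names here are assumptions, not the exact UCX inventory schema:

  ```python
  # Assumed inventory table name; substitute the schema chosen at install time.
  problems = spark.sql(
      "SELECT path, code, message FROM hive_metastore.ucx.workflow_problems"
  )
  problems.show(truncate=False)
  ```
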
* Support for s3a/s3n protocols when using mount points
([#1765](#1765)). In this
release, we have added support for s3a and s3n protocols when using
mount points in the metastore locations. A new static method,
`_get_ext_location_definitions`, has been introduced, which generates a
name for a resource defined by the location and now supports additional
prefixes "s3a://" and "s3n://" for defining resources in S3. For Azure
Blob Storage, the container name is extracted from the location and
included in the resource name. If the location does not match the
supported formats, a warning is logged, and the script is not generated.
These changes offer more flexibility in defining resources and improve
the system's ability to handle various cloud storage solutions.
Additionally, the `test_save_external_location_mapping_missing_location`
function in `test_locations.py` has been updated to include test cases
for s3a and s3n protocols, enhancing the software's functionality.
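
  An illustrative sketch of the prefix handling; the naming scheme is an
  assumption, not the exact output of `_get_ext_location_definitions`:

  ```python
  import logging

  logger = logging.getLogger(__name__)

  def resource_name(location: str) -> str | None:
      # s3a:// and s3n:// are now accepted alongside s3://.
      for prefix in ("s3://", "s3a://", "s3n://"):
          if location.startswith(prefix):
              return location.removeprefix(prefix).rstrip("/").replace("/", "_")
      # Azure Blob Storage: include the container name in the resource name.
      if location.startswith("abfss://"):
          container, _, path = location.removeprefix("abfss://").partition("@")
          return container + "_" + path.split("/", 1)[-1].rstrip("/").replace("/", "_")
      # Unsupported format: log a warning and generate no script.
      logger.warning(f"unsupported location: {location}")
      return None

  print(resource_name("s3a://bucket/prefix/data"))  # bucket_prefix_data
  print(resource_name("ftp://host/data"))           # None (warning logged)
  ```
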
* Support joining an existing collection when installing UCX
([#1675](#1675)). The
AccountInstaller class has been updated to include a new functionality
that allows users to join an existing collection during UCX
installation. This is achieved by presenting the user with a list of
workspaces they have access to, allowing them to select one, and then
checking if there are existing workspace IDs present in the selected
workspace. If so, the installation will join the corresponding
collection; otherwise, a new collection will be created. This feature
simplifies UCX migration for large organizations with multiple
workspaces by allowing them to manage collections instead of individual
workspaces. Relevant user documentation and CLI commands have been
updated, along with new and modified tests to ensure proper
functionality. The commit includes the addition of new methods,
`join_collection` and `is_account_install`, as well as updates to the
`install_on_account` method to call `join_collection` if specified. Unit
tests and integration tests have been added to ensure the proper
functioning of the new feature.
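
  A hedged sketch of the workspace-selection step only; the actual
  `AccountInstaller.join_collection` flow is more involved:

  ```python
  def pick_workspace(workspaces: list[dict]) -> dict:
      # Present the workspaces the user can access and let them choose
      # which one's collection to join.
      for i, ws in enumerate(workspaces):
          print(f"[{i}] {ws['name']} (id={ws['id']})")
      choice = int(input("Join the collection of which workspace? "))
      return workspaces[choice]

  chosen = pick_workspace([
      {"id": 111, "name": "prod"},
      {"id": 222, "name": "dev"},
  ])
  print("joining collection anchored at workspace", chosen["id"])
  ```
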
* Updated UCX job cluster policy AWS zone_id to `auto`
([#1735](#1735)). In this
release, the UCX job cluster policy for AWS has been updated to use
`auto` for the zone_id, allowing Databricks to choose the zone based on
a default value in the region. This change, which resolves issue
[#533](#533), affects the
definition method in the policy.py file, where a check has been added to
remove 'aws_attributes.zone_id' if an instance pool ID is provided. The
tests for this change include manual testing and new unit tests, with
modifications to existing workflows. The diff shows updates to the
test_policy.py file, where `aws_attributes.zone_id` is set to `auto`
in several functions. No new CLI commands or documentation have been
provided as part of this update.
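
  An illustrative policy fragment; the keys follow the Databricks
  cluster policy format, while the surrounding logic is an assumption
  about the `definition` method in policy.py:

  ```python
  def job_cluster_policy(instance_pool_id: str | None) -> dict:
      policy = {
          # Let Databricks pick the zone from the region's default.
          "aws_attributes.zone_id": {"type": "fixed", "value": "auto"},
      }
      if instance_pool_id:
          # The zone is inherited from the pool, so the attribute is removed.
          del policy["aws_attributes.zone_id"]
          policy["instance_pool_id"] = {"type": "fixed", "value": instance_pool_id}
      return policy

  print(job_cluster_policy(None))
  print(job_cluster_policy("pool-0123-456789-abcde"))
  ```
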
* Updated assessment.md - `spark.catalog.x` guidance needed updating
([#1708](#1708)). With the
release of DBR 14+, the `spark.catalog.*` functions, which were
previously not recommended for use on shared compute clusters due to
security reasons, are now considered safe to use. This change in
guidance is reflected in the updated assessment.md document, which also
notes that `spark.sql("<sql command>")` may still be a more suitable
alternative for certain common spark.catalog functions like tableExists,
listTables, and setDefaultCatalog. The corresponding
`spark._jsparkSession.catalog` methods are also mentioned as potential
alternatives on DBR 14.1 and above. It is important to note that no new
methods or functionality have been added, and no existing functionality
has been changed - only the guidance in the documentation has been
updated. This update has been manually tested and implemented in the
documentation to ensure accuracy and reliability for software engineers.
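
  A short sketch of the guidance, assuming a Databricks notebook where
  `spark` is predefined; the catalog and table names are examples only:

  ```python
  # Safe on DBR 14+ shared clusters:
  exists = spark.catalog.tableExists("main.default.my_table")

  # SQL-based equivalent that also works on earlier shared-cluster DBRs:
  exists_sql = spark.sql(
      "SHOW TABLES IN main.default LIKE 'my_table'"
  ).count() > 0

  # Alternative to spark.catalog.setDefaultCatalog("main"):
  spark.sql("USE CATALOG main")
  ```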

Dependency updates:

* Updated sqlglot requirement from <23.15,>=23.9 to >=23.9,<23.16
([#1681](#1681)).
* Updated databricks-labs-blueprint requirement from <0.6.0,>=0.4.3 to
>=0.4.3,<0.7.0
([#1688](#1688)).
* Updated sqlglot requirement from <23.16,>=23.9 to >=23.9,<23.18
([#1724](#1724)).
* Updated sqlglot requirement from <23.18,>=23.9 to >=23.9,<24.1
([#1745](#1745)).
* Updated databricks-sdk requirement from ~=0.27.0 to >=0.27,<0.29
([#1756](#1756)).
* Bump databrickslabs/sandbox from acceptance/v0.2.1 to 0.2.2
([#1769](#1769)).