forked from apache/airflow
pull master #25
Merged
Conversation
In certain databases there is a need to set the collation for ID fields like dag_id or task_id to something other than the database default, because in MySQL with utf8mb4 the index size becomes too big for the MySQL limits. This was handled in past pull requests ([#7570](#7570), [#17729](#17729)), but the root_dag_id field on the dag model was missed. Since this field is used to join with the dag_id in various other models ([and self-referentially](https://github.com/apache/airflow/blob/451c7cbc42a83a180c4362693508ed33dd1d1dab/airflow/models/dag.py#L2766)), it also needs to have the same collation as other ID fields. This can be seen by running `airflow db reset` before and after applying this change while also specifying `sql_engine_collation_for_ids` in the configuration. Other related PRs: [#19408](#19408)
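A minimal sketch of the idea (not the exact Airflow code; the function name and collation value are illustrative): derive the collation kwargs for ID columns from configuration, so that dag_id, task_id and root_dag_id all share one collation and can be joined and indexed consistently.

```python
# Hypothetical sketch: compute collation kwargs for ID columns from config.
def get_id_collation_args(configured_collation=None):
    """Return kwargs to splat into sqlalchemy String(), e.g.
    Column(String(250, **get_id_collation_args("utf8mb3_bin")))."""
    if configured_collation:
        # e.g. "utf8mb3_bin" on MySQL keeps index sizes under the limit
        return {"collation": configured_collation}
    return {}

# Every ID column must use the same args, including root_dag_id,
# which joins self-referentially against dag_id:
collation_args = get_id_collation_args("utf8mb3_bin")
```

The key point is that both sides of every join on an ID column get identical collation, otherwise MySQL cannot use the index.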
* Add sample dag and doc for S3ListOperator * Fix doc
* Move tree filtering inside react and add some filters
* Move filters from context to utils
* Fix tests for useTreeData
* Fix last tests
* Add tests for useFilters
* Refactor to use existing SimpleStatus component
* Additional fix after rebase
* Update following bbovenzi code review
* Update following code review
* Fix tests
* Fix page flickering issues from react-query
* Fix side panel and small changes
* Use default_dag_run_display_number in the filter options
* Handle timezone
* Fix flaky test

Co-authored-by: Brent Bovenzi <brent.bovenzi@gmail.com>
This is another attempt to improve caching performance for multi-platform images, as the previous ones were undermined by a bug in the buildx multi-platform cache-to implementation that caused the image cache to be overwritten between platforms when multiple images were built. The bug is reported for the buildx behaviour at docker/buildx#1044, and until it is fixed we have to prepare separate caches for each platform and push them to separate tags. That adds a bit of overhead to the building step, but for now it is the simplest way to work around the bug if we do not want to manually manipulate manifests and images.
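The per-platform tag scheme can be sketched as follows (the registry path is a made-up example, not the actual Airflow cache location): each platform gets its own cache ref so one platform's `--cache-to` push cannot clobber another's.

```python
# Hypothetical sketch: derive a separate registry cache tag per platform,
# working around docker/buildx#1044 (cache overwritten between platforms).
def cache_ref_for(base_ref: str, platform: str) -> str:
    # "linux/amd64" -> "<base_ref>-linux-amd64"
    return f"{base_ref}-{platform.replace('/', '-')}"

refs = [
    cache_ref_for("ghcr.io/example/airflow/main-cache", p)
    for p in ("linux/amd64", "linux/arm64")
]
# each per-platform build then uses:
#   --cache-to=type=registry,ref=<ref>,mode=max
```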
* Add sample dag and doc for S3ListPrefixesOperator * Fix static checks
…23612) Generation of Breeze help requires Breeze to be installed. However, if you have locally installed Breeze with different dependencies and did not run self-upgrade, the results of generating the images might differ (for example when a different rich version is used). This change works as follows:

* you do not have to have Breeze installed at all for it to work
* it always upgrades to the latest Breeze when it is not installed
* but this only happens when you actually modified some Breeze code
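The decision rule above can be sketched as a small predicate (names are illustrative, not the actual Breeze internals):

```python
# Hypothetical sketch of when to (re)install breeze before generating help images.
def needs_breeze_reinstall(installed: bool, sources_modified: bool) -> bool:
    if not installed:
        return True          # always install the latest breeze if missing
    return sources_modified  # upgrade only when breeze code was changed
```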
* Add Quicksight create ingestion Hook and Operator Co-authored-by: eladkal <45845474+eladkal@users.noreply.github.com>
…KubernetesExecutor (#23617)
* Fix: Exception when parsing log (#20966). In /opt/work/python395/lib/python3.9/site-packages/airflow/hooks/subprocess.py, line 89, in run_command:

  line = raw_line.decode(output_encoding).rstrip()  # raw_line == b'\x00\x00\x00\x11\xa9\x01\n'

  raises: UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa9 in position 4: invalid start byte

  Another alternative is to try-catch it, e.g.:

  ```
  line = ''
  for raw_line in iter(self.sub_process.stdout.readline, b''):
      try:
          line = raw_line.decode(output_encoding).rstrip()
      except UnicodeDecodeError as err:
          print(err, output_encoding, raw_line)
      self.log.info("%s", line)
  ```

* Create test_subprocess.sh; add shell directive and license to it
* Update test_subprocess.py
* Distinguish between raw and decoded lines as suggested by @uranusjr
* Simplify test

Co-authored-by: muhua <microhuang@live.com>
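A self-contained sketch of the defensive-decode idea (the helper name is illustrative; the real fix lives in Airflow's SubprocessHook): one undecodable byte should not abort log parsing.

```python
# Hypothetical sketch: decode a raw subprocess output line without raising
# on invalid bytes, falling back to a lossy decode.
def decode_line(raw_line: bytes, encoding: str = "utf-8") -> str:
    try:
        return raw_line.decode(encoding).rstrip()
    except UnicodeDecodeError:
        # keep the line readable instead of crashing the whole log reader
        return raw_line.decode(encoding, errors="backslashreplace").rstrip()

decode_line(b"\x00\x00\x00\x11\xa9\x01\n")  # the failing line from the traceback
```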
Previously you had to manually add versions when the changelog was modified. But why not have a bit more fun and get the versions bumped automatically, based on your assessment when reviewing the providers, rather than after looking at the generated changelog.
* Prevent KubernetesJobWatcher getting stuck on "resource too old". If the watch fails because of "resource too old", the KubernetesJobWatcher should not retry with the same resource version, as that will end up in a loop where there is no progress.
* Reset ResourceVersion().resource_version to 0
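The retry rule can be sketched like this (a simplification of the watcher logic; the function name is illustrative):

```python
# Hypothetical sketch: on a "resource too old" (HTTP 410) watch failure,
# reset the resource version to "0" to force a fresh relist, instead of
# retrying with the stale version, which would loop without progress.
def next_resource_version(current: str, watch_failed_410: bool) -> str:
    if watch_failed_410:
        return "0"   # relist from scratch
    return current   # keep watching from the last seen version
```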
We now have different possible answers when generating docs, and for testing we assume we answered randomly during the generation of documentation.
Co-authored-by: Jed Cunningham <66968678+jedcunningham@users.noreply.github.com>
These checks only make sense for upgrades. Generally they exist to resolve referential integrity issues etc. before adding constraints. In the downgrade context we generally only remove constraints, so it's a non-issue.
* Added Postgres 14 to supported versions (including Breeze)
Add support for `RedshiftDeleteClusterOperator`. This will help to clean up resources using Airflow operators when needed. By default the operator waits until the cluster is completely removed; to return immediately without waiting, set the `wait_for_completion` param to False.

- Add operator class
- Add basic unit test
- Add an example task
- Add relevant documentation
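The `wait_for_completion` behavior can be sketched as below. This is a simplified stand-in, not the operator's actual code: the client is injected, and `cluster_exists` is a hypothetical helper rather than a real boto3 method.

```python
import time

# Hypothetical sketch: delete a cluster and optionally poll until it is gone.
def delete_cluster(client, cluster_id: str, wait_for_completion: bool = True,
                   poll_interval: float = 0.01, max_attempts: int = 100) -> None:
    client.delete_cluster(ClusterIdentifier=cluster_id)
    if not wait_for_completion:
        return  # return immediately; deletion continues in the background
    for _ in range(max_attempts):
        if not client.cluster_exists(cluster_id):  # hypothetical helper
            return
        time.sleep(poll_interval)
    raise TimeoutError(f"cluster {cluster_id} still exists")
```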
This has been broken since 2.3.0: if a user specifies a migration_timeout of 0, no migration is run at all.
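The bug class is a truthiness check on the timeout; a minimal sketch (illustrative names, not the actual Airflow code):

```python
# Hypothetical sketch: a timeout of 0 is falsy, so a truthiness check
# wrongly treats it as "not set" and skips migrations entirely.
def should_run_migrations_buggy(migration_timeout):
    return bool(migration_timeout)        # 0 -> False: migration skipped

def should_run_migrations_fixed(migration_timeout):
    return migration_timeout is not None  # 0 is a valid, explicit value
```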
Bumps [eventsource](https://github.com/EventSource/eventsource) from 1.0.7 to 1.1.1. - [Release notes](https://github.com/EventSource/eventsource/releases) - [Changelog](https://github.com/EventSource/eventsource/blob/master/HISTORY.md) - [Commits](EventSource/eventsource@v1.0.7...v1.1.1) --- updated-dependencies: - dependency-name: eventsource dependency-type: indirect ... Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
The certifi limitation was introduced to keep snowflake happy while performing eager upgrade, because it added limits on certifi. However, it no longer seems to be a limitation in the latest versions of the snowflake python connector, so we can safely remove it from here. The only remaining limit is dill, but this one still holds.
Bitnami's CloudFront CDN is seemingly having issues, so point at github direct instead until it is resolved.
We started to experience "Internal Error" when installing the Helm chart, and apparently bitnami "solved" the problem by removing from their index software older than 6 months(!). This makes our CI fail, but it is much worse than that: it renders all our charts useless for people to install. This is terribly wrong, and I raised this in the issue here: bitnami/charts#10539 (comment)
- Trivial typo fix in the command to run static checks on the last commit - Update "run all tests" to "run all checks" where applicable for consistency
The original fix in #24112 did not work due to:

* the lock not being updated
* EOL characters at the end of the multiline long URL

This PR fixes it.
* Reduce API calls from /grid
  - Separate /grid_data from /grid
  - Remove need for formatData
  - Increase default query stale time to prevent extra fetches
  - Fix useTask query keys
* Consolidate grid data functions
* Fix www tests: test grid_data instead of /grid
Our experimental support for MSSQL starts from v2017 (in README.md), but we still support 2000 & 2005 in code. This PR removes that support, allowing us to use mssql.DATETIME2 in all MSSQL databases.
* Note that yarn dev needs webserver -d
* Update CONTRIBUTING.rst
* Use -D
* Revert "Use -D" (this reverts commit 94d63ad)

Co-authored-by: Jed Cunningham <66968678+jedcunningham@users.noreply.github.com>
The "stop" command of Breeze uses the "all" backend to remove all volumes, but mssql takes a special approach where the volumes defined depend on the filesystem used, so we need to add the specific docker-compose files to the list of files used by the stop command.
…24122) Now that Breeze uses --mount instead of --volume (the former does not create missing mount dirs like the latter does; see the docs here: https://docs.docker.com/storage/bind-mounts/#differences-between--v-and---mount-behavior), we need to create these directories explicitly.
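The fix amounts to pre-creating the bind-mount source directories on the host. A minimal sketch (the subdirectory names are made-up examples, not the actual list Breeze mounts):

```python
import os
import tempfile

# Hypothetical sketch: with --mount, Docker refuses to start if a host
# source directory is missing (unlike -v/--volume, which creates it),
# so create every bind-mount source up front.
def ensure_mount_dirs(base: str, subdirs=("logs", "dist", ".mypy_cache")):
    for sub in subdirs:
        os.makedirs(os.path.join(base, sub), exist_ok=True)

base = tempfile.mkdtemp()
ensure_mount_dirs(base)
```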