Dag processor manager queue split (fixes SLAs) #25489
Conversation
@potiuk you may also be interested in this one.
airflow/dag_processing/manager.py
Outdated
# we maintain 2 queues: stuff requiring rapid response due to scheduler updates, and stuff that
# should be serviced once the priority stuff has all been worked through, e.g. periodic dir scans
# additionally there's a set to track which files on disk still haven't been refreshed yet
self._priority_file_path_queue: Deque[str] = deque()
I changed these from lists to deques because teeeeechnically it's more efficient. Also, I like seeing people try to work out how to pronounce 'deque'.
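For anyone wondering about the efficiency claim: popping from the front of a list is O(n), while a deque gives O(1) pops at both ends. A tiny illustration (file names are made up):

```python
from collections import deque

# Popping from the front of a list shifts every remaining element: O(n).
paths_as_list = ["dag_a.py", "dag_b.py", "dag_c.py"]
next_path = paths_as_list.pop(0)

# A deque gives O(1) appends and pops at both ends.
paths = deque(["dag_a.py", "dag_b.py", "dag_c.py"])
next_path = paths.popleft()     # take from the front (FIFO)
paths.append("dag_d.py")        # add to the back
paths.appendleft("urgent.py")   # or jump the line when needed
```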
airflow/dag_processing/manager.py
Outdated
@@ -378,8 +378,17 @@ def __init__(
    async_mode: bool = True,
):
    super().__init__()
    self._log = logging.getLogger('airflow.processor_manager')
I moved this up from further down __init__ because I added a log message which came before it in the old location. It seems ... weird ... that this doesn't just use the logging mixin. Is this an artefact from Ye Olden Times?
airflow/dag_processing/manager.py
Outdated
@@ -539,7 +559,7 @@ def _run_parsing_loop(self):
    poll_time = None

    self._refresh_dag_dir()
    self.prepare_file_path_queue()
    self.populate_std_file_queue_from_dir()
Same method, different name. I thought this was more descriptive.
airflow/dag_processing/manager.py
Outdated
)
self._std_file_path_queue = deque(x for x in self._std_file_path_queue if x in new_file_paths)
callback_paths_to_del = list(x for x in self._callback_to_execute.keys() if x not in new_file_paths)
for path_to_del in callback_paths_to_del:
This more thorough clean-up is new. Having the old callbacks lying around was a form of memory leak, but in practice not a serious one.
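The loop body is cut off by the review context above; a plausible sketch of what the clean-up does (the exact body in the PR may differ):

```python
# Likely continuation of the loop above: forget callbacks for files that no
# longer exist on disk, so they don't accumulate indefinitely.
for path_to_del in callback_paths_to_del:
    del self._callback_to_execute[path_to_del]
```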
tests/dag_processing/test_manager.py
Outdated
assert manager._file_path_queue == []
manager.prepare_file_path_queue()
assert manager._file_path_queue == ['file_1.py', 'file_2.py', 'file_3.py', 'file_4.py']
assert list(manager._std_file_path_queue) == []
Have to convert all the deques to lists. This is clunky.
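The conversion is needed because a deque never compares equal to a plain list, even when the contents match:

```python
from collections import deque

assert deque(["file_1.py"]) != ["file_1.py"]        # a deque never equals a list
assert list(deque(["file_1.py"])) == ["file_1.py"]  # hence the list(...) wrapping
```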
airflow/config_templates/config.yml
Outdated
Setting this to <= 0 disables the behaviour, in case it's important to you that those
frequently updating dags / slas always take priority at the cost of delaying updates
from disk
version_added: 2.4.0
I mean, I'm assuming that this PR will be "subject to discussion" to put it mildly and has no chance of making 2.3.4. If it sails through, I'll happily correct this.
newsfragments/25489.bugfix.rst
Outdated
@@ -0,0 +1 @@
DAGProcessorManager queue made fully FIFO, split into two (priority and std), and dags guaranteed to be read from disk periodically (fixes/improves SLA alert reliability)
I suspect "bugfix" undersells the change. As per PR description, will happily change the classification to whatever I'm told is the right one to use for a change like this.
airflow/dag_processing/manager.py
Outdated
# 1. cleared the priority queue then cleared this set while draining the std queue, or
# 2. been unable to clear the priority queue, hit max_file_process_interval, and drained the set
#    while clearing overdue files
if not self._outstanding_std_file_paths:
This is a key change as mentioned in the PR description: we refresh files from disk once the set is empty. The queue might still have SLA callbacks in it, but that shouldn't stop us refreshing from disk.
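As a rough sketch of the behaviour being described here (a deliberately simplified stand-in; only _std_file_path_queue and _outstanding_std_file_paths come from the diff, everything else is hypothetical):

```python
from collections import deque
from typing import Deque, List, Set

class ManagerSketch:
    """Deliberately simplified stand-in for the dag processor manager."""

    def __init__(self) -> None:
        self._std_file_path_queue: Deque[str] = deque()
        self._outstanding_std_file_paths: Set[str] = set()

    def _maybe_refresh_from_disk(self, files_on_disk: List[str]) -> None:
        # Once every file from the previous directory scan has been processed
        # (the outstanding set is empty), reload from disk - even if SLA
        # callbacks still sit in the standard queue.
        if not self._outstanding_std_file_paths:
            self._std_file_path_queue.extend(files_on_disk)
            self._outstanding_std_file_paths.update(files_on_disk)
```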
Thanks for such a thorough description and your willingness to improve SLAs (I really, really appreciate it, as this is one of the things we've had on our backburner for quite a while). However, @argibbs, I think this change and its description are so big, and the potential for breaking things so great, that it requires a devlist discussion IMHO, to draw the attention of the people who should be involved rather than discussing it only in the PR. I know you wrote "probably not make it in 2.3.4", but I also think "probably not make it in 2.4.0 either" is a better assessment. I strongly suggest making a small digest of the description you wrote - with the most important parts of the "why" and "how" extracted (less is more) - and sending it to the devlist https://airflow.apache.org/community/, with a link to this PR, inviting people to express their thoughts. I think this is not going to be merged before the 2.4 branch is cut. The risks are too big; it will likely need to be very thoroughly tested and justified so that we know what we are doing here, so there is still a lot of time to discuss it. I also think some of the people who are now looking at finalizing AIP-48 so that it is ready for prime time in 2.4 (and who should be part of the discussions) have no time and energy to look at it. And we definitely want to avoid adding too many potentially high-impact changes in one minor release. We already added too much in 2.3.0 and I think we've learned a lesson there. So regardless of whether what you propose is sound or not, it might take a while to merge it. I strongly encourage you to describe the case/problem/solution in a concise way and send it to the devlist.
But I do feel that the direction you took is sound. We need some prioritisation of callbacks. It just needs a deeper look and more discussion than a PR opened maybe two weeks before a substantial release, which is already undergoing quite substantial testing, can provide.
Sounds good to me. I'm running with this locally, so I don't really mind about timelines. I'm just trying to give back (and also push upstream so I don't have to keep patching each time there's a release 😄). Will do as you suggested re: the digest etc.
Cool! This is the RIGHT approach! Much appreciated!
@argibbs these changes look great! Very well described as well.
We have had a case where, because of a crucial single point of failure, hardly any DAG was able to continue (as we use a lot of ExternalTaskSensors). This meant that all of our schedulers were down, handling SLA fires, for a long time, so I am very happy to see this optimisation 👍 Note: I don't think this should be included in this MR, but given you worked on this, you probably have a good view on the following. Part of what we can speed up is, instead of firing each SLA for the same DAG individually, to group SLAs per DAG. Thereby, we add a dag with an SLA to the queue instead of an actual SLA. This prevents us from parsing a DAG file 10 times for 10 different SLAs of the same DAG. I guess you could optimise this by including a dummy operator at the end that requires all the important tasks to succeed, but that's not super user friendly.
@Jorricks thank you for the kind words. I agree, SLAs could/should be made more efficient. I found this change easier/simpler to make & test, possibly because it's a problem I've tackled many times in my "real" job. Also, as you note, I think this change defends against a range of possible problems, rather than just SLAs. I may try to improve SLA firing in the future, but I will be upfront about my motivations; I'm not looking to be "The SLA guy", as I simply don't have the spare time. If I end up with a load of spare time, that will doubtless change...
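For illustration, the grouping idea Jorricks describes might look roughly like this (purely a sketch with made-up names, not the actual Airflow API):

```python
from collections import defaultdict
from typing import Dict, List

def group_sla_callbacks_per_file(sla_callbacks: List[dict]) -> Dict[str, List[dict]]:
    # Collapse many SLA misses for the same dag file into one queue entry,
    # so the file is parsed once rather than once per SLA.
    per_file: Dict[str, List[dict]] = defaultdict(list)
    for callback in sla_callbacks:
        per_file[callback["dag_file_path"]].append(callback)
    return per_file
```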
I think it is a bit risky for 2.4 - so this one might stay open for a while until we branch off for 2.4 (@argibbs - it's not forgotten, I can assure you).
Some rebasing is needed after the __future__ annotations change, but I think we are pretty close to the "focused" 2.4.0 release effort. I think this is a good time to rebase/fix and maybe re-raise the devlist discussion about it.
Hiya, yes, I hadn't forgotten either. 😄 Have just found some time to refresh this, and am sorting the rebase now. Will re-raise the dev-list email once that's done (and I'll spell your name right this time too!)
I have no idea why, but after rebasing, github seems convinced there are no changes in the branch, and has closed the PR and won't let me reopen it ... I've created a new PR (from the same branch - see #27317 - it's the exact same change).
The "I'm determined to fix SLAs" PR
OK, so #25147 made a start in this direction. Summing up the, er, summary from that MR, the problem was that SLA callbacks could keep occurring, and prevent the dag processor manager from ever processing more than 2 or 3 dags in the queue before the SLA callbacks re-upped and went to the front of the queue.
Under the new behaviour, the SLA callbacks went to the back of the queue. This guaranteed that the queue would be processed at least once. However, it turns out that dags on disk would only be re-added to the queue once the queue was empty. But with SLA callbacks arriving All. The. Time. the queue would never drain, and we'd never re-read dags from disk. So if you updated the dag file, you'd have to bounce the scheduler to pick up the change, and then it would process all non-SLA-generating DAGs exactly once. And then you'd need to bounce again.
Related Issues
Closes #15596 (I hope!)
I've almost certainly missed some steps.
Before I go into a bit more detail about the change, I'd like to acknowledge that as a (very) small-time contributor to the project, I'm not familiar with all the done things when making more radical changes. In particular, I assume there are more doc changes needed than just a newsfragment. I've added a config flag and some metrics, and the behaviour of the queue processing has subtly changed (for the better, I hope!)
I'd very much appreciate someone(s) leaving a comment 👇 telling me what else I need to do in terms of docs etc.
TL;DR
I mean, I see your point. Skip to the bottom for the summary.
Pay attention, here comes the science bit
Ok, so to briefly recap the queue behaviour prior to this change:
- The dag files on disk are rescanned periodically (every dag_dir_list_interval). This does not require that the queue be empty. Any changes to the set of files on disk (e.g. dags being deleted) will cause the manager to remove dags from the queue.
- The queue is only repopulated from disk once it has drained, and no more often than min_file_process_interval (default 30 seconds), so if your dag files only take 10 seconds to process, there will be 20 seconds of idle time, but if your dag files take a minute to process, then the manager will be permanently busy, because as soon as the queue drains, it'll be well past time to reload the queue from disk.

Locally, I tested a hacky fix whereby on receipt of an SLA callback I still added the callback, but I simply didn't add the dag to the queue (it's a one-line change - extremely simple!). This works, but means that SLA callbacks are only processed when the queue drains and is repopulated (because then all dags are added to the queue, guaranteeing that we will process any outstanding SLA callbacks). However, if someone has specified a large wait time between loading dags from disk, this will affect how timely the SLA alerts are. That's fine for me, because I don't do that, but I wasn't getting a "this will be fine for everyone" vibe from the change (I did say it was hacky!).
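For illustration only, the hacky local fix described above might have looked roughly like this (hypothetical names, and not the change actually proposed in this PR):

```python
def _add_callback_to_queue(self, request):
    # Always remember the callback so the SLA alert is eventually processed...
    self._callback_to_execute[request.full_filepath].append(request)
    # ...but for SLA callbacks, don't bump the dag file to the front of the
    # queue; it will be picked up the next time the queue is reloaded from
    # disk, and the stored callback handled then.
    if not is_sla_callback(request):  # hypothetical helper
        self._file_path_queue.appendleft(request.full_filepath)
```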
Also, there's another catch. While the problem is much more prevalent with SLAs, these are not the only callbacks. I could envisage a situation where someone configures a dag with a very small interval (e.g. a dag run every 10 seconds). While I think this is a much more theoretical problem that might not ever exist in the wild - this isn't really the use case Airflow is intended for - the upshot would be that a dag generating lots of dag callbacks would be spamming the queue. And those callbacks still go to the front of the queue, i.e. you're back to the situation I tried to solve in the previous MR!
I don't think that word means what you think it means
I decided that, fundamentally, part of the problem was that the queue should be FIFO. And it wasn't. But if I made it FIFO, then the higher priority DAG callbacks would have to wait their turn behind the dag files loaded from disk, and I'm pretty sure that would eliminate some of the speed-ups Airflow 2 was trumpeted as delivering. Airflow 1 used to have on average a 15 second gap (= 30/2) between one task completing and the downstream tasks being scheduled, because once the task completed, you had to wait for the manager to drain the queue, add the files to the queue from disk, and then process the dag. (And that's assuming you could even process all your dags in <30 seconds...). In Airflow 2, because of the dag callbacks, the gap between downstream tasks being scheduled is usually sub-second.
I didn't want to be the guy who accidentally breaks that particular speed up. 😱
So I did two things:
Thing 1: Tackling the FIFO issue aka gazumping callbacks.
I split the queue in two: a standard FIFO queue for SLA callbacks and dag files loaded from disk, and a priority queue for DAG callbacks, which still get serviced first so that downstream task scheduling stays fast. I also added a new config value, max_file_process_interval, and it's the dual to the existing min_file_process_interval. It guarantees that if you do happen to have a permanently busy priority queue, eventually we'll take a breather, and process the files on disk anyway.
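A sketch of what the "take a breather" check could look like (hypothetical function and parameter names; the actual implementation may differ):

```python
from datetime import datetime, timedelta

def disk_refresh_is_overdue(last_disk_refresh: datetime, max_file_process_interval: float) -> bool:
    # If the priority queue has kept us busy past max_file_process_interval,
    # stop servicing it and process the files on disk anyway.
    return datetime.utcnow() - last_disk_refresh > timedelta(seconds=max_file_process_interval)
```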
Thing 2: Handling the fact that SLAs stop the standard queue from ever being empty.
I added a set which tracked which dag files in the queue were still outstanding from the last refresh from disk. Once the queue was refreshed from disk, we'd work through every file eventually (because FIFO), and at that point the set would be empty, even if the queue wasn't (because SLAs).

Notes
This doesn't materially change how SLAs work; they are generated and consumed the same as before. It just means that we reliably consume the alerts once generated without breaking the rest of the system. In my experience at least, adding SLAs would simply cause the system to stop processing dag updates (as per #15596).
In particular, I don't address issues with SLA timestamps (as raised by #22532), nor do I deal with other problems (e.g. now that SLAs fire reliably, I have noticed that they fire during catch-up, and that the same alert can fire multiple times).
This is not because I think the current approach is perfect (like everyone else on the internet, I Have Thoughts on how it could be improved, given infinite time) but rather it is a sad-but-true fact that I don't have the time to take on a big project. So I am going to continue with my current approach of tinkering round the edges; and if the remaining issues are minor enough to live with, I'm just going to live with them (at least for now).
Summary: I'm from the UK; politely waiting in line is what we do best.
- Renamed _file_path_queue to _std_file_path_queue (for SLAs and dags loaded from disk) and added _priority_file_path_queue (for DAG callbacks); both queues are now strictly FIFO
- Added a new max_file_process_interval config to ensure files are read from disk every so often, even if the priority queue is always busy
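Putting it together, the intended servicing order is roughly the following (a simplified sketch, not the actual manager code):

```python
from collections import deque
from typing import Deque, Optional

def next_file_to_process(priority_queue: Deque[str], std_queue: Deque[str]) -> Optional[str]:
    # DAG callbacks (priority queue) are serviced first, so downstream tasks
    # keep getting scheduled quickly; SLAs and periodic dir scans (std queue)
    # are worked through strictly FIFO behind them.
    if priority_queue:
        return priority_queue.popleft()
    if std_queue:
        return std_queue.popleft()
    return None
```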