create_task() recommendation seems like bad practice #104091
I think there's a misunderstanding here about what the documented recommendation means: the coroutine doesn't need to be explicitly awaited, because the event loop will take care of ensuring that it gets waited on. So this documented recommendation for fire-and-forget task creation does work fine, and is not bad practice (except inasmuch as TaskGroups are probably a better option in most cases).
No, it ensures it executes, not that it gets awaited. Simple reproducer:
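A minimal sketch along these lines shows the problem (the failing coroutine and the sleep are just illustrative):

```python
import asyncio

background_tasks = set()

async def failing():
    raise RuntimeError("boom")

async def main():
    # Follow the documented pattern: keep a strong reference to the task,
    # but never await it or inspect its result.
    task = asyncio.create_task(failing())
    background_tasks.add(task)
    task.add_done_callback(background_tasks.discard)
    await asyncio.sleep(1)

asyncio.run(main())
# When the task object is garbage collected, asyncio logs
# "Task exception was never retrieved" instead of raising anywhere.
```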
Well-written code should never see "Task exception was never retrieved".
TaskGroups can only be used for this if you create some kind of global TaskGroup for your application that you keep around till the application ends. This can work, but in something like a server application, I think you'd only get the exceptions when the server shuts down, which could be weeks after the exception happened. So, for server applications, it'd still be best to use aiojobs or similar.
Sorry, I did misunderstand your report. The name "fire-and-forget" specifically suggests that you don't care if it succeeds or fails. If you are going to check the result/exception, then it isn't really fire and forget. I'm not sure if we should be recommending a specific third-party library here, but it may make sense to add a note in the documentation that fire-and-forget means you won't know if it succeeded or failed, and you may get unretrieved exception warnings in case of failure. I'll reopen this so asyncio experts can weigh in on what, if anything, should be improved here.
Yeah, I'm not too sure about linking to a third-party library either, I just don't have a better suggestion (although TaskGroup can certainly be recommended in some circumstances). Good engineering practice is not to ignore all exceptions though, so I'm sceptical that was the intended design, and not just something that was overlooked. Typically, a user wants to run tasks in the background without waiting for them to complete or needing to retrieve the results. That doesn't mean they don't want to know when they've broken their code and it's not running anything...
The coroutine running in the background task could contain a try/except clause that catches all exceptions and logs them. You could easily make a decorator that does this for your application so you don't have to repeat the try/except/log boilerplate in every background task you write. We're working on additional flexibility for TaskGroup that will make this pattern simpler. (See e.g. https://discuss.python.org/t/revisiting-persistenttaskgroup-with-kotlins-supervisorscope/18417)
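A minimal sketch of such a decorator (the `log_exceptions` and `refresh_cache` names are illustrative, not an existing API):

```python
import asyncio
import functools
import logging

logger = logging.getLogger(__name__)

def log_exceptions(coro_fn):
    """Wrap a coroutine function so failures are logged instead of ending up
    as an unretrieved task exception."""
    @functools.wraps(coro_fn)
    async def wrapper(*args, **kwargs):
        try:
            return await coro_fn(*args, **kwargs)
        except asyncio.CancelledError:
            raise  # let cancellation propagate normally
        except Exception:
            logger.exception("Background task %s failed", coro_fn.__name__)
    return wrapper

@log_exceptions
async def refresh_cache():  # hypothetical background job
    ...
```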
Right, I think if TaskGroup starts supporting logging exceptions as they occur, then it should be able to replace aiojobs.Scheduler. So, I think we should be recommending a long-lived TaskGroup for fire-and-forget tasks then, instead of the currently documented workaround. It might also be worth having a
In aiohttp, we have cleanup_ctx, where we'd use (Edit: I see discussion of a
TaskGroup itself won't be logging exceptions; you will have to write your own exception handler or done-callback for that. For TaskGroup.start() and .close() you can call `__aenter__()` and `__aexit__()` directly. I don't see an action item here.
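For illustration, a done-callback along these lines would retrieve and log the exception (a sketch using a plain task; the helper name is made up):

```python
import asyncio
import logging

logger = logging.getLogger(__name__)

def log_task_exception(task: asyncio.Task) -> None:
    if task.cancelled():
        return
    exc = task.exception()  # marks the exception as retrieved
    if exc is not None:
        logger.error("Task %r failed", task.get_name(), exc_info=exc)

async def main():
    task = asyncio.create_task(asyncio.sleep(0.1))  # stand-in for real work
    task.add_done_callback(log_task_exception)
    await asyncio.sleep(1)

asyncio.run(main())
```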
But, it will default to
Fair enough, I think generally people discourage calling magic methods directly, so I was thinking users will be more comfortable with a public API (I mean, no
[Submitted by mistake, see full comment below.]
Whoops, closed by mistake. Will resume in a bit. |
I would like to push back on this a little. Why do you want it to call The regular (Though TBH I'm not 100% sold on my own argument here, so maybe you have a good argument?)
Technically, a dunder is not "private" or "protected" in the sense that a single or double underscore conveys. For files, Regarding the original doc example, I don't necessarily like making all examples industrial-strength, because (to me) the purpose of an example is understanding, not a copy-pastable solution. (You end up with very verbose examples if you strive for the latter.) I'd be okay with a small update to that particular example though that reminds the reader that if the task fails, no exception will be printed anywhere. The user can then decide what to do about that.
It was mentioned at https://discuss.python.org/t/revisiting-persistenttaskgroup-with-kotlins-supervisorscope/18417/12. I was thinking that aiohttp installs its own default handler, but now that I look I'm not so sure. I'll give it a try later, but basically I'd expect to at least have exceptions appear on stdout if no log handler is provided.
In that case the exception is re-raised and can then be handled by other code, right? Whereas a background task's exception could only be re-raised when the application ends. So, in this scenario, I'd expect the exception to be handled immediately and not raise anything on shutdown.
My argument is really just to do the expected thing by default, i.e. a user who creates this Supervisor without much knowledge of best practices should not then end up spending hours or days trying to figure out why their code is not behaving correctly, simply because they never even knew an exception was happening in their code. It would also just be convenient, not having to add (and remember to add) a custom decorator or whatever every time you quickly create a task to throw into the Supervisor.
I'm not too bothered either way; as I say, we have cleanup_ctx in aiohttp, so it'll either look like:
or
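For illustration, a cleanup_ctx hook wrapping some long-lived task container might be shaped roughly like this (the `Supervisor` class below is a stand-in, not an existing stdlib or aiohttp API):

```python
import asyncio
from aiohttp import web

class Supervisor:
    """Stand-in for a long-lived TaskGroup/Supervisor-like object."""

    def __init__(self):
        self._tasks = set()

    def create_task(self, coro):
        task = asyncio.create_task(coro)
        self._tasks.add(task)
        task.add_done_callback(self._tasks.discard)
        return task

    async def aclose(self):
        for task in tuple(self._tasks):
            task.cancel()
        await asyncio.gather(*self._tasks, return_exceptions=True)

async def supervisor_ctx(app):
    # cleanup_ctx hook: code before `yield` runs at startup,
    # code after it runs at shutdown.
    app["supervisor"] = Supervisor()
    yield
    await app["supervisor"].aclose()

app = web.Application()
app.cleanup_ctx.append(supervisor_ctx)
```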
I have to apologize, my Discourse mailing list mode was off for a while and I missed that post (and many others). But, in https://discuss.python.org/t/asyncio-tasks-and-exception-handling-recommended-idioms/23806/11 it seems @achimnol is reversing course; I found no trace of
Okay, you have convinced me. I'd like @achimnol's response.
One last point I'd make on the logging front is that for smaller personal projects it's pretty common not to configure logging at all. In that case, you'd expect to see exceptions/warnings when you run the application from a terminal, or when looking at the web server logs, as they would be sent to stdout/stderr by default.
I've been busy closing up my company's Series A funding along with several big customer follow-ups. 😞 Recently, I began to look into this issue again, with some rewriting of the aiotools experimentation to make the key concepts in TaskGroup/Supervisor reusable for writing various coroutine aggregation patterns. I agree that "a user should not end up spending hours or days trying to figure out why their code is not behaving correctly, simply because they never even knew an exception was happening in their code", because I have also experienced this many times. 😭
Moving this issue status back to To Do as I do not believe that anyone is currently documenting this. But it may be helpful for someone interested in Docs to do so. |
Documentation
My understanding of asyncio is that all tasks etc. should be `await`ed on. If this is not done, then exceptions in tasks will never be seen and asyncio will give an "exception was not retrieved" warning. The documentation currently recommends this code for fire-and-forget task creation:
https://docs.python.org/3/library/asyncio-task.html#asyncio.create_task
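For reference, the recommended snippet on that page looks roughly like this:

```python
background_tasks = set()

for i in range(10):
    task = asyncio.create_task(some_coro(param=i))

    # Add task to the set. This creates a strong reference.
    background_tasks.add(task)

    # To prevent keeping references to finished tasks forever,
    # make each task remove its own reference from the set after
    # completion:
    task.add_done_callback(background_tasks.discard)
```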
This seems like bad practice as the tasks are never awaited (another problem might be that if this is copied inside a function, won't the set get garbage collected after the function ends?). To my knowledge, there is no trivial way to do fire-and-forget tasks using the stdlib, and I have always recommended that users try the aiojobs library to handle this, e.g. in the aiohttp documentation:
https://docs.aiohttp.org/en/latest/web_advanced.html#web-handler-cancellation
(Scroll down to the second warning, and the paragraph preceding it).
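For comparison, a minimal aiojobs sketch might look like this (assuming aiojobs' Scheduler/spawn/close API; details vary between versions):

```python
import asyncio
import aiojobs

async def background_job():
    ...  # work that should run without being awaited by the caller

async def main():
    scheduler = aiojobs.Scheduler()
    await scheduler.spawn(background_job())  # fire-and-forget, but supervised
    ...
    await scheduler.close()  # cancels/waits for outstanding jobs at shutdown

asyncio.run(main())
```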
I feel that this example should probably be scrapped, and a recommendation to use aiojobs for these tasks may be better.
I'd also suggest including the text of both error messages ("exception was never retrieved" and "Task was destroyed"), so that people putting those errors into a search engine are likely to end up on this page.