This repository has been archived by the owner on Apr 26, 2024. It is now read-only.

Enable unused-awaitable error for mypy #14519

Closed
wants to merge 19 commits

Conversation

MatMaul
Contributor

@MatMaul MatMaul commented Nov 22, 2022

Since I've already been bitten by this twice, let's see if this check makes sense.

This is best reviewed on a per-commit basis; each commit handles a different type of ignorable reported error.

There are also a couple of commits that look more like small actual bugs than false positives.

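For context, the check in question is mypy's `unused-awaitable` error code, which flags expressions of `Awaitable` type whose value is discarded. Assuming the project configures mypy via an ini-style file, it can be enabled like this:

```ini
[mypy]
enable_error_code = unused-awaitable
```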
Pull Request Checklist

@MatMaul
Contributor Author

MatMaul commented Nov 22, 2022

It seems like `# type: ignore` comments usually come with an explanatory comment.

Do we want to do the same here, or does relying on the git history to find this info look good enough?

EDIT: I am planning to not squash this PR to keep the rationales around in the commit messages.

@MatMaul MatMaul changed the title Enable unused-awaitable for mypy Enable unused-awaitable error for mypy Nov 22, 2022
@DMRobertson
Contributor

I think a few of these are functions which are decorated by wrap_as_background_process; I don't think it's a problem that their invocations are missing an await?

In general it seems like this has a fairly high false positive rate, which is sad to see.

@clokep
Member

clokep commented Nov 22, 2022

Do we want to do the same here or relying on the git history to find this info looks good enough ?

EDIT: I am planning to not squash this PR to keep the rationales around in the commit messages.

It looks like the git history just says "ignore these" -- I think squashing those all together would be fine.

I suspect it would be better to include the reasoning in comments in the code or add comments to run_background_process and friends to explain this.

@MatMaul
Contributor Author

MatMaul commented Nov 22, 2022

I think a few of these are functions which are decorated by wrap_as_background_process; I don't think it's a problem that their invocations are missing an await?

Indeed, not sure why I didn't notice. So no bugs; I'll include them in the relevant commit.

It looks like the git history just says "ignore these" -- I think squashing those all together would be fine.

I should add more details in the commit messages to explain the rationale for each type of change.

I suspect it would be better to include the reasoning in comments in the code or add comments to run_background_process and friends to explain this.

That makes complete sense for run_background type of ignores where I can just comment those methods.

For e39d757 it's a bit more annoying, however, since it is the same rationale in quite a number of different places.
I could keep this commit separate, put a detailed explanation in the commit message, and add a one-liner ("please see the detailed explanation in the commit message") for each instance?

@clokep
Member

clokep commented Jan 25, 2023

@MatMaul Is this worth following-up with or should we close it?

@DMRobertson
Contributor

@MatMaul Is this worth following-up with or should we close it?

Per #14947 (comment), my vote is for landing this.

@DMRobertson
Contributor

I've merged in develop and will fixup what I can.

Contributor

@DMRobertson DMRobertson left a comment


I think this looks good, but I did do some of this myself---so would like another set of eyes.

AFAICS there are three categories of warnings we're suppressing here:

  • run_in_background and run_as_background_process return deferreds so you can block until---or do something after---the background task completes.
  • we call a method on a Deferred which returns self, and mypy complains that the return value is unused
  • synchronous ops that return a Deferred (e.g. popping from a Dict[str, Deferred])

The first class causes a lot of noise here. I wonder if we should make run_in_background and run_as_background_process return None---I don't think we ever use their return values. (And if we did, we could have two variants: e.g. run_in_background_and_discard versus one which does return a deferred, say run_in_background_returning_handler or something.)

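The two-variant idea could look roughly like the sketch below. This is an asyncio analogue (Synapse uses Twisted, and `run_in_background_and_discard` is a hypothetical name from the discussion, not a real API):

```python
import asyncio
from typing import Any, Callable, Coroutine


def run_in_background_and_discard(
    f: Callable[..., Coroutine[Any, Any, Any]], *args: Any
) -> None:
    # Hypothetical variant: schedule f but deliberately return None,
    # so the unused-awaitable check has nothing to flag at call sites.
    asyncio.ensure_future(f(*args))


async def main() -> list:
    log = []

    async def task() -> None:
        log.append("ran")

    run_in_background_and_discard(task)
    await asyncio.sleep(0)  # yield once so the scheduled task can run
    return log


print(asyncio.run(main()))  # → ['ran']
```

The non-discarding variant would simply keep the current behaviour of returning the deferred (or, here, the task handle).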
changelog.d/14519.misc
Comment on lines +62 to 64
defer.ensureDeferred( # type: ignore[unused-awaitable]
run_as_background_process("background_updates", run_background_updates)
)
Contributor

Is the call to ensureDeferred actually doing anything here? run_as_background_process returns a Deferred and ensureDeferred is a no-op when its argument is a Deferred.

(I also find this script really confusing---the way it starts the reactor and gets the task being run to stop the reactor!)

Contributor

I suspect that we can remove the ensureDeferred call.

gc_task.start(0.1)
gc_task.start(0.1) # type: ignore[unused-awaitable]
Contributor

This returns a deferred which fires when gc_task.stop() is called---which it never is---or if the task fails. Seems safe to ignore.

@@ -635,7 +635,7 @@ async def _ctx_manager() -> AsyncIterator[None]:
# writer waiting for us and it completed entirely within the
# `new_defer.callback()` call above.
if self.key_to_current_writer.get(key) == new_defer:
self.key_to_current_writer.pop(key)
self.key_to_current_writer.pop(key) # type: ignore[unused-awaitable]
Contributor

Not sure what is going on here. I think this is safe?

Thoughts:

  • I wonder if del self.key_to_current_writer[key] would work here?
  • Is it possible for self.key_to_current_writer.get(key) to change between line 637 and 638? (I don't think so---there shouldn't be another thread accessing this?)
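A minimal illustration (plain dict, no Twisted) of why `del` would sidestep the warning while `pop` triggers it:

```python
# Stand-in for Synapse's map of per-key writer Deferreds; plain objects
# suffice here since only the dict operations matter.
key_to_current_writer: dict = {"!room:example.org": object()}

# d.pop(key) is an expression that returns the value, so when the value
# is a Deferred and the result is discarded, unused-awaitable fires:
#     key_to_current_writer.pop(key)  # type: ignore[unused-awaitable]
# `del` is a statement with no value, so there is nothing to flag:
del key_to_current_writer["!room:example.org"]

print("!room:example.org" in key_to_current_writer)  # → False
```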

Contributor

I wonder if del self.key_to_current_writer[key] would work here?

Looks like it should?

Is it possible for self.key_to_current_writer.get(key) to change between line 637 and 638? (I don't think so---there shouldn't be another thread accessing this?)

I don't think so either. The ReadWriteLock is designed to be async-safe, but not thread safe.

@@ -287,7 +287,7 @@ def test_validate_config(self) -> None:
"""Provider metadatas are extensively validated."""
h = self.provider

def force_load_metadata() -> Awaitable[None]:
def force_load_metadata() -> "OpenIDProviderMetadata":
Contributor

This one is mine. I think this fix is correct(?)

@@ -451,7 +451,7 @@ def test_client_does_not_retry_on_400_plus(self):
self.failureResultOf(d)

def test_client_sends_body(self):
defer.ensureDeferred(
defer.ensureDeferred( # type: ignore[unused-awaitable]
Contributor

Should this have a self.get_success around it? Or is the self.pump below sufficient to ensure this gets called?

Contributor

ensureDeferred will run the async function up until the first blocking await, i.e. it should be fine.
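The "runs until the first blocking await" behaviour can be illustrated with a plain coroutine driven by hand, which is roughly what ensureDeferred does when handed a coroutine (stdlib-only sketch, not actual Twisted code):

```python
import types

log = []


@types.coroutine
def suspend():
    # A bare suspension point, standing in for a blocking await.
    yield


async def handler():
    log.append("before first await")  # executed synchronously on start
    await suspend()
    log.append("after first await")


coro = handler()
coro.send(None)  # drive the coroutine to its first suspension point

print(log)  # → ['before first await']
```

The body before the first await runs eagerly; everything after it only runs once the suspension is resumed (in Twisted, when the awaited Deferred fires).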

@@ -297,7 +299,9 @@ def test_complete_appservice_txn_updates_last_txn_state(
service = Mock(id=self.as_list[0]["id"])
events = [Mock(event_id="e1"), Mock(event_id="e2")]
txn_id = 5
self._set_state(self.as_list[0]["id"], ApplicationServiceState.UP)
self.get_success(
Contributor

Also not sure if this needs to be here? 🤷

Contributor

Looks reasonable!

@@ -26,7 +26,7 @@ class DeferredCacheTestCase(TestCase):
def test_empty(self) -> None:
cache: DeferredCache[str, int] = DeferredCache("test")
with self.assertRaises(KeyError):
cache.get("foo")
cache.get("foo") # type: ignore[unused-awaitable]
Contributor

AFAICS cache is roughly a clever Mapping[str, Deferred] throughout this file, and we're checking that we can get and set its contents. It's okay that we're not awaiting on them. But please double-check this.

@squahtx
Contributor

squahtx commented Feb 6, 2023

I think this looks good, but I did do some of this myself---so would like another set of eyes.

AFAICS there are three categories of warnings we're suppressing here:

  • run_in_background and run_as_background_process return deferreds so you can block until---or do something after---the background task completes.

The first class causes a lot of noise here. I wonder if we should make run_in_background and run_as_background_process return None---I don't think we ever use their return values. (And if we did, we could have two variants: e.g. run_in_background_and_discard versus one which does return a deferred, say run_in_background_returning_handler or something.)

nb: we're supposed to await run_in_background, otherwise CPU usage accounting gets lost when the background task outlives the request that started it. We also get wonderful "Re-starting finished log context" log spam when that happens.

@squahtx squahtx requested a review from a team February 6, 2023 21:18
@DMRobertson
Contributor

nb: we're supposed to await run_in_background, otherwise CPU usage accounting gets lost when the background task outlives the request that started it. We also get wonderful "Re-starting finished log context" log spam when that happens.

Huh, thanks. That's news to me---the docstring at

Note that if you completely discard the result, you should make sure that
`f` doesn't raise any deferred exceptions, otherwise a scary-looking
CRITICAL error about an unhandled error will be logged without much
indication about where it came from.
made it sound like it was kind-of-okay to not await run_in_background.

@DMRobertson
Contributor

we're supposed to await run_in_background

Do you mean specifically using the await keyword here, or more generally awaiting by feeding it into Twisted's event loop?

Also, are the examples of using run_in_background in https://matrix-org.github.io/synapse/latest/development/synapse_architecture/cancellation.html#uncancelled-processing correct?

@squahtx
Contributor

squahtx commented Feb 7, 2023

we're supposed to await run_in_background

Do you mean specifically using the await keyword here, or more generally awaiting by feeding it into Twisted's event loop?

the await keyword

Also, are the examples of using run_in_background in https://matrix-org.github.io/synapse/latest/development/synapse_architecture/cancellation.html#uncancelled-processing correct?

Good question. I think those are fine because there's an implicit wait for the task to finish (or at least hit the to_resolve.callback(None) line, after which there's no more async code).

... actually maybe those examples aren't okay. They'll certainly not re-start any finished logging contexts, but the active logging context will get marked as finished inside the to_resolve.callback(None) line, before it has been deactivated. Logging contexts are way too confusing.

Contributor

@squahtx squahtx left a comment


In principle I like the idea of turning this check on. But there is an awfully large number of false positives, which are easy to get fatigued by. I'm also fairly sure we shouldn't be discarding the result of run_in_background.


Mathieu Velten and others added 2 commits March 13, 2023 12:54
@MatMaul
Contributor Author

MatMaul commented Mar 13, 2023

I'm also fairly sure we shouldn't be discarding the result of run_in_background.

I am unsure what to do about that.
I think in most cases where we don't currently use the result, we want to switch to run_as_background_process?
I am inferring that from the docstrings, since run_as_background_process explicitly mentions this case ("or for firing-and-forgetting in the middle of a normal synapse async function").

@squahtx
Contributor

squahtx commented Mar 13, 2023

I'm also fairly sure we shouldn't be discarding the result of run_in_background.

I am unsure what to do about that. I think in most cases where we don't currently use the result, we want to switch to run_as_background_process?

That's probably the best way forward right now. Though I'd note that this has some implications for where resource usage is accounted in metrics, and it also makes it harder to determine which request kicked off which bit of processing.

@DMRobertson
Contributor

We've had this on our board for a while and I don't think we have the bandwidth to drive it forward. I'd love to see this fixed and I think we've learned useful information from this, but I think we should park this for now.

If anyone gets time to look at this in the future, I'd suggest tackling one of the classes identified in #14519 (review) to hopefully make a smaller PR.

4 participants