Add thread cache #1545
Conversation
On my Linux laptop, this makes `await trio.to_thread.run_sync(lambda: None)` about twice as fast, from ~150 µs to ~75 µs.

Closes: python-triogh-6

Test program:

```python
import trio
import time

COUNT = 10000

async def main():
    while True:
        start = time.monotonic()
        for _ in range(COUNT):
            await trio.to_thread.run_sync(lambda: None)
        end = time.monotonic()
        print("{:.2f} µs/job".format((end - start) / COUNT * 1e6))

trio.run(main)
```
Huh, the failed test was a genuine, unrelated bug – see #1546.
OK, maybe #1548 will help with the test failure... I guess we're consistently seeing these test failures here, and only here, because …
Codecov Report

```diff
@@           Coverage Diff           @@
##           master    #1545   +/-  ##
========================================
  Coverage   99.67%   99.68%
========================================
  Files         108      110     +2
  Lines       13358    13466   +108
  Branches     1012     1024    +12
========================================
+ Hits        13315    13423   +108
  Misses         28       28
  Partials       15       15
```
Phew, tests finally passing, and I think this is ready to review. Random question: do you think it's more intuitive to have …
My intuition favors …
Looks good except for one suggestion.
trio/_core/_thread_cache.py (Outdated)

```python
class ThreadCache:
    def __init__(self):
        self._idle_workers = {}
        self._cache_lock = Lock()
```
You don't seem to use the `_cache_lock` anywhere. If you remove it, you could even make `THREAD_CACHE` be the dict directly, and move `ThreadCache.start_thread_soon()` into the global `start_thread_soon()`.
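To illustrate the shape the reviewer is suggesting, here is a minimal sketch of a module-level thread cache: a bare dict of idle workers and a global `start_thread_soon()`, with no wrapper class or lock. All names here are illustrative, not Trio's actual internals, and this toy version deliberately ignores the idle-exit race discussed later in the thread.

```python
import threading
from queue import Queue, Empty

# Hypothetical module-level cache: maps idle WorkerThread -> None,
# used as an ordered set. Not Trio's real data structure.
IDLE_WORKERS = {}

class WorkerThread:
    def __init__(self):
        self._jobs = Queue()
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def _run(self):
        while True:
            try:
                # Wait briefly for a new job; if none arrives, retire.
                fn, deliver = self._jobs.get(timeout=1.0)
            except Empty:
                IDLE_WORKERS.pop(self, None)
                return
            result = fn()
            # Park ourselves back in the cache before delivering, so the
            # receiver can immediately reuse this thread.
            IDLE_WORKERS[self] = None
            deliver(result)

    def submit(self, fn, deliver):
        self._jobs.put((fn, deliver))

def start_thread_soon(fn, deliver):
    # Reuse a cached idle worker if available; otherwise spawn a new one.
    try:
        worker, _ = IDLE_WORKERS.popitem()
    except KeyError:
        worker = WorkerThread()
    worker.submit(fn, deliver)
```

Note that popping an idle worker here races against that worker deciding to retire, which is exactly the scenario the `test_race_between_idle_exit_and_job_assignment` test mentioned below exists to cover; the real implementation has to handle that handoff carefully.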
Nice catch on `_cache_lock`!

We currently have a test that instantiates a private `ThreadCache` object (`test_race_between_idle_exit_and_job_assignment`), and I don't see how to easily make it work otherwise, so I guess I'll leave the class there for now.
Yeah, having slept on it I think you're right – also …
```diff
@@ -93,7 +93,7 @@ def __init__(self):
         self._idle_workers = {}
         self._cache_lock = Lock()

-    def start_thread_soon(self, deliver, fn):
+    def start_thread_soon(self, fn, deliver):
```
For what it's worth, yes, this new order seems more intuitive to me, since `deliver` is called after `fn`.
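The point about argument order can be seen in a toy stand-in (not Trio's actual implementation): the worker runs `fn` first, then hands its result to `deliver`, so `(fn, deliver)` lists the arguments in execution order.

```python
import threading

# Toy illustration of the (fn, deliver) order: the spawned thread runs
# fn first, then passes its return value to deliver.
def run_in_thread(fn, deliver):
    def worker():
        deliver(fn())
    t = threading.Thread(target=worker)
    t.start()
    return t

results = []
t = run_in_thread(lambda: 2 + 2, results.append)
t.join()
# results == [4]
```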