Currently, when just one work item is queued to the thread pool, two threads are released. When just one async IO operation completes, three threads are released.
In async IO completion dispatchers such as this, don't parallelize dispatching unless there is actually another work item to process.
Similarly, in the thread pool work item dispatcher, don't parallelize dispatching unnecessarily. There, another work item could be dequeued and set aside before parallelizing; the next thread would then process the set-aside work item (or some other thread would pick it up if there are no other work items).
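The set-aside idea can be sketched as follows. This is a minimal, single-threaded Python sketch, not the actual runtime code; the `Dispatcher` class, `request_thread`, and `_try_dequeue` are invented names, and for simplicity the same thread continues with the set-aside item rather than handing it to the newly released thread:

```python
import queue

class Dispatcher:
    """Hypothetical dispatcher that only requests another thread
    when a second work item has actually been dequeued."""

    def __init__(self):
        self._queue = queue.SimpleQueue()
        self.threads_requested = 0  # stand-in for waking a pool thread

    def enqueue(self, work):
        self._queue.put(work)

    def request_thread(self):
        # Real code would release/spawn a worker; here we just count.
        self.threads_requested += 1

    def dispatch(self):
        """Worker loop: run items; parallelize only when a second item exists."""
        item = self._try_dequeue()
        while item is not None:
            # Dequeue and set aside the next item BEFORE deciding to parallelize.
            set_aside = self._try_dequeue()
            if set_aside is not None:
                # There is genuinely more work: request exactly one extra thread.
                self.request_thread()
            item()  # execute the current work item
            # In the real scheme the released thread would take the set-aside
            # item; this single-threaded sketch just continues with it.
            item = set_aside if set_aside is not None else self._try_dequeue()

    def _try_dequeue(self):
        try:
            return self._queue.get_nowait()
        except queue.Empty:
            return None
```

With one queued item, `dispatch` runs it and requests no threads; with two, it requests exactly one extra thread, matching the goal of not releasing more threads than there is work.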
Both use similar parallelizing schemes. Investigate modifying them into 3-stage schemes that prevent an enqueuer from requesting a thread while another thread is already checking for work to decide whether to parallelize further. This would ensure that threads are requested sequentially, and would avoid overeager parallelization in some cases.
These changes should help to reduce CPU usage in scenarios where the thread pool gets periodic short bursts of work.
Tagging subscribers to this area: @mangod9
See info in area-owners.md if you want to be subscribed.