The argument is that a bona fide blocking operation will cause a thread to suspend at the kernel scheduler level, just as a `cede` causes a fiber to suspend at the runtime level. That is to say, blocking syscalls introduce fairness boundaries at the kernel level in exactly the same way that `async` introduces a fairness boundary at the runtime level. Furthermore, `async` does reset the auto-cede counter.
The motivation is an optimization: if you have a long-running fiber with `IO.blocking(...)` calls interspersed with other non-suspending ops, the runtime makes an effort to keep that fiber pinned to its blocking thread instead of sending it on round-trips back to the compute pool. However, that fiber will eventually hit the auto-cede and be forced to make the round-trip to the compute pool. As argued above, this is completely unnecessary: it does not increase fairness, it only wastes resources bouncing the fiber between blocking threads and the compute pool when it could have stayed pinned to a single blocking thread all along.
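As a rough sketch of the fiber shape in question (the names `readChunk`, `process`, and `PinnedBlocking` are hypothetical, not from the Cats Effect codebase): a loop alternating blocking calls with non-suspending work, which is exactly the pattern that would benefit from `IO.blocking` resetting the auto-cede counter.

```scala
import cats.effect.{IO, IOApp}

object PinnedBlocking extends IOApp.Simple {

  // Stand-in for a blocking syscall (e.g. a read on a classic
  // java.io stream). The runtime shifts this to a blocking thread
  // and tries to keep the fiber pinned there for subsequent steps.
  def readChunk: IO[Array[Byte]] =
    IO.blocking(new Array[Byte](4096))

  // Non-suspending CPU work interspersed between the blocking calls.
  def process(chunk: Array[Byte]): IO[Unit] =
    IO(chunk.length).void

  // Without a counter reset, a long enough loop eventually trips the
  // auto-cede and forces a round-trip back to the compute pool, even
  // though each iteration already yields at the kernel level.
  def loop(n: Int): IO[Unit] =
    if (n <= 0) IO.unit
    else readChunk.flatMap(process) >> loop(n - 1)

  val run: IO[Unit] = loop(1000)
}
```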
Fwiw, if we aren't on the WSTP, `blocking` does indeed reset the counter. We only preserve the existing counter when we use the blocking support on the WSTP.
Also, this optimization is valuable even when you have a single blocking call on a fiber. For example consider a sequence like:
```scala
IO.blocking(/* resolve DNS */) *>
  doNonSuspendingStuff *>
  IO.async { /* connect to remote server */ }
```
If we reset the counter, there is more runway to make it all the way to the `async` call without having to suspend in the middle and get shunted back to compute. Then, when we hit the `async`, we end up suspending naturally anyway.
Initially discussed on Discord.