
Library that wraps blocking IO system calls with a 1:1-threaded task. #3367

Closed
bblum opened this issue Sep 3, 2012 · 7 comments
Labels
A-concurrency (Area: Concurrency)
A-runtime (Area: std's runtime and "pre-main" init for handling backtraces, unwinds, stack overflows)
C-enhancement (Category: An issue proposing an enhancement or a PR with one)
E-easy (Call for participation: Easy difficulty. Experience needed to fix: Not much.)
Good first issue

Comments

@bblum
Contributor

bblum commented Sep 3, 2012

jld mentioned on IRC that it would be bad if a blocking IO system call blocked the entire rust scheduler thread, so that it couldn't schedule other tasks in the meantime.

It would be useful to have a library which wraps blocking IO calls in a call to task::task().sched_mode(manual_threads(1)).spawn, so that when the kernel blocks that thread, rust can still schedule other tasks on the existing scheduler. (The calling task would block on the exit+response of the 1:1-threaded task.)

An interesting research/heuristic part of this would be figuring out when creating the new scheduler thread would be more expensive than just calling and blocking directly.
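
A minimal sketch of this pattern in present-day Rust, since the task::task().sched_mode(manual_threads(1)).spawn API described above no longer exists: a hypothetical `run_blocking` helper hands the blocking call to a dedicated 1:1 kernel thread and has the caller wait on a channel for the result. A green-threaded or async scheduler would park only the calling task at that wait; with plain `std::thread`, the `recv` below still parks the calling OS thread.

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical wrapper: run `blocking_op` on a dedicated 1:1 kernel thread
// so that only that thread sits in the blocking syscall. The caller waits
// on a channel for the "exit+response" of the spawned worker.
fn run_blocking<T, F>(blocking_op: F) -> T
where
    T: Send + 'static,
    F: FnOnce() -> T + Send + 'static,
{
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        // If the kernel blocks this thread, other tasks keep running
        // on the original scheduler thread(s).
        let _ = tx.send(blocking_op());
    });
    // A green-thread scheduler would park only the calling *task* here;
    // with plain std::thread, recv parks the calling OS thread instead.
    rx.recv().expect("blocking worker thread panicked")
}

fn main() {
    let hostname = run_blocking(|| std::fs::read_to_string("/etc/hostname"));
    println!("{:?}", hostname);
}
```

The cost heuristic from the last paragraph would live inside such a helper: skip spawning the extra thread when the expected blocking time is shorter than the cost of creating one.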

@atris

atris commented Sep 3, 2012

That means creating a new kernel thread and mapping it to the blocking IO thread?

@bblum
Contributor Author

bblum commented Sep 3, 2012

Yes.

@atris

atris commented Sep 4, 2012

I think it would be better to create a new kernel thread and map it to the blocking IO thread. This way the rust scheduler probably doesn't have to worry about the blocking thread, and the OS can still schedule and manage it.

@bblum
Contributor Author

bblum commented Sep 4, 2012

Is that not what you/I just said in the previous comments? I believe this is the same thing.

The rust scheduler should have to worry about the blocking thread. On a machine with 1 CPU (and hence 1 scheduler thread by default), the task that invokes this library wrapper should block in a rust-scheduler-aware way, so that other tasks may run while the IO completes.

@atris

atris commented Sep 4, 2012

Yeah,

It is just an old habit to confirm my thinking :)

Yeah, I agree... I missed the point a bit...

@msullivan
Contributor

Is this how we want to do this? I was under the impression that we were going to use wrappers based on non-blocking IO.
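
For contrast, a rough sketch of the non-blocking style mentioned here, using today's `std::net` API (the `try_read_once` helper is hypothetical, not part of any runtime): the call returns immediately with `WouldBlock` instead of parking a kernel thread, leaving the scheduler free to reschedule the task until the descriptor is ready.

```rust
use std::io::{self, Read};
use std::net::TcpStream;

// Hypothetical helper: poll a non-blocking socket once, returning
// Ok(None) if the read would block instead of parking the thread.
fn try_read_once(stream: &mut TcpStream, buf: &mut [u8]) -> io::Result<Option<usize>> {
    match stream.read(buf) {
        Ok(n) => Ok(Some(n)),
        Err(e) if e.kind() == io::ErrorKind::WouldBlock => Ok(None),
        Err(e) => Err(e),
    }
}

fn main() -> io::Result<()> {
    let mut stream = TcpStream::connect("127.0.0.1:8080")?;
    stream.set_nonblocking(true)?;
    let mut buf = [0u8; 1024];
    // A real runtime would register the fd with an event loop and
    // reschedule the task; here we only demonstrate the WouldBlock path.
    match try_read_once(&mut stream, &mut buf)? {
        Some(n) => println!("read {n} bytes"),
        None => println!("no data yet; the task could yield here"),
    }
    Ok(())
}
```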

@msullivan
Contributor

Closing this, since it isn't the approach we want for most of our IO functions, and it isn't clear whether there is library code currently for which it is the right approach.
