
Add std::thread::available_concurrency #74480

Merged: 1 commit, merged Oct 18, 2020

Conversation

@yoshuawuyts (Member) commented Jul 18, 2020

This PR adds a counterpart to C++'s std::thread::hardware_concurrency to Rust, tracking issue #74479.

cc/ @rust-lang/libs

Motivation

Being able to know how many hardware threads a platform supports is a core part of building multi-threaded code. In C++11 this became available through the std::thread::hardware_concurrency API. Currently most of the Rust ecosystem depends on the num_cpus crate (no. 35 in the top 500 crates) for this functionality. This PR proposes an API that provides access to the number of hardware threads available on a given platform.

edit (2020-07-24): The purpose of this PR is to provide a hint for how many threads to spawn to saturate the processor. There's value in introducing APIs for NUMA and Windows processor groups, but those are intentionally out of scope for this PR. See: #74480 (comment).
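
For context, a rough sketch of the pattern most of the ecosystem uses today (via the num_cpus crate mentioned above; the thread-pool sizing here is illustrative, not part of this PR):

// Sketch: sizing a set of workers from num_cpus::get(), the de-facto ecosystem approach.
fn main() {
    let workers = num_cpus::get();
    println!("spawning {} worker threads", workers);

    let handles: Vec<_> = (0..workers)
        .map(|i| std::thread::spawn(move || println!("worker {} running", i)))
        .collect();
    for handle in handles {
        handle.join().unwrap();
    }
}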

Naming

Discussing the naming of the API on Zulip surfaced two options:

  • std::thread::hardware_concurrency
  • std::thread::hardware_threads

Both options seemed acceptable, but overall people gravitated the most towards hardware_threads. Additionally, @jonas-schievink pointed out that the "hardware threads" terminology is well established and is used in, among others, the RISC-V specification (page 20):

A component is termed a core if it contains an independent instruction fetch unit. A RISC-V-compatible core might support multiple RISC-V-compatible hardware threads, or harts, through multithreading.

It's also worth noting that the original paper introducing C++'s std::thread submodule unfortunately doesn't feature any discussion on the naming of hardware_concurrency, so we can't use that to help inform our decision here.

Return type

An important consideration @joshtriplett brought up is that we don't want to default to 1 on platforms where the number of available threads cannot be retrieved. Instead we want to inform users that we don't know and allow them to handle that case. That is why this PR uses Option<NonZeroUsize> as its return type, where None is returned on platforms where we don't know the number of hardware threads available.

The reasoning for NonZeroUsize over usize is that if the number of threads for a platform is known, it will always be at least 1. As evidenced by the example, the NonZero* family of APIs may currently not be the most ergonomic to use, but improving their ergonomics is something I think we can address separately.
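
A minimal sketch of what calling code might look like under this signature (the function below is a stub standing in for the proposed std API, so the sketch is self-contained):

use std::num::NonZeroUsize;

// Stand-in for the proposed std::thread API; pretend the platform reported 8 hardware threads.
fn available_concurrency() -> Option<NonZeroUsize> {
    NonZeroUsize::new(8)
}

fn main() {
    // The caller decides what "unknown" means; here we fall back to a single worker.
    let workers = available_concurrency().map(NonZeroUsize::get).unwrap_or(1);
    println!("using {} workers", workers);
}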

Implementation

@Mark-Simulacrum pointed out that most of the code we wanted to expose here was already available under libtest. So this PR mostly moves the internal code of libtest into a public API.

@rust-highfive (Collaborator)

r? @dtolnay

(rust_highfive has picked a reviewer for you, use r? to override)

@rust-highfive added the S-waiting-on-review label (Status: Awaiting review from the assignee but also interested parties.) on Jul 18, 2020
@luser (Contributor) commented Jul 18, 2020

To clarify, would this report the number of "logical" cores on CPUs that support hyperthreading? That's probably the right choice if the API intends to answer the question "how much parallelism is reasonable".

@yoshuawuyts (Member, Author)

To clarify, would this report the number of "logical" cores on CPUs that support hyperthreading?

@luser Yes, I believe it should. Since I didn't write the backing impl I had to verify, but the num_cpus::get implementation takes the same approach as the code in rustc (as opposed to num_cpus::get_physical).

Perhaps we should clarify in the docs what we mean by "hardware threads"?

@dfamonteiro

To clarify, would this report the number of "logical" cores on CPUs that support hyperthreading? That's probably the right choice if the API intends to answer the question "how much parallelism is reasonable".

That is a great point. Related to this, is it worthwhile to have two functions that distinguish between the number of logical cores and the number of hardware cores?

@ehuss (Contributor) commented Jul 18, 2020

Would you mind retaining the original code that used cfg_if? It was intentionally written with a fall-through.

Also, my impression is that all platform-specific code should be in the sys module. I'm not sure how much that is enforced, though.

You might also want to address that the num_cpus code has been maintained more actively and supports more platforms than this does. Any particular reason not to use that?
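
For reference, the fall-through shape being described looks roughly like the sketch below (using the cfg_if crate; the per-platform bodies are placeholders, not the actual libtest code):

// Sketch of the cfg_if fall-through pattern: targets that match no explicit arm
// land in the final `else` instead of failing to compile.
cfg_if::cfg_if! {
    if #[cfg(any(target_os = "linux", target_os = "android"))] {
        fn num_cpus() -> usize {
            1 // placeholder for the sysconf/sched_getaffinity-based count
        }
    } else if #[cfg(windows)] {
        fn num_cpus() -> usize {
            1 // placeholder for the GetSystemInfo-based count
        }
    } else {
        fn num_cpus() -> usize {
            1 // fall-through: unknown targets (e.g. target_os = "none") get a conservative default
        }
    }
}

fn main() {
    println!("{}", num_cpus());
}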

@mati865 (Contributor) commented Jul 18, 2020

"Hardware threads" is the correct term, but it can be quite confusing for some people. To make it easier, you can view it as:

  • software threads - created by the app inside the OS; not related to the number of CPU threads
  • hardware threads - in great simplification, how many threads the CPU can handle simultaneously (for SMT CPUs it's cores * <threads per core>)
  • cores - physical cores; SMT threads are not included

@guswynn (Contributor) commented Jul 18, 2020

This doesn't seem to have support for target_os="none", unless I'm totally missing how cfg(target_os) works?

@yoshuawuyts (Member, Author)

What @mati865 says is accurate. Running the num_cpus APIs on my AMD computer yields the following results:

fn main() {
    println!("hardware threads: {}", num_cpus::get());
    println!("hardware cores: {}", num_cpus::get_physical());
}

hardware threads: 24
hardware cores: 12

@dfamonteiro I'd like to keep the scope of this PR minimal, but it's certainly possible an API along the lines of std::thread::hardware_cores could be introduced in the future. One takeaway I'm having from this conversation so far is that we need to do a better job of explaining what a "hardware thread" is in the documentation.

@yoshuawuyts (Member, Author)

@ehuss ah yes, apologies -- seems I missed that in the rebase. Fixed it!

@guswynn it now does (:

@the8472 (Member) commented Jul 18, 2020

What @mati865 says is accurate. Running the num_cpus APIs on my AMD computer yields the following results:

fn main() {
    println!("hardware threads: {}", num_cpus::get());
    println!("hardware cores: {}", num_cpus::get_physical());
}

hardware threads: 24
hardware cores: 12

num_cpus does not actually tell you the number of hardware threads; it tells you the maximum scheduling capacity available to the current process, which can be lower than the number of hardware threads if CPU affinities, cgroups, or similar things are applied. This is probably what most thread pools actually want.

Returning the number of hardware threads would lead to oversubscription in containers.

Java does the same: https://www.docker.com/blog/improved-docker-container-integration-with-java-10/
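
A Linux-only sketch of the affinity-respecting count described above, using sched_getaffinity from the libc crate (roughly the approach num_cpus takes on Linux; not the code in this PR):

// Counts the CPUs the current process is actually allowed to run on,
// which respects taskset/cgroup cpuset restrictions.
#[cfg(target_os = "linux")]
fn schedulable_cpus() -> Option<usize> {
    unsafe {
        let mut set: libc::cpu_set_t = std::mem::zeroed();
        // pid 0 means "the calling process".
        if libc::sched_getaffinity(0, std::mem::size_of::<libc::cpu_set_t>(), &mut set) == 0 {
            Some(libc::CPU_COUNT(&set) as usize)
        } else {
            None
        }
    }
}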

@SimonSapin (Contributor)

None is returned on platforms where we don't know the number of hardware threads available.

This Option::None value represents not the absence of hardware threads, but a failure to retrieve information about them. It should therefore be Result::Err(something) instead.

we don't want to default to 1 for platforms where the number of available threads cannot be retrieved

For the same reason, it seems undesirable to return 1 on supported platforms when the corresponding libc call returns an error. Why not also return Err(something) in those situations? std::io::Error might be appropriate.

@yoshuawuyts (Member, Author)

Updated with @SimonSapin's and Leo Le Boueter's feedback; we now return io::Result<NonZeroUsize> because syscalls may fail, and we should forward the error.
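
A sketch of the updated shape, reduced to a single Unix path for illustration (the real change goes through std's platform-specific internals; the sysconf call here is only an assumption about one such path):

use std::io;
use std::num::NonZeroUsize;

// Unix-only sketch: a failing or nonsensical syscall result becomes an io::Error
// instead of silently defaulting to 1.
#[cfg(unix)]
fn available_concurrency() -> io::Result<NonZeroUsize> {
    let count = unsafe { libc::sysconf(libc::_SC_NPROCESSORS_ONLN) };
    if count < 1 {
        return Err(io::Error::last_os_error());
    }
    NonZeroUsize::new(count as usize)
        .ok_or_else(|| io::Error::new(io::ErrorKind::Other, "zero CPUs reported"))
}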

@retep998 (Member)

There needs to be a loud section about the limitations of this API on Windows: specifically, that it only returns the number of logical processors in the current processor group, which is capped at 64 logical processors. Supporting more than 64 logical processors requires explicit support for enumerating processor groups and assigning threads to logical processors in other groups.

https://docs.microsoft.com/en-us/windows/win32/procthread/processor-groups

@joshtriplett (Member)

@retep998 Rather than having that limitation, we should be calling the APIs that let us figure out the total number of hardware threads, in all processor groups.

@yoshuawuyts (Member, Author)

@joshtriplett @retep998 Added an entry to the tracking issue under "known issues" noting that the number of threads reported on Windows is at most 64. Also tracking @luser's report that the interaction with CPU affinity on Linux doesn't seem right.

@mati865 (Contributor) commented Jul 19, 2020

@joshtriplett and place a warning that users will be limited to 64 hardware threads unless they apply a Windows-specific workaround. I suppose they will rather use a crate that does it for them...

@retep998 (Member)

@joshtriplett Threads will not run on other processor groups by default. Normally threads only have access to the logical processors in the processor group for that process, so std::thread::hardware_threads should not count logical processors in other groups. If we want to count logical processors outside the current processor group, then we need to provide a more comprehensive API that reports information on processor groups and NUMA nodes and CPU sets, and have APIs to assign threads appropriately.

@joshtriplett (Member)

@retep998 Is it possible for a thread's affinity to be for multiple processor groups at once, or does a thread have to be limited to a single processor group at a time no matter how it's started or modified? If the former, we could use that and start threads that can run anywhere by default. If the latter, do Rayon and other libraries use the appropriate APIs at the moment to start threads on every hardware thread across the system?

Long-term, I'd like us to have NUMA-aware APIs as well, but for the moment, it'd be nice to have APIs for the simple case of "start as many hardware threads as the system has".

@retep998 (Member)

@joshtriplett You must explicitly assign the group affinity of a thread using SetThreadGroupAffinity. A thread can only be assigned to a single group at any given time. Rayon is not multi-group aware, and will only scale up to the 64 logical processors of the processor group the process is started on. Reporting information on logical processors in other groups should only be done in tandem with allowing users to run their threads on those other processor groups.

@joshtriplett (Member) commented Jul 20, 2020

@retep998 That's an unfortunate limitation, and it sounds like there's no way for us to do better automatically. In that case, yeah, we should go ahead and list the number of processors in the processor group. But separately, we should also try to offer a simple way to spawn threads on every CPU on the system.
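
For reference, a sketch of how the cross-group total could be queried on Windows, using GetActiveProcessorCount with ALL_PROCESSOR_GROUPS (this is not what the PR does, and threads would still need SetThreadGroupAffinity to actually run in other groups, as @retep998 describes):

// Windows-only sketch: total logical processors across all processor groups.
#[cfg(windows)]
mod all_groups {
    // Per the Win32 docs, 0xffff means "all processor groups".
    const ALL_PROCESSOR_GROUPS: u16 = 0xffff;

    #[link(name = "kernel32")]
    extern "system" {
        fn GetActiveProcessorCount(group_number: u16) -> u32;
    }

    pub fn logical_processors_all_groups() -> Option<u32> {
        // The call returns 0 on failure.
        match unsafe { GetActiveProcessorCount(ALL_PROCESSOR_GROUPS) } {
            0 => None,
            n => Some(n),
        }
    }
}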

@SimonSapin (Contributor)

A typical use of this function is to decide how many software threads to spawn for some parallel computation. Therefore what matters is not how much hardware exists somewhere, but how much of it can actually be used. Windows above 64 threads is an example where these two things are not the same.

Because of this, I feel that the function’s name should not include the word "hardware". Something like available_concurrency sounds more accurate.

@joshtriplett (Member) commented Jul 21, 2020 via email

@Amanieu (Member) commented Jul 21, 2020

I'm increasingly getting the feeling that this is a hard problem and that we will likely not end up with the best API (as seen with hindsight 2 years from now).

This is the kind of API that is best left to crates.io, where breaking changes can be made as needed by simply bumping the version number. The standard library API, on the other hand, must remain stable forever, which is unfortunate if we ever want to make a breaking change.

@joshtriplett (Member) commented Jul 21, 2020 via email

@SimonSapin (Contributor)

That seems reasonable. Another case where these might differ would be when running in a Linux cgroup that limits available CPUs.

Does the implementation in this PR return the appropriate number, when run from such a cgroup?
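
As a point of reference, reading a cgroup v1 CPU quota looks roughly like this (the paths assume the v1 hierarchy mounted at /sys/fs/cgroup; the PR itself does not do this):

use std::fs;

// Converts a cgroup v1 CFS quota into a usable worker count.
// Returns None when no quota is set (quota is -1) or the files are absent.
fn cgroup_v1_cpu_limit() -> Option<usize> {
    let quota: i64 = fs::read_to_string("/sys/fs/cgroup/cpu/cpu.cfs_quota_us")
        .ok()?
        .trim()
        .parse()
        .ok()?;
    let period: i64 = fs::read_to_string("/sys/fs/cgroup/cpu/cpu.cfs_period_us")
        .ok()?
        .trim()
        .parse()
        .ok()?;
    if quota <= 0 || period <= 0 {
        return None;
    }
    // Round up so e.g. a 1.5-CPU quota still yields 2 workers.
    Some(((quota + period - 1) / period) as usize)
}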

@retep998 (Member)

Another case where they might differ is on Windows when the process has a process affinity or is run inside a job object that constrains the affinity.

@bors (Contributor) commented Oct 16, 2020

@yoshuawuyts: 🔑 Insufficient privileges: not in try users

@JohnTitor (Member)

IIRC we don't have a FreeBSD builder on the try build, so let's just re-add this to the queue :)
@bors r=dtolnay

@bors (Contributor) commented Oct 16, 2020

📌 Commit 9508c321c66a26711f46ab795e26f4bef0bfb8c2 has been approved by dtolnay

@bors added the S-waiting-on-bors label and removed the S-waiting-on-author label on Oct 16, 2020
@ehuss (Contributor) commented Oct 16, 2020

@bors r-

This will still fail:

error[E0308]: mismatched types
   --> library/std/src/thread/available_concurrency.rs:115:27
    |
115 |                 if res == -1 {
    |                           ^^ expected `()`, found integer

error: aborting due to previous error

The semicolon in the unsafe block makes the res variable equal to ().
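
A self-contained illustration of that error (fake_syscall below is a hypothetical stand-in for the real libc call, used only to mirror the shape of the code):

// Stand-in for the libc call; unsafe only to mirror the original shape.
unsafe fn fake_syscall() -> i32 {
    -1
}

fn main() {
    // With the trailing semicolon, the block's value is `()`, so comparing it
    // against -1 produces the E0308 "expected `()`, found integer" error above.
    let res: () = unsafe { fake_syscall(); };
    let _ = res;

    // Without the semicolon, the block evaluates to the call's return value.
    let res = unsafe { fake_syscall() };
    assert_eq!(res, -1);
}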

If you have Docker, I would encourage trying to test using that.

@bors added the S-waiting-on-author label and removed the S-waiting-on-bors label on Oct 16, 2020
@RalfJung (Member) commented Oct 16, 2020

Alternatively, you can use Miri:

XARGO_RUST_SRC=~/src/path/to/rust/library cargo miri setup --target xxx

This does a check-only build, so no linker or build tools for the target are required. (This code does not seem to have any cfg(miri) in it that could affect the result here. Directly using xargo would avoid that, but it is a bit more work to set up.)

@yoshuawuyts (Member, Author) commented Oct 16, 2020

The semicolon in the unsafe block makes the res variable equal to ().

@ehuss Ah yeah, good catch. I thought that followed from an earlier error, but I got it wrong. That should be fixed now. Apparently I'd also missed that for OpenBSD, which is fixed now too.

If you have Docker, I would encourage trying to test using that.

Do we have instructions available on how to run this? The Windows instructions for the compiler ended up being based on my notes, and researching that took a fair bit of work. If we don't have instructions yet on how to build locally for the BSDs via Docker, we should probably open an issue for that somewhere (not sure if rust-lang/rust is the right repo?)

@ehuss (Contributor) commented Oct 16, 2020

Do we have instructions on how to run this available?

https://rustc-dev-guide.rust-lang.org/tests/intro.html#testing-with-docker-images

The command for FreeBSD is src/ci/docker/run.sh dist-x86_64-freebsd. Unfortunately there is no setup for OpenBSD. I have an OpenBSD VM I occasionally use for testing, but I have never built rustc on it, and I suspect it is not easy. Using Miri also sounds useful, though I don't have much experience with that.

@yoshuawuyts (Member, Author)

@RalfJung that seemed to work successfully for both FreeBSD and OpenBSD! Though from the output it seems like it may only have checked core but not std? For posterity, and for anyone trying this on Windows, the way I did it was the following:

PS C:\Users\yoshu\Code\rust> rustup override set nightly-x86_64-pc-windows-gnu # miri doesn't seem to work yet on msvc
PS C:\Users\yoshu\Code\rust> rustup component add miri
PS C:\Users\yoshu\Code\rust> $env:XARGO_RUST_SRC='C:\Users\yoshu\Code\rust\library'
PS C:\Users\yoshu\Code\rust> cargo miri setup --target x86_64-unknown-freebsd
PS C:\Users\yoshu\Code\rust> cargo miri setup --target x86_64-unknown-openbsd

@ehuss dang, that seems to assume a Linux environment, and I'm running on Windows. I don't think I'll be able to get that to work without provisioning a new environment (VM or otherwise), which I don't have the bandwidth for right now.

Given all known issues have been addressed at this point, can we give this a shot at running through bors again?

@RalfJung (Member) commented Oct 17, 2020

that seemed to work successfully for both FreeBSD and OpenBSD! Though from the output it seems like it may only have checked core but not std?

It should check core, std, and even test. And the output looks like this for me:

$ cargo miri setup --target x86_64-unknown-freebsd
   Compiling compiler_builtins v0.1.35
    Checking core v0.0.0 (/home/r/.rustup/toolchains/miri/lib/rustlib/src/rust/library/core)
   Compiling libc v0.2.79
   Compiling cc v1.0.60
   Compiling std v0.0.0 (/home/r/.rustup/toolchains/miri/lib/rustlib/src/rust/library/std)
   Compiling unwind v0.0.0 (/home/r/.rustup/toolchains/miri/lib/rustlib/src/rust/library/unwind)
    Checking rustc-std-workspace-core v1.99.0 (/home/r/.rustup/toolchains/miri/lib/rustlib/src/rust/library/rustc-std-workspace-core)
    Checking cfg-if v0.1.10
    Checking alloc v0.0.0 (/home/r/.rustup/toolchains/miri/lib/rustlib/src/rust/library/alloc)
    Checking rustc-demangle v0.1.16
    Checking panic_abort v0.0.0 (/home/r/.rustup/toolchains/miri/lib/rustlib/src/rust/library/panic_abort)
    Checking rustc-std-workspace-alloc v1.99.0 (/home/r/.rustup/toolchains/miri/lib/rustlib/src/rust/library/rustc-std-workspace-alloc)
    Checking panic_unwind v0.0.0 (/home/r/.rustup/toolchains/miri/lib/rustlib/src/rust/library/panic_unwind)
    Checking hashbrown v0.9.0
    Finished release [optimized] target(s) in 25.33s
    Checking rustc-std-workspace-std v1.99.0 (/home/r/.rustup/toolchains/miri/lib/rustlib/src/rust/library/rustc-std-workspace-std)
    Checking proc_macro v0.0.0 (/home/r/.rustup/toolchains/miri/lib/rustlib/src/rust/library/proc_macro)
    Checking term v0.0.0 (/home/r/.rustup/toolchains/miri/lib/rustlib/src/rust/library/term)
    Checking unicode-width v0.1.8
    Checking getopts v0.2.21
    Checking test v0.0.0 (/home/r/.rustup/toolchains/miri/lib/rustlib/src/rust/library/test)
    Finished release [optimized] target(s) in 1.79s

(Ignore the "Compiling" lines; that's a cargo bug: rust-lang/cargo#7921)

@ehuss (Contributor) commented Oct 18, 2020

@bors r=dtolnay

@bors (Contributor) commented Oct 18, 2020

📌 Commit 3717646 has been approved by dtolnay

@bors added the S-waiting-on-bors label and removed the S-waiting-on-author label on Oct 18, 2020
@bors (Contributor) commented Oct 18, 2020

⌛ Testing commit 3717646 with merge c38ddb8...

@bors (Contributor)

bors commented Oct 18, 2020

☀️ Test successful - checks-actions, checks-azure
Approved by: dtolnay
Pushing c38ddb8 to master...

@bors added the merged-by-bors label on Oct 18, 2020
@bors merged commit c38ddb8 into rust-lang:master on Oct 18, 2020
@rustbot added this to the 1.49.0 milestone on Oct 18, 2020
bors added a commit to rust-lang/miri that referenced this pull request on Oct 18, 2020
@yoshuawuyts deleted the hardware_threads branch on October 18, 2020 at 10:57
@yoshuawuyts changed the title Add std::thread::available_concurrency on Oct 5, 2021
Manishearth added a commit to Manishearth/rust that referenced this pull request Oct 6, 2021
…=m-ou-se

Rename `std::thread::available_concurrency` to `std::thread::available_parallelism`

_Tracking issue: https://github.com/rust-lang/rust/issues/74479_

This PR renames `std::thread::available_concurrency` to `std::thread::available_parallelism`.

## Rationale

The API was initially named `std::thread::hardware_concurrency`, mirroring the [C++ API of the same name](https://en.cppreference.com/w/cpp/thread/thread/hardware_concurrency). We eventually decided to omit any reference to the word "hardware" after [this comment](rust-lang#74480 (comment)). And so we ended up with `available_concurrency` instead.

---

For a talk I was preparing this week I was reading through ["Understanding and expressing scalable concurrency" (A. Turon, 2013)](http://aturon.github.io/academic/turon-thesis.pdf), and the following passage stood out to me (emphasis mine):

> __Concurrency is a system-structuring mechanism.__ An interactive system that deals with disparate asynchronous events is naturally structured by division into concurrent threads with disparate responsibilities. Doing so creates a better fit between problem and solution, and can also decrease the average latency of the system by preventing long-running computations from obstructing quicker ones.

> __Parallelism is a resource.__ A given machine provides a certain capacity for parallelism, i.e., a bound on the number of computations it can perform simultaneously. The goal is to maximize throughput by intelligently using this resource. For interactive systems, parallelism can decrease latency as well.

_Chapter 2.1: Concurrency is not Parallelism. Page 30._

---

_"Concurrency is a system-structuring mechanism. Parallelism is a resource."_ — It feels like this accurately captures the way we should be thinking about these APIs. What this API returns is not "the amount of concurrency available to the program" which is a property of the program, and thus even with just a single thread is effectively unbounded. But instead it returns "the amount of _parallelism_ available to the program", which is a resource hard-constrained by the machine's capacity (and can be further restricted by e.g. operating systems).

That's why I'd like to propose we rename this API from `available_concurrency` to `available_parallelism`. This still meets the criteria we previously established of not attempting to define what exactly we mean by "hardware", "threads", and other such words. Instead we only talk about "parallelism" as an abstract resource available to our program.

r? `@joshtriplett`