Document minimum precision guarantees for the various std::time APIs #32626
Comments
Should we guarantee precision or document which system APIs these things use?
I definitely agree this should be documented, at least for major platforms. If it's not documented, all you can do is test the precision, but then you have to make assumptions about future systems. If we guarantee a precision, that leaves open the question of what to do for a system that can't support such a precision. Panic?
We already can't guarantee any particular precision. Linux, for example, runs on processors with a wide variety of timing hardware, with all sorts of different precisions.
Perhaps we need another API to query the precision.
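No such API exists in std, but the idea can be approximated in user code: sample `Instant::now()` repeatedly and keep the smallest nonzero step observed. A rough sketch (the helper name `observed_resolution` is made up for illustration; this measures what the clock happens to report, not a guaranteed bound):

```rust
use std::time::{Duration, Instant};

/// Estimate the clock's observable tick size: spin until the reading
/// changes, repeat a few times, and keep the smallest step seen.
/// (Hypothetical helper -- std provides no precision-query API.)
fn observed_resolution(samples: u32) -> Duration {
    let mut best = Duration::MAX;
    for _ in 0..samples {
        let start = Instant::now();
        // Busy-wait until the clock advances at least one tick.
        let mut next = Instant::now();
        while next == start {
            next = Instant::now();
        }
        best = best.min(next - start);
    }
    best
}

fn main() {
    // On common desktop platforms this prints well under a millisecond.
    println!("observed Instant resolution: {:?}", observed_resolution(100));
}
```

The result varies by hardware, OS, and virtualization, which is exactly why a documented guarantee is hard to give.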
If it's not possible to provide any guarantees, we should at least document the precision for major (tier-1) platforms, and explicitly state that there are no guarantees for other platforms.
@crumblingstatue Does "tier-1 platform" include guest Linux on VMware? Its clock is known to be freaky. POSIX has …
@nodakai Hrrm. If truly nothing else can be done, then let's at least document the platform APIs we use on the major platforms, so the user doesn't have to look them up in the RFCs. That's at least more information about the precision than nothing at all.
Do we aim for at least some level of precision where possible?
We aim for the highest precision available on whatever platform you're running on.
@sfackler Then we could put that into the documentation. That's useful information.
I tried adding …
Triage: no changes I'm aware of.
Fix Instant/Duration math precision & associativity on Windows

**tl;dr** Addition and subtraction on Duration/Instant are not associative on Windows because we use the perf-counter frequency in every calculation instead of just when we measure time.

This is my first contribution (PR or issue) to Rust, so please let me know if I've done this wrong. I followed CONTRIBUTING to the extent I could, given that my system doesn't seem to be able to build the compiler with changes in the source tree. I also asked about this issue in #rust-beginners a week or so ago, before digging through libstd -- I'm unsure if there's a good way to follow up on that, but I'd be happy to update the docs on the timing structs if this fixes the problem.

## Issue

The `Duration` type keeps seconds in the upper 64 bits and nanoseconds in the lower 32 bits. In theory, doing math on these ought to be basically the same as doing math on any other 64- or 32-bit integral number. On Windows (and I think macOS too), however, our math gets messy because the `Instant` type stores the current point in time in units of performance-counter counts, not nanoseconds, and does unit conversions on every math operation rather than just when we measure the time from the system clock.

I tried this code:

```rust
use std::time::{Duration, Instant};

fn main() {
    let now = Instant::now();
    let offset = Duration::from_millis(5);
    assert_eq!((now + offset) - now, (now - now) + offset);
}
```

On UNIX machines (Linux and macOS) it behaves as you'd expect -- [no crash](https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=cf2206c0b7e07d8ecc7767a512364094).

On Windows hosts, however, it blows up because of a precision problem in the Instant +/- Duration math:

```
C:\Users\aberg\work\timetest (master -> origin)
λ cargo run
    Finished dev [unoptimized + debuginfo] target(s) in 0.02s
     Running `target\debug\timetest.exe`
thread 'main' panicked at 'assertion failed: `(left == right)`
  left: `4.999914ms`,
 right: `5ms`', src\main.rs:6:5
note: Run with `RUST_BACKTRACE=1` for a backtrace.
error: process didn't exit successfully: `target\debug\timetest.exe` (exit code: 101)

C:\Users\aberg\work\timetest (master -> origin)
λ cat src\main.rs
use std::time::{Duration, Instant};

fn main() {
    let now = Instant::now();
    let offset = Duration::from_millis(5);
    assert_eq!((now + offset) - now, (now - now) + offset);
}
```

On Windows I think this is a consequence of doing the perf-counter unit conversion on each math operation. I suspect the reason it works on Macs is that (from what I could find online) most Apple machines report timing in nanoseconds anyway. For anyone interested, the equivalent functions on macOS, with a little work to fish out the numerator/denominator from a timebase struct:

* `QueryPerformanceCounter()` -> `mach_absolute_time()`
* `QueryPerformanceFrequency()` -> `mach_timebase_info()`

If this PR ends up working as I expect it to when CI runs the tests, I can make the same changes to the macOS implementation.

## Potential Fix

We ought to be able to sort this out by storing nanoseconds rather than perf-counter units. That way, intermediate math is done in the most precise units we support, and we only do unit conversions when we actually measure the system clock (it might even translate to a small perf gain for people doing tons of Instant/Duration math). I believe this will address the underlying cause of rust-lang#56034, and make the guessed epsilon constant from rust-lang#56059 unnecessary.

If it's of interest, I can also write up how these timing types work on the tier-1 platforms to address rust-lang#32626, since I'm already in here figuring it out.

## This Patch

To that end, I've got this patch, which I think should fix it on Windows, but I'm having trouble testing it -- any time I change anything in libstd I start getting this error, which no amount of clean building seems to resolve:

```
C:\Users\aberg\work\rust (master -> origin)
λ python x.py test --stage 0 --no-doc src/libstd
Updating only changed submodules
Submodules updated in 0.27 seconds
    Finished dev [unoptimized] target(s) in 2.41s
Building stage0 std artifacts (x86_64-pc-windows-msvc -> x86_64-pc-windows-msvc)
    Finished release [optimized] target(s) in 6.78s
Copying stage0 std from stage0 (x86_64-pc-windows-msvc -> x86_64-pc-windows-msvc / x86_64-pc-windows-msvc)
Building stage0 test artifacts (x86_64-pc-windows-msvc -> x86_64-pc-windows-msvc)
   Compiling test v0.0.0 (C:\Users\aberg\work\rust\src\libtest)
error[E0460]: found possibly newer version of crate `std` which `getopts` depends on
  --> src\libtest\lib.rs:36:1
   |
36 | extern crate getopts;
   | ^^^^^^^^^^^^^^^^^^^^^
   |
   = note: perhaps that crate needs to be recompiled?
   = note: the following crate versions were found:
           crate `std`: \\?\C:\Users\aberg\work\rust\build\x86_64-pc-windows-msvc\stage0-sysroot\lib\rustlib\x86_64-pc-windows-msvc\lib\libstd-d7a80ca2ae113c97.rlib
           crate `std`: \\?\C:\Users\aberg\work\rust\build\x86_64-pc-windows-msvc\stage0-sysroot\lib\rustlib\x86_64-pc-windows-msvc\lib\std-d7a80ca2ae113c97.dll
           crate `getopts`: \\?\C:\Users\aberg\work\rust\build\x86_64-pc-windows-msvc\stage0-test\x86_64-pc-windows-msvc\release\deps\libgetopts-ae40a96de5f5d144.rlib

error: aborting due to previous error

For more information about this error, try `rustc --explain E0460`.
error: Could not compile `test`.

To learn more, run the command again with --verbose.

command did not execute successfully: "C:\\Users\\aberg\\work\\rust\\build\\x86_64-pc-windows-msvc\\stage0\\bin\\cargo.exe" "build" "--target" "x86_64-pc-windows-msvc" "-j" "12" "--release" "--manifest-path" "C:\\Users\\aberg\\work\\rust\\src/libtest/Cargo.toml" "--message-format" "json"
expected success, got: exit code: 101
failed to run: C:\Users\aberg\work\rust\build\bootstrap\debug\bootstrap test --stage 0 --no-doc src/libstd
Build completed unsuccessfully in 0:00:20
```

Since you wrote the linked PRs and might remember looking at related problems: r? @alexcrichton
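The precision loss the PR describes can be reproduced with plain integer math. This sketch (the frequency value is an arbitrary example, not a real measurement) converts a 5 ms duration to performance-counter ticks and back, the way per-operation unit conversion would:

```rust
fn main() {
    // Hypothetical ticks-per-second frequency, as QueryPerformanceFrequency
    // might report. Real values vary by machine; any frequency that doesn't
    // divide 1e9 evenly exhibits the problem.
    let freq: u64 = 2_143_482;
    let ns: u64 = 5_000_000; // 5 ms

    // ns -> ticks: integer division floors away the fractional tick.
    let ticks = ns * freq / 1_000_000_000;
    // ticks -> ns: the lost fraction cannot be recovered.
    let roundtrip = ticks * 1_000_000_000 / freq;

    assert!(roundtrip < ns); // no longer exactly 5 ms
    println!("{} ns became {} ns after one tick round trip", ns, roundtrip);
}
```

Doing this conversion once per arithmetic operation compounds the error, which is why the PR proposes converting only at the moment the clock is read.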
@bearcage I see that you mentioned this issue in your PR! Are you still interested in picking this up?
@alexcrichton @aturon @dtolnay @KodrAus @SimonSapin @withoutboats @sfackler @Kimundi @BurntSushi @Amanieu Does the library team want to establish a minimum guaranteed precision for the std::time APIs, or would it rather have the platform-specific API calls mentioned? Otherwise we can close this issue! I personally think that listing platform-specific APIs is not a good idea, since they might change over time and are a hassle for the docs team to maintain. I don't really think a minimum precision is required either, but one can be added if desired. P.S. I pinged everyone on the libs team individually (I heard team pings don't work anymore); is there a convention to follow when wanting to contact the libs team about a particular issue?
I think, at the bare minimum, the precision with respect to reading time from the system should probably be documented (or rather, I don't think anyone would object to that). For example, on Windows you get something like 10 ns precision between taking timing samples, whereas on Linux it's 1 ns (IIRC). The precision of … I think one of the main remaining questions is what the precision of … is.
@alexcrichton Thanks for your response! So if I understood you correctly, the task for this issue is: figure out the underlying API calls for SystemTime and document the precision for each platform. Do I understand this correctly? I am currently abroad, but I can probably create a PR for this once I get home. I see that the current std::time::Duration page already states that it operates at nanosecond precision, so we don't have to change its documentation.
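For reference, `Duration` itself is exact at nanosecond granularity on every platform; only the clock readings behind `Instant` and `SystemTime` vary. A quick demonstration of the type's documented representation:

```rust
use std::time::Duration;

fn main() {
    // Duration stores whole seconds (u64) plus sub-second nanoseconds (u32).
    let d = Duration::new(2, 500_000_000); // 2.5 s
    assert_eq!(d.as_secs(), 2);
    assert_eq!(d.subsec_nanos(), 500_000_000);

    // Arithmetic on Durations is exact down to the nanosecond...
    let sum = Duration::from_nanos(1) + Duration::from_millis(5);
    assert_eq!(sum, Duration::from_nanos(5_000_001));
    // ...so any observed imprecision comes from the clock, not the type.
}
```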
I believe so, yes. As usual, though, the documentation needs to be careful to capture what's discussed in this thread: we currently call some particular APIs, but we don't guarantee those APIs will always be called, and system configuration may mean the API calls are less precise (etc., etc.).
…alls, r=rkruppe

Added table containing the system calls used by Instant and SystemTime.

# Description

See rust-lang#32626 for a discussion on documenting system calls used by Instant and SystemTime.

## Changes

- Added a table containing the system calls used by each platform.

EDIT: If I can format this table better (due to the large links), please let me know. I'd also be happy to learn a quick command to generate the docs on my host machine! Currently I am using `python x.py doc --stage 0 src/libstd`, but that gives me some `unrecognized intrinsic` errors. Advice is always welcome :)

closes rust-lang#32626
#29894 discusses which system APIs std::time uses to query time on the various platforms, but that is an implementation detail.
The API documentation should define minimum precision guarantees for these APIs, so the user can make an informed decision whether using these APIs will provide sufficient precision for their use case on all supported platforms.
For example, if a user wanted to use these APIs to limit framerate in a video game, second precision wouldn't cut it, but millisecond precision should be sufficient for most cases, so if these APIs guarantee at least millisecond precision, then they can be used for this purpose.
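The frame-rate-limiting pattern described above can be sketched as follows, assuming millisecond-or-better precision from `Instant` and `thread::sleep` (the 60 fps target and frame count are example values):

```rust
use std::thread;
use std::time::{Duration, Instant};

fn main() {
    // Target ~60 fps: a budget of 1/60 s per frame (~16.67 ms).
    let frame_budget = Duration::from_secs(1) / 60;
    let start = Instant::now();

    for _frame in 0..3 {
        let frame_start = Instant::now();

        // ... render / update game state here ...

        // Sleep away whatever remains of this frame's budget.
        let elapsed = frame_start.elapsed();
        if elapsed < frame_budget {
            thread::sleep(frame_budget - elapsed);
        }
    }

    // sleep() waits at least the requested time, so three frames
    // take at least three budgets' worth of wall-clock time.
    assert!(start.elapsed() >= frame_budget * 3);
    println!("3 frames in {:?}", start.elapsed());
}
```

This only works if `Instant` delivers sub-millisecond readings; with second-granularity clocks the budget arithmetic would collapse to zero, which is exactly the kind of failure a documented minimum precision would rule out.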