[DRAFT] Durations are now zeptosecond counters (1e-21 second) #326

Open
wants to merge 16 commits into base: master

Conversation

ChristopherRabotin
Member

Status: draft

Remaining work:

  • Fix unit tests
  • Fix integration tests
  • Add unit kinds, including various second prefixes
  • Add const fn for Duration initializers of common units (centuries, days, seconds, nanoseconds) -- all non-integer initializers must use the Unit enum
  • Major documentation update

This PR greatly simplifies the Duration structure. Instead of having a counter of centuries and nanoseconds within that century, the Duration is now a signed zeptosecond counter on a single i128. Yes, this uses an extra 128 - (16+64) = 48 bits (6 bytes), but it dramatically increases the speed of computations and significantly reduces the potential bugs in the Duration structure.
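
For illustration, a minimal sketch of the representation change (field names and types follow the 16 + 64 bit arithmetic above; derives and the rest of the API are omitted):

// Before: a two-part counter (16 + 64 bits), which required normalization logic.
struct DurationOld {
    centuries: i16,
    nanoseconds: u64,
}

// After: a single signed counter in zeptoseconds (1e-21 s) on 128 bits.
struct DurationNew {
    zeptoseconds: i128,
}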

@gwbres
Collaborator

gwbres commented Aug 13, 2024

I'm initiating zepto branches in all my GNSS crates (😆). To serve as a beta tester and verifier, I will need a Nyx branch and an ANISE branch that point to this very branch, because the position solver needs all three of them ;)

@ChristopherRabotin
Member Author

A real power user! I'll let you know when it's ready to test. Honestly, I can't imagine this being too much work: converting Duration to that zeptosecond counter took me very little time, but there seems to be a bug in the unit-based operations from the work I did last night.

@gwbres
Collaborator

gwbres commented Aug 13, 2024

A real power user! I'll let you know when it's ready to test. Honestly, I can't imagine this being too much work: converting Duration to that zeptosecond counter took me very little time, but there seems to be a bug in the unit-based operations from the work I did last night.

The proof of quality work ;) No hurry, I have many other paths to investigate.

ChristopherRabotin and others added 3 commits August 14, 2024 09:55
These changes will introduce some breaking changes for sure
Signed-off-by: Guillaume W. Bres <guillaume.bressaix@gmail.com>
@gwbres
Collaborator

gwbres commented Aug 24, 2024

I see that you are removing .centuries from Epoch.
So we'll have 2^128 * 1E-21 seconds, which is about 10.79 Giga years... starting from ??
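
For reference, a back-of-the-envelope check of that figure (illustrative only, not library code):

// The full i128 span is 2^128 zeptoseconds, i.e. about 10.8 billion years in total,
// or roughly ±5.4 billion years around the reference epoch for a signed counter.
fn main() {
    let span_seconds = 2_f64.powi(128) * 1e-21; // ≈ 3.40e17 s
    let span_years = span_seconds / (365.25 * 86_400.0); // ≈ 1.08e10 years
    println!("{span_years:.3e} years");
}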

Signed-off-by: Guillaume W. Bres <guillaume.bressaix@gmail.com>
@ChristopherRabotin
Member Author

Thanks for helping on this branch! I was about to pick up the work today, so the multiple time zones really help!

I see that you are removing .centuries from Epoch. So we'll have 2^128 * 1E-21 seconds, which is about 10.79 Giga years... starting from ??

Good point... from the reference epoch of each time scale. That's what I had in mind. One limitation is that it means the "minimum possible duration" may be defined in one time scale but not the other. One comment that burntsushi made in the discussions a few months ago is that a lot of the operations in hifitime that could fail don't return an error; instead, they return a saturated min or max. Maybe the in_time_scale function should return an error if that duration cannot be converted to the other one?

@gwbres
Collaborator

gwbres commented Aug 25, 2024

Thanks for helping on this branch! I was about to pick up the work today, so the multiple time zones really help!

work efficiency 🤣

I see that you are removing .centuries from Epoch. So we'll have 2^128 * 1E-21 seconds, which is about 10.79 Giga years... starting from ??

Good point... from the reference epoch of each time scale. That's what I had in mind. One limitation is that it means the "minimum possible duration" may be defined in one time scale but not the other.

Not sure I understand why: they all have a T0, right? And this PR gives dt = 1E-21 s as the smallest dt.

One comment that burntsushi made in the discussions a few months ago is that a lot of the operations in hifitime that could fail don't return an error; instead, they return a saturated min or max. Maybe the in_time_scale function should return an error if that duration cannot be converted to the other one?

It all comes down to the behavior you intend to create.
Saturating and never failing can make for a robust computer, with tiny errors in some scenarios (embedded?).
Managing all errors is tedious yet mathematically accurate (simulation?).
As far as the time scale conversion goes, any impossible operation should be forbidden: it is very different from an internal calculation error; this is a physical error.
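
To make the trade-off concrete, a minimal sketch of the two behaviors using plain i128 arithmetic (the function names are illustrative, not hifitime's API):

// Saturating: never fails, but silently clamps to the representable range.
fn shift_saturating(zeptoseconds: i128, offset_zs: i128) -> i128 {
    zeptoseconds.saturating_add(offset_zs)
}

// Checked: returns None when the shifted value is not representable,
// forcing the caller to handle the impossible conversion explicitly.
fn shift_checked(zeptoseconds: i128, offset_zs: i128) -> Option<i128> {
    zeptoseconds.checked_add(offset_zs)
}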

@ChristopherRabotin
Member Author

Hmm, I didn't quite expect this problematic behavior: an f64 is not precise enough to convert to the i128 zeptosecond counter without loss of precision.

For example, when initializing Duration::from_seconds(10.1), the current code computes the scaling factor as an f64, multiplies the input value by it, and casts the result to zeptoseconds. However, there is a rounding issue:

[src/timeunits.rs:339:40] factor_zs as f64 = 1e21
[src/timeunits.rs:339:31] q * dbg!(factor_zs as f64) = 1.01e22
[src/timeunits.rs:338:13] Duration { zeptoseconds: dbg!(q * dbg!(factor_zs as f64)) as i128 } = Duration {
    zeptoseconds: 10099999999999998951424,
}

Current code:

fn mul(self, q: f64) -> Duration {
    let factor_zs = match self {
        Unit::Century => NANOSECONDS_PER_CENTURY * ZEPTOSECONDS_PER_NANOSECONDS,
        Unit::Week => DAYS_PER_WEEK_I128 * NANOSECONDS_PER_DAY * ZEPTOSECONDS_PER_NANOSECONDS,
        Unit::Day => NANOSECONDS_PER_DAY * ZEPTOSECONDS_PER_NANOSECONDS,
        Unit::Hour => NANOSECONDS_PER_HOUR * ZEPTOSECONDS_PER_NANOSECONDS,
        Unit::Minute => NANOSECONDS_PER_MINUTE * ZEPTOSECONDS_PER_NANOSECONDS,
        Unit::Second => NANOSECONDS_PER_SECOND * ZEPTOSECONDS_PER_NANOSECONDS,
        Unit::Millisecond => NANOSECONDS_PER_MILLISECOND * ZEPTOSECONDS_PER_NANOSECONDS,
        Unit::Microsecond => NANOSECONDS_PER_MICROSECOND * ZEPTOSECONDS_PER_NANOSECONDS,
        Unit::Nanosecond => ZEPTOSECONDS_PER_NANOSECONDS,
        Unit::Picosecond => ZEPTOSECONDS_PER_PICOSECONDS,
        Unit::Femtosecond => ZEPTOSECONDS_PER_FEMPTOSECONDS,
        Unit::Attosecond => ZEPTOSECONDS_PER_ATTOSECONDS,
        Self::Zeptosecond => 1,
    };

    // Bound checking to prevent overflows
    if q >= f64::MAX / (factor_zs as f64) {
        Duration::MAX
    } else if q <= f64::MIN / (factor_zs as f64) {
        Duration::MIN
    } else {
        dbg!(Duration {
            zeptoseconds: dbg!(q * dbg!(factor_zs as f64)) as i128,
        })
    }
}

Let me try to figure this out. I think I may need to look at the exponent of the input to try to convert to an integer as soon as possible.
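
A minimal standalone reproduction of the rounding above, assuming typical IEEE-754 f64 arithmetic (this is not hifitime code): neither 10.1 nor the exact product 1.01e22 fits in the 53-bit f64 significand, so the cast cannot recover the exact zeptosecond count.

fn main() {
    let q = 10.1_f64;
    let zs = (q * 1e21) as i128;
    // The product rounds to 10_099_999_999_999_998_951_424 instead of 1.01e22 exactly.
    assert_ne!(zs, 10_100_000_000_000_000_000_000_i128);
}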

@ChristopherRabotin
Member Author

ChristopherRabotin commented Aug 26, 2024

I think that this problem can be solved using a GCD algorithm like the one I introduced here (for hifitime v2): https://github.com/dnsl48/fraction/pull/37/files#diff-34e20107d75000e9bae6843d1fa9f42adf14f11c2334d8e32a98321010c2a194R2306 . This should only be called when initializing from an f64, and only if my_value_as_f64.fract() > 0.0.

Current code: https://github.com/dnsl48/fraction/blob/17649f7ff9f0509cfd52fe1ecaa4d97b1cddf851/src/fraction/generic_fraction.rs#L1191
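
For reference, a minimal sketch of the GCD/reduction step mentioned above (illustrative only, not the fraction crate's implementation): once the f64 is expressed as an integer numerator over a power-of-ten denominator, the ratio is reduced before the zeptosecond count is computed.

// Euclid's algorithm; gcd(x, 0) == x.
fn gcd(mut a: u128, mut b: u128) -> u128 {
    while b != 0 {
        let r = a % b;
        a = b;
        b = r;
    }
    a
}

// Reduce numerator/denominator to lowest terms before any further integer math.
fn reduce(numerator: u128, denominator: u128) -> (u128, u128) {
    let g = gcd(numerator, denominator);
    (numerator / g, denominator / g)
}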

@ChristopherRabotin
Member Author

That approach worked out of the box! Now I have another issue in DurationParts where the to_unit shows some rounding errors:

[src/duration/mod.rs:595:13] dt = Duration {
    zeptoseconds: 86400000000099000000000000,
}
[src/duration/parts.rs:63:32] value.to_unit(Unit::Picosecond) = 1000.0000000000001
[src/duration/parts.rs:63:27] dbg!(value.to_unit(Unit::Picosecond)).floor() = 1000.0
thread 'duration::ut_duration::test_serdes' panicked at src/duration/mod.rs:596:13:
assertion `left == right` failed
  left: "\"1 day 99 ns\""
 right: "\"1 day 98 ns 1000 ps\""
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

I think that the to_unit function may need to return integers, but I'm not sure: I make heavy use of to_seconds and to_unit() as f64 values. Maybe the subdivision function should return the integer and be used in the impl From<Duration> for DurationParts.
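
A minimal sketch of that integer-only subdivision idea (constant names are illustrative, not necessarily hifitime's): decomposing the zeptosecond counter with div/rem avoids the f64 round-trip that produced the 1000.0000000000001 above.

// Split the remainder into whole nanoseconds and whole picoseconds.
fn split_ns_ps(zeptoseconds: i128) -> (i128, i128) {
    const ZEPTOS_PER_NS: i128 = 1_000_000_000_000; // 1e12
    const ZEPTOS_PER_PS: i128 = 1_000_000_000; // 1e9
    let ns = zeptoseconds / ZEPTOS_PER_NS;
    let ps = (zeptoseconds % ZEPTOS_PER_NS) / ZEPTOS_PER_PS;
    (ns, ps)
}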

@gwbres
Collaborator

gwbres commented Aug 26, 2024

Hmm, I didn't quite expect this problematic behavior: an f64 is not precise enough to convert to the i128 zeptosecond counter without loss of precision.

Makes sense: it only has 52 bits for the fractional part, but more importantly:

[image]

@ChristopherRabotin
Member Author

The approach I implemented last night works, but it's significantly slower than the current version of Duration. This slowness is especially obvious with the "convert f64 to duration" benchmark, which converts 6311433599.999999 seconds to a Duration.

Master: https://github.com/nyx-space/hifitime/actions/runs/10277826855 --> ~ 5 nanoseconds
Current branch: https://github.com/nyx-space/hifitime/actions/runs/10553768831?pr=326 --> ~ 65 nanoseconds (or 13 times slower)

This is most likely due to the loop {} in the initialization. I think that the approach should be revised to take larger steps: currently, the code multiplies by 10 at each iteration and checks whether the value still has a fractional part. Instead, it should multiply by 1000 (or even 1_000_000) and check. These larger steps would reduce the number of iterations from 6 to 2 when initializing durations with 1 microsecond precision from an f64.
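
A minimal sketch of that larger-step loop (illustrative only: simplified constants, no handling of negative, non-finite, or out-of-range inputs):

fn f64_seconds_to_zeptoseconds(q: f64) -> i128 {
    const ZEPTOS_PER_SECOND: i128 = 1_000_000_000_000_000_000_000; // 1e21
    let mut scaled = q;
    let mut power: u32 = 0;
    // Step by 1000 instead of 10 to cut the number of iterations.
    while scaled.fract() != 0.0 && power < 21 {
        scaled *= 1000.0;
        power += 3;
    }
    // `scaled` is now (approximately) an integer; the rest is exact integer math.
    (scaled as i128) * (ZEPTOS_PER_SECOND / 10_i128.pow(power))
}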

The benchmarks are about ~2x as slow for the proposed implementation. This is especially surprising for the "Duration add and assert day hour" test, because both of these should be pure integer duration conversions. @gwbres, any idea what could be the issue there? I'll dive more into it tonight.

@gwbres
Collaborator

gwbres commented Aug 26, 2024

The benchmarks are about ~2x as slow for the proposed implementation. This is especially surprising for the "Duration add and assert day hour" test, because both of these should be pure integer duration conversions. @gwbres, any idea what could be the issue there? I'll dive more into it tonight.

If I'm following correctly, you are talking about pure integer operations, not .to_seconds() like just above.
It is normal for this PR to be slower, even for pure integer operations: on 64-bit platforms, I presume at least twice as many operations are required to handle a 128-bit number.

@gwbres
Collaborator

gwbres commented Aug 26, 2024

The benchmarks are about ~2x as slow for the proposed implementation. This is especially surprising for the "Duration add and assert day hour" test, because both of these should be pure integer duration conversions. @gwbres, any idea what could be the issue there? I'll dive more into it tonight.

If I'm following correctly, you are talking about pure integer operations, not .to_seconds() like just above. It is normal for this PR to be slower, even for pure integer operations: on 64-bit platforms, I presume at least twice as many operations are required to handle a 128-bit number. But this is very hardware dependent: if you have a SIMD-capable platform, 128-bit (or even wider) operations can be supported natively (std::simd).
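
For intuition, a minimal sketch of why 128-bit integer math roughly doubles the work on a 64-bit target (illustrative; the compiler generates the equivalent automatically for i128):

// A 128-bit addition expressed as two 64-bit additions plus carry propagation.
fn add_128_as_u64_pairs(a: (u64, u64), b: (u64, u64)) -> (u64, u64) {
    let (lo, carry) = a.0.overflowing_add(b.0);
    let hi = a.1.wrapping_add(b.1).wrapping_add(carry as u64);
    (lo, hi) // (low 64 bits, high 64 bits), wrapping like u128::wrapping_add
}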

@ChristopherRabotin
Member Author

ChristopherRabotin commented Aug 26, 2024

Yes, that actually makes sense, and I had not considered it. In the long run, it most likely prevents a lot of bugs, but is it worth the 2x slowdown for operations on durations that are less than one century long? (Durations of more than one century mean that the current centuries counter is also used, so we would have two integer operations plus the normalization operation.)

SIMD seems to be a while away from stable Rust: rust-lang/rust#86656.

Edit: if most operations are in floating point, and if we're already using 128 bits to represent a duration, would it make sense to just use a 128-bit floating-point counter once f128 becomes part of Rust core?

@gwbres
Collaborator

gwbres commented Aug 26, 2024

Edit: if most operations are in floating point, and if we're already using 128 bits to represent a duration, would it make sense to just use a 128-bit floating-point counter once f128 becomes part of Rust core?

To be more accurate, it is incorrect to say most operations are floating point, right? The current infrastructure uses fixed point at its very core. Unless you are talking about something else.

Are you suggesting introducing f128 to be "on par" with i128 (proposed here), or moving everything to f128 (moving away from a fixed-point core)?

F128 won't buy you anything in terms of performance: it will either be as slow as the fixed point proposed right here, or possibly slower, unless the floating-point unit can manage it and it ends up faster than i128. But that is, once again, platform dependent.

The only thing f128 buys you is the dynamic range of the floating-point format, and it raises the 15-17 significant-digit limitation you currently have, making 1E-21 reachable. A last option would be i128 for the integer part and f128 for the fractional part, which would be the worst case in terms of performance.

And what about making this an "option"? It is not uncommon for core libraries to offer a speed/accuracy trade-off.
Some that come to mind are the widespread FFTW, or Liquid DSP, which uses FFTW as well. For example, in my laboratory applications we would typically use the accuracy implementation and accept the slower calculation. I just don't know what a hassle it might be to maintain two implementations.

@ChristopherRabotin
Member Author

To be more accurate, it is incorrect to say most operations are floating point, right? The current infrastructure uses fixed point at its very core. Unless you are talking about something else.

Correct, everything within hifitime is an integer operation. I was thinking about the interface to hifitime, where users probably pass in a floating-point number of seconds because that's more common (at least in Nyx).

F128 won't buy you anything in terms of performance.
Then let's stick with i128. I prefer the explicitness of integer operations.

And what about making this an "option"
That sounds like a lot of work so maybe for the future ;-)

@ChristopherRabotin
Member Author

Switching to increments of 1000 when converting from f64 does not significantly improve speed: https://github.com/nyx-space/hifitime/actions/runs/10569484722?pr=326 .

@ChristopherRabotin
Member Author

ChristopherRabotin commented Aug 27, 2024

Hmm, here's another issue I noticed: if the number cannot be exactly represented as an f64, then the proposed initialization leads to a rounding error at the maximum precision.

For example, the integration tests check that (1.0/3.0).hours() is exactly 20 minutes (which would not hold with floating-point values). In this case, though, there is a rounding error and the duration shows up as 19 min 59 s 999 ms 999 us 999 ns 999 ps 880 fs. The 880 femtoseconds are a rounding artifact. I need to figure out how to identify that when initializing a duration from a floating-point value and correct it.
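
A minimal worked example of where those 880 fs come from, assuming a decimal-scaling initialization like the loop sketched earlier (constants are illustrative): 1/3 survives only about 16 decimal digits in an f64, so the scaled integer is 3333333333333333, which lands 120 fs short of exactly 20 minutes.

fn main() {
    // 1.0 / 3.0 scaled by 10^16 and truncated to an integer:
    let scaled: i128 = 3_333_333_333_333_333;
    // Zeptoseconds per hour divided by the 10^16 scaling factor:
    let zs_per_scaled_unit: i128 = 3_600 * 1_000_000_000_000_000_000_000 / 10_i128.pow(16);
    let zs = scaled * zs_per_scaled_unit;
    let exact_20_min: i128 = 1_200 * 1_000_000_000_000_000_000_000;
    // 120_000_000 zs = 120 fs, which prints as "... 999 ps 880 fs".
    assert_eq!(exact_20_min - zs, 120_000_000);
}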

I think I can handle this by adding one picosecond after dropping the extra 880 fs of precision, which is an artifact. In this specific case, the power of ten needed before the f64 could be represented as an integer was 16... and pico is 1e-12, so I'm not sure how to reconcile dropping the 880 fs (at 1e-15) with that power of ten of 16.

We'll also definitely need tests that initialize attoseconds and zeptoseconds from their f64 representations and ensure that no precision is lost.

As usual, something that seemed like a quick change turns out to not be one.

@gwbres
Collaborator

gwbres commented Aug 27, 2024

Then let's stick with i128. I prefer the explicitness of integer operations.

at least your error is constant

For example, the integration tests check that (1.0/3.0).hours() is exactly 20 minutes (which would not hold with floating-point values). In this case, though, there is a rounding error and the duration shows up as 19 min 59 s 999 ms 999 us 999 ns 999 ps 880 fs. The 880 femtoseconds are a rounding artifact. I need to figure out how to identify that when initializing a duration from a floating-point value and correct it.

The problem being that the error in the floating-point format is not constant and depends on the input number.
Is that what f64.is_finite() tells you? Or is it simply a null remainder in a Euclidean division?

I think I can handle this by adding one picosecond after dropping the extra 880 fs of precision, which is an artifact. In this specific case, the power of ten needed before the f64 could be represented as an integer was 16... and pico is 1e-12, so I'm not sure how to reconcile dropping the 880 fs (at 1e-15) with that power of ten of 16.

Depending on the number of bits of the input floating-point number, you have to keep in mind where the smallest fractional digit is. In f32 I think you only have about 6 digits; apparently f64 buys you about 15 digits, etc.

We'll also definitely need tests that initialize attoseconds and zeptoseconds from their f64 representations and ensure that no precision is lost.

👍

As usual, something that seemed like a quick change turns out to not be one.

Yeah, I'm afraid so. Yet I don't think it is reasonable to assume you can double the amount of information and expect the "performance" not to degrade by at least a factor of 2.

Sorry, I hit "Edit" instead of "Reply" 🤣 is that even possible?

@ChristopherRabotin
Member Author

Guillaume, I may need to postpone this for a few weeks to focus on other more urgent work. So I'll try another few changes this evening, but if I can't fix this precision issue, specifically for the (1.0/3.0).hours() call, then I will postpone it.

@gwbres
Collaborator

gwbres commented Aug 27, 2024

No worries on my side, this is not a priority

@gwbres
Collaborator

gwbres commented Dec 12, 2024

Have you thought about how to approach and schedule this topic, Chris?
A hifitime upgrade on its own is kind of pointless to us; it's a whole Nyx upgrade that we're interested in.
Currently we are more interested in newer Nyx tags than in this very topic (basically aligning to ANISE).
Also, keep in mind that, if you can provide branches for this topic, we can serve as beta testers.

PS: not sure who your new contributor is, but that looks like a bot to me 😆

@ChristopherRabotin
Member Author

Ha, yes, I'm pretty sure that's a bot: I was surprised to get the approval email!

Concerning the zeptosecond PR, this is at the very bottom of my priority list, and I see it mostly as a fun exercise. In my day-to-day use of ANISE, even nanoseconds are too precise to be useful. My objective is to ship a stable version 2.0.0 of Nyx by the end of the year with all related dependencies locked in, and then prevent the dependency hell I know you've experienced.

Development

Successfully merging this pull request may close these issues.

Increase precision of Duration to cover 10^-29 seconds to 10^28 seconds