Refactor testing framework into a libm-test #198
Conversation
This proc macro takes a macro of the form:

```rust
macro_rules! nop {
    (
        id: $id:ident;
        arg_tys: $($arg_tys:ty),*;
        arg_ids: $($arg_ids:ident),*;
        ret: $ty:ty;
    ) => {};
}
```

as input and expands it on every public libm API. This facilitates generating tests, benchmarks, etc. for all APIs.
Currently, only the randomly generated tests from the old build.rs are generated.
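As a rough sketch of how such a callback might be used to generate the random tests (the exact invocation syntax and the use of the `rand` crate here are assumptions, not the PR's actual code; only the `for_each_api` name appears later in this thread):

```rust
// Hypothetical callback macro, following the `nop` shape above: for each API it
// receives the function id, argument types/identifiers, and return type, and
// emits a #[test] that calls the libm function on random inputs.
macro_rules! random_test {
    (
        id: $id:ident;
        arg_tys: $($arg_tys:ty),*;
        arg_ids: $($arg_ids:ident),*;
        ret: $ret:ty;
    ) => {
        #[test]
        fn $id() {
            for _ in 0..1_000 {
                $(let $arg_ids: $arg_tys = rand::random();)*
                let _: $ret = libm::$id($($arg_ids),*);
            }
        }
    };
}

// libm_analyze::for_each_api!(random_test); // assumed invocation style
```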
// Some interesting random values
We should think properly about adding positive and negative NAN, MIN_POSITIVE, MAX, and the subnormals, which are 1.401298464324817070923730e-45f32 and 4.9406564584124654e-324f64.
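For concreteness, a sketch of such an "interesting values" list (f32 only; the f64 set would be analogous, and which values the crate actually picks is of course up to this PR):

```rust
// Candidate special values for f32 test inputs (illustrative, not the PR's list).
const INTERESTING_F32: &[f32] = &[
    0.0, -0.0,
    f32::NAN, -f32::NAN,                   // positive and negative NaN
    f32::INFINITY, f32::NEG_INFINITY,
    f32::MIN_POSITIVE, -f32::MIN_POSITIVE, // smallest normal magnitudes
    f32::MAX, f32::MIN,
    1.401298464324817070923730e-45_f32,    // smallest positive subnormal
    -1.401298464324817070923730e-45_f32,
];
```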
I tried to keep the tests testing exactly the same thing as in the old build.rs, to keep them from breaking on CI due to too many new issues - there were already enough simpler-to-fix issues being raised by libm-analyze. I think we should move to something cleverer here as we fix the tests.
I've added some of these values, but haven't added the subnormals yet; I thought MIN_POSITIVE and -MIN_POSITIVE were the two subnormal values. Is that incorrect?
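As an aside (not code from this PR): MIN_POSITIVE is the smallest positive normal value, and everything strictly between it and zero is subnormal, which a quick check confirms:

```rust
use std::num::FpCategory;

fn main() {
    // f32::MIN_POSITIVE is normal; values below it (but above zero) are subnormal.
    assert_eq!(f32::MIN_POSITIVE.classify(), FpCategory::Normal);
    assert_eq!((f32::MIN_POSITIVE / 2.0).classify(), FpCategory::Subnormal);
    assert_eq!(1.401298464324817e-45_f32.classify(), FpCategory::Subnormal);
}
```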
The FMA implementation is quite broken. I've removed nexttoward/nexttowardf for similar reasons. I don't think we can implement these correctly in Rust without an f80 floating-point type with a 64-bit significand.
I'm sorry, I just can't keep reviewing multi-thousand-line complete reorganizations of crates. Can this be split up? After a skim, some changes I would prefer not to happen are:
I again do not want to lose the current functionality of the tests, which I'm afraid a refactoring like this could. Can all these changes be disentangled from one another to avoid having a massive blocking PR?
Also, if things like the fma implementation are broken, can a separate PR be used to track that? It gets very difficult to manage this when so many fixes are piled in one place. Additionally, I do not want to add clippy to CI here; it seems like it will cause endless maintenance headaches over time as clippy lints change.
This change does not lose any tests AFAICT: all APIs are automatically tested, and compilation fails if one is not. One can ignore the results of the tests for APIs that happen to be broken, but all original tests pass here.
That's being discussed in #195 so I'd rather wait to see how that resolves there.
Sure.
@alexcrichton so PRs #203, #204, #205, #206, #207, #208, and #209 are the groundwork for this. After those get resolved, I'll rebase here and see what the next steps are. The first step would be to just submit a PR adding the
Ok, thanks!
cc @alexcrichton
This PR removes the old testing framework and instead adds two new crates, libm-analyze and libm-test.
libm-analyze provides a proc macro that expands its argument for each of the public functions in the libm crate, passing it each function's signature:
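Concretely, for a single API such as sqrt, the callback is invoked roughly like this (the nop callback and fragment layout follow the example at the top of this PR; the concrete API shown is just for illustration):

```rust
// Conceptual per-API expansion of the callback shown earlier (illustrative).
nop! {
    id: sqrt;
    arg_tys: f64;
    arg_ids: x;
    ret: f64;
}
```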
This is then re-used by the libm-test crate to implement the random tests for each of the libm APIs that were previously provided by the build script. It uses the same proc macro to implement exhaustiveness tests à la #186, and it replaces most of libm-bench with it as well.

The libm_analyze::for_each_api macro does a sanity check of the libm API when the libm-analyze tests run. It flags dozens and dozens of issues (missing inline, missing no_panic, no extern "C", returning types that are not repr(C), mismatches with the musl API, etc.). The list is in this gist. Fixing all of those takes time, so the errors are silenced by default, except when building the libm-analyze crate. At some point we should turn these into hard errors. EDIT: I started doing that here, but I've left hard API-breaking changes to subsequent PRs.
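Purely as an illustration of the kind of check such a macro can emit (this is not the PR's actual implementation), a per-API compile-time signature assertion could look like:

```rust
// Hypothetical generated checks: compilation fails if an API's signature drifts
// from what the test framework expects.
const _: fn(f64) -> f64 = libm::sqrt;
const _: fn(f32) -> f32 = libm::sqrtf;
```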