Improve floating point accuracy in percentile #4617
Conversation
The fix here is relatively straightforward and exactly mirrors the fix used in numpy. The associated unit test is a little more annoying. In numpy, Hypothesis was used to create a more general test of this problem, but since it looks like we aren't using Hypothesis in cupy, I used Hypothesis to find a "magic value" that would produce the error and then just hardcoded that value into the test that I've submitted here. Does that seem reasonable or do folks have some other idea on how this might be tested?
The fix here is relatively straightforward and exactly mirrors the fix used in numpy.
Any chance you could give a pointer to the Numpy counterpart for the record (either issue/PR or link to actual code is fine)?
The associated unit test is a little more annoying. In numpy, Hypothesis was used to create a more general test of this problem, but since it looks like we aren't using Hypothesis in cupy, I used Hypothesis to find a "magic value" that would produce the error and then just hardcoded that value into the test that I've submitted here. Does that seem reasonable or do folks have some other idea on how this might be tested?
Well, even Hypothesis itself is fairly new to NumPy. It landed in NumPy only about a year ago (numpy/numpy#15189) so don't feel annoyed that it's not adopted here 😄
Could you add a few more magic values here, and also test against NumPy's outputs (as done in all other tests throughout the codebase)? You could return the output in the test function, decorated by `@testing.numpy_cupy_allclose()`. This way we get stronger confidence in the results.
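A rough sketch of the suggested shape (the input values below are placeholders rather than the actual magic values, and the snippet is written to run with plain NumPy, so the decorator is shown only in a comment):

```python
import numpy


# In the CuPy test suite this would be a test method decorated with
# @testing.numpy_cupy_allclose(), which calls the body once with xp=numpy
# and once with xp=cupy and compares the returned arrays for closeness.
def percentile_case(xp):
    # Placeholder inputs -- the real test would use the magic values
    # found via Hypothesis.
    a = xp.asarray([0.0, 0.1, 0.25, 0.5, 1.0])
    q = xp.linspace(0.0, 100.0, 21)
    return xp.percentile(a, q)
```

Calling `percentile_case(numpy)` produces the reference output that the CuPy run would be checked against.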
Oh yes! Sorry about that; included it in the issue but forgot to repost it here. The numpy PR is here: numpy/numpy#16273, and it addressed numpy/numpy#14685.
Haha! No worries; I was more annoyed with myself that I hadn't found a more elegant test without Hypothesis...
Absolutely! Will update momentarily.
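For reference, the core of the upstream change is interpolating from whichever endpoint is closer. A minimal sketch of the idea (the function names are mine, and this is a simplification of what the NumPy PR does, not the actual CuPy kernel code):

```python
import numpy as np


def lerp_naive(a, b, t):
    # Textbook formula; for t near 1 the rounded result need not equal b
    # and can break monotonicity as t increases.
    return a + (b - a) * t


def lerp_stable(a, b, t):
    # Sketch of the upstream approach: for t >= 0.5, interpolate from the
    # b side instead, so that t == 1 returns exactly b and the result
    # stays monotonic in t.
    diff = b - a
    return np.where(t < 0.5, a + diff * t, b - diff * (1.0 - t))
```

With `lerp_stable`, `lerp_stable(a, b, 1.0)` is exactly `b`, which the naive form does not guarantee.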
All right, got the test updated and posted some benchmark data. Let me know if I can tweak anything further or if you'd like to see something more from that test.
Jenkins, test this please.
Jenkins CI test (for commit 10e03d7, target branch master) failed with status FAILURE.
@wphicks Could you check the test failure?
Ah, interesting. It looks like numpy is violating the percentile monotonicity test in CI. I couldn't reproduce this with numpy==1.20.0, but earlier versions (which don't include numpy/numpy#16273) unsurprisingly demonstrate this issue. I wasn't immediately able to determine what numpy version was being used in CI to confirm that this is definitely the problem, but it seems likely. Does someone with better familiarity with the CI system know of a good way to check that version? In terms of a fix, we can either add a requirement to setup.py or skip the test on NumPy versions that predate the upstream fix.
I see! Could you use the `testing.with_requires` decorator instead?
This reverts commit 55b37e9 in favor of using the testing.with_requires decorator
Skip test when numpy version is old enough to violate the percentile monotonicity constraint itself
Done! Thank you. I reverted the change to setup.py and added the decorator.
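For anyone curious, `testing.with_requires('numpy>=1.20')` essentially skips the test when the installed dependency is too old. A rough stand-alone illustration of the mechanism (my own minimal version, not CuPy's implementation; it assumes `numpy.lib.NumpyVersion` for version comparison):

```python
import unittest

import numpy
from numpy.lib import NumpyVersion


def requires_numpy(minimum):
    # Skip the decorated test unless the installed NumPy is at least
    # `minimum` -- roughly what testing.with_requires('numpy>=...') does.
    satisfied = NumpyVersion(numpy.__version__) >= NumpyVersion(minimum)

    def decorator(func):
        return unittest.skipUnless(
            satisfied, 'requires numpy>=%s' % minimum)(func)
    return decorator


@requires_numpy('1.20.0')
def test_percentile_monotonic():
    # numpy/numpy#16273 shipped in NumPy 1.20, so only check percentile
    # monotonicity on versions that include the fix.
    q = numpy.linspace(0.0, 100.0, 41)
    p = numpy.percentile(numpy.arange(10.0), q)
    assert numpy.all(numpy.diff(p) >= 0)
```

When the version requirement is not met, calling the decorated test raises `unittest.SkipTest`, which test runners report as a skip rather than a failure.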
Looks like GitHub Actions is having some issues right now, which seems to be responsible for the latest failure.
Probably from this ( https://www.githubstatus.com/incidents/5qdkkyg958vy ). Maybe try pushing an empty commit? They say it has since been resolved.
Power-cycle (close and reopen) should restart it. |
Jenkins, test this please |
Jenkins CI test (for commit 949a25c, target branch master) succeeded!
This shows up in the Windows CI log. Does cuTENSOR get installed on Windows typically? If so, is this just an issue of a download timing out or something?
https://ci.preferred.jp/cupy.experimental.win.cuda100/68442/
There are still tons of errors in the Windows CI, so hopefully we can address them all before v9 is out. For now it's expected to see failures on Windows. For this specific case, it's tracked in #4601. The issue is that cuTENSOR for Windows is packaged as

@jakirkham Perhaps it's easier if you could request the cuTENSOR team to upload a different, machine-readable format? 🙂 At least cuDNN does not have this issue.
(btw all tests in
Ok let's move the Windows discussion to that issue then 🙂 |
The CI failure is unrelated to this PR. LGTM! Thank you for the PR! |
Improve the floating point accuracy of the linear interpolation formula used in the `percentile_weightnening` kernel of `cupy.percentile`.

Resolve #4607