Hi @jogepari,
Happy to report that this has already been fixed in PR #1493. Here is a snippet running your code on the master branch:
As you can see, memory is now constant beyond a certain point.
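(The snippet in the original comment was attached as an image. As a hedged reconstruction, a memory check of this kind might look like the following; the input sizes and threshold grid are assumptions, not the values from the thread.)

```python
import torch
from memory_profiler import memory_usage
from torchmetrics.functional.classification import binary_precision_recall_curve

# Hypothetical inputs; the sizes used in the original notebook are unknown.
preds = torch.rand(1_000_000)
target = torch.randint(0, 2, (1_000_000,))

def run():
    binary_precision_recall_curve(preds, target, thresholds=torch.linspace(0, 1, 41))

peak = max(memory_usage(run))  # samples process memory (MiB) while run() executes
print(f"peak memory: {peak:.1f} MiB")
```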
To get the changes now, please install from master: pip install https://github.com/Lightning-AI/metrics/archive/master.zip, or wait for the next release.
@jogepari I can see it's because the PR is listed in the changed section (https://github.com/Lightning-AI/torchmetrics/blob/master/CHANGELOG.md#changed) and not in the fixed section. Only PRs linked in the fixed section are included in bugfix releases.
We can argue whether this is a bugfix, but the author of the PR saw it more as an improvement/change, because the algorithm was actually working; it was just consuming a lot of memory.
Also, in your screenshot, the calculation time with step=0.025 already exceeds the time without thresholds; is this normal behaviour?
The consequence of lowering the memory requirements is that we need to use an alternative algorithm, one that is slower when a lot of thresholds are used. So yes, I would say the results are expected.
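(To illustrate the trade-off, here is a minimal sketch of the idea only, not the actual torchmetrics implementation: evaluating the curve at T fixed thresholds can be done with a loop that keeps just one boolean mask alive at a time, so peak memory stays around one N-element buffer instead of an N x T intermediate, at the cost of T passes over the data.)

```python
import torch

def pr_curve_binned(preds, target, thresholds):
    # One pass per threshold: slow when T is large, but only a single
    # N-element mask is alive at any time (rather than an N x T broadcast).
    precisions, recalls = [], []
    for t in thresholds:
        pred_pos = preds >= t
        tp = (pred_pos & (target == 1)).sum()
        fp = (pred_pos & (target == 0)).sum()
        fn = (~pred_pos & (target == 1)).sum()
        # clamp avoids division by zero; precision/recall default to 0 then
        precisions.append(tp / (tp + fp).clamp(min=1))
        recalls.append(tp / (tp + fn).clamp(min=1))
    return torch.stack(precisions), torch.stack(recalls)
```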
🐛 Bug
Using binary_precision_recall_curve with thresholds set results in huge memory consumption (and rapidly diminishing time savings).
To Reproduce
Notebook demo using memory_profiler
[Plots from the notebook preview: memory/time for sklearn (for comparison), no thresholds, and thresholds with steps 0.1, 0.05, 0.025.]
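(The notebook itself is not reproduced here. A hedged sketch of the kind of timing comparison it ran might look like the following; the input size n is an assumption, not the value from the original demo.)

```python
import time
import torch
from sklearn.metrics import precision_recall_curve as sk_pr_curve
from torchmetrics.functional.classification import binary_precision_recall_curve

n = 1_000_000  # assumed size; the original notebook's inputs are unknown
preds = torch.rand(n)
target = torch.randint(0, 2, (n,))

for step in (None, 0.1, 0.05, 0.025):
    thresholds = None if step is None else torch.linspace(0, 1, int(round(1 / step)) + 1)
    start = time.perf_counter()
    binary_precision_recall_curve(preds, target, thresholds=thresholds)
    print(f"thresholds step={step}: {time.perf_counter() - start:.2f}s")

start = time.perf_counter()
sk_pr_curve(target.numpy(), preds.numpy())  # sklearn for comparison
print(f"sklearn: {time.perf_counter() - start:.2f}s")
```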
Expected behavior
Setting thresholds should be both faster and more memory efficient than leaving thresholds unset.
Environment
TorchMetrics version (and how you installed TM, e.g. conda, pip, build from source): 0.11.4, conda-forge