
MetricCollection just recomputes the first metric after first .compute call if compute_groups is enabled #2206

Closed
RomanN27 opened this issue Nov 7, 2023 · 2 comments · Fixed by #2211
Labels: bug / fix, help wanted, v1.2.x


RomanN27 commented Nov 7, 2023

🐛 Bug

I think MetricCollection was designed to be used as follows:

metrics = MetricCollection([...])
for _ in range(...):
    metrics.update(...)
metrics.compute()
metrics.reset()

But I used it in the following way:

metrics = MetricCollection([...])
for _ in range(...):
    metrics.update(...)
    metrics.compute()
metrics.reset()

This does not work: after the first compute call, only the first metric in the collection returns a new value. The reason is that calling .compute on the collection sets the ._computed cache on every metric. If the metrics belong to the same compute group, the next .update call on the collection invokes .update only on the first metric of the group, and it is exactly this method that resets ._computed to None. For the other metrics ._computed stays set, so they keep returning the cached value from the first computation forever.
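
To illustrate the caching mechanism on a single metric, here is a minimal sketch. It relies on the internal ._computed attribute of Metric, which is an implementation detail of torchmetrics 1.2.0:

import torch
from torchmetrics.classification import BinaryAccuracy

m = BinaryAccuracy()
preds, target = torch.tensor([1, 0, 1]), torch.tensor([1, 1, 1])

m.update(preds, target)
m.compute()
print(m._computed)  # the result is now cached

m.update(preds, target)
print(m._computed)  # None again: .update clears the cache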

To Reproduce

from torch import Tensor
import numpy as np
from torchmetrics import MetricCollection
from torchmetrics.classification import BinaryPrecision, BinaryAccuracy, BinaryRecall, BinarySpecificity
# Generate random predictions and labels for a binary classification problem
np.random.seed(42)  # For reproducibility
n_samples = 100  # Number of samples


# Build the metric collection; compute_groups is enabled by default
metrics = MetricCollection([
    BinaryRecall(),
    BinaryAccuracy(),
    BinaryPrecision(),
    BinarySpecificity()
])

n_epochs = 10

for _ in range(n_epochs):
    # Generate random predictions (0 or 1)
    predictions = Tensor(np.random.randint(2, size=n_samples))

    # Generate random true labels (0 or 1)
    true_labels = Tensor(np.random.randint(2, size=n_samples))

    metrics.update(predictions, true_labels)
    print(metrics.compute())
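
As a side note, the grouping itself can be inspected after the first update. A small sketch, assuming the compute_groups property of MetricCollection (which, as far as I can tell, returns the current groups in 1.2.0); all four metrics share the same underlying state, so they should end up in a single group:

print(metrics.compute_groups)  # expected: a single group containing all four metrics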


Expected behavior

Each compute call should return freshly computed values for all metrics in the collection, not just for the first one.

Environment

  • TorchMetrics 1.2.0, installed with pip
  • Python 3.11
  • PyTorch 2.1.0
  • macOS

Additional context
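
As a workaround until this is fixed, the state-sharing optimization can be turned off when constructing the collection. A minimal sketch, using the compute_groups constructor argument of MetricCollection (available in 1.2.0, as far as I can tell):

metrics = MetricCollection(
    [BinaryRecall(), BinaryAccuracy(), BinaryPrecision(), BinarySpecificity()],
    compute_groups=False,  # every metric keeps its own state, so .update resets every ._computed cache
)

With compute groups disabled, .update runs on every metric, the ._computed caches are cleared on each call, and compute() returns fresh values in every iteration.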

RomanN27 added the bug / fix and help wanted labels on Nov 7, 2023

github-actions bot commented Nov 7, 2023

Hi! Thanks for your contribution! Great first issue!
