🐛 Bug
Computing the Matthews correlation coefficient has become extremely slow since torchmetrics == 0.10.0.
To Reproduce
During my testing I am seeing a massive slowdown when computing the Matthews correlation coefficient, especially on the GPU (I use pytorch-lightning to build, train and test a deep-learning model and evaluate it with this metric).
I have put together a code sample (next section); the timings below show the result across different versions of torchmetrics.
ON CPU:
torchmetrics 0.7.3: 0.93961 s
torchmetrics 0.8.0: 0.93549 s
torchmetrics 0.8.2: 0.94494 s
torchmetrics 0.9.0: 0.92856 s
torchmetrics 0.9.2: 0.93682 s
torchmetrics 0.10.0: 1.10903 s
ON GPU:
torchmetrics 0.7.3: 0.11444 s
torchmetrics 0.8.0: 0.11682 s
torchmetrics 0.8.2: 0.11425 s
torchmetrics 0.9.0: 0.11433 s
torchmetrics 0.9.2: 0.11410 s
torchmetrics 0.10.0: 359.30208 s
So testing over thousands of batches now takes almost a week for me to complete 🤣 Please take a look at this soon.
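In the meantime, I am working around this on my end by pinning back to a pre-0.10 release (e.g. pip install torchmetrics==0.9.2), which still shows the fast timings listed above.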
Code sample
from tqdm.auto import tqdm
import time

import torch
from torchmetrics import MatthewsCorrCoef

torch.manual_seed(1)

b, h, w = 10, 1080, 1920
device = "cpu"  # set to "cuda" for the GPU timings

def generate(b, h, w):
    # random "probabilities" and binary ground-truth labels of shape (b, h, w)
    prob = torch.rand(b, h, w).to(device)
    truth = torch.randint(0, 2, (b, h, w)).to(device)
    return prob, truth

batches = []
for _ in range(10):
    batches.append(generate(b, h, w))

mcc = MatthewsCorrCoef(num_classes=2).to(device)

t1 = time.time()
for detections, targets in tqdm(batches):
    mcc.update(detections, targets)
print(f"{time.time() - t1:.5f}")
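For completeness, here is a minimal GPU timing sketch (assuming a CUDA device is available). It adds torch.cuda.synchronize() around the timed region so the asynchronous CUDA kernels launched by update() are fully included in the measurement; the regression is large enough to show up either way.

# Minimal GPU timing sketch (assumes a CUDA device is available).
# torch.cuda.synchronize() waits for all queued kernels to finish before
# the clock is read, so the measurement is not skewed by asynchronous
# kernel launches.
import time
import torch
from torchmetrics import MatthewsCorrCoef

torch.manual_seed(1)
device = "cuda"
mcc = MatthewsCorrCoef(num_classes=2).to(device)

prob = torch.rand(10, 1080, 1920, device=device)
truth = torch.randint(0, 2, (10, 1080, 1920), device=device)

torch.cuda.synchronize()
t1 = time.time()
mcc.update(prob, truth)
torch.cuda.synchronize()
print(f"{time.time() - t1:.5f} s for a single update()")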
Expected behavior
This naive benchmark should take roughly ~0.1 s on GPU and ~0.9 s on CPU, as it did in every version before 0.10.0.
Environment
GPU: NVIDIA RTX 3090
TorchMetrics version (pip): 0.10.0
Python & PyTorch Version (e.g., 1.0): python=3.9, pytorch=1.12.1
Any other relevant information such as OS (e.g., Linux): ubuntu18.04, (same behavior with 20.04)