Logging accuracy with batch accumulation #5805
-
I wanted to ask how PyTorch Lightning handles accuracy (and maybe even loss) logging when we train with something like `accumulate_grad_batches > 1`. My training step looks like this:

```python
def training_step(self, batch, batch_idx):
    x, y = batch
    y_hat = self(x)
    loss = F.cross_entropy(y_hat, y, weight=self.weight)
    result = pl.TrainResult(loss)
    result.log("train_loss", loss, prog_bar=True)
    result.log("train_accuracy", self.accuracy(y_hat.argmax(dim=-1), y), prog_bar=True)
    return result
```

where `self.accuracy` is a metric defined in `__init__`. Is the logged accuracy aggregated correctly across the accumulated batches? If this is not currently the case, I'm happy to do a PR if someone can show me where to look in the source code to make such a change. Thanks in advance.
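For context, a minimal sketch of the module the snippet above assumes. The class name, backbone, and constructor arguments are illustrative; only the `self.weight` and `self.accuracy` attributes are implied by the code:

```python
import pytorch_lightning as pl
import torch
from torch import nn


class LitClassifier(pl.LightningModule):
    def __init__(self, num_classes: int, class_weights: torch.Tensor):
        super().__init__()
        self.model = nn.Linear(32, num_classes)        # stand-in backbone
        # used as the `weight` argument of F.cross_entropy in training_step
        self.register_buffer("weight", class_weights)
        # class-based metric; `torchmetrics.Accuracy` in newer versions
        self.accuracy = pl.metrics.Accuracy()

    def forward(self, x):
        return self.model(x)
```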
-
Hi @sachinruk
-
Looking at the progress bar, it seems like `loss` and `train_loss` as seen above are two (slightly) different numbers. And yes, the loss seems to be working as expected; it's mainly the metrics I'm worried about.
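As an aside, that slight difference would be expected if the progress-bar `loss` is a smoothed running mean while `train_loss` is the raw per-step value, which I believe is how Lightning displays it. A toy illustration of that effect (the window size and values here are made up, not Lightning internals):

```python
from collections import deque

# Toy illustration: a sliding-window running mean (like a smoothed
# progress-bar value) drifts slightly from the raw per-step value.
window = deque(maxlen=20)
step_losses = [1.00, 0.90, 0.95, 0.70, 0.80]

for step, raw in enumerate(step_losses):
    window.append(raw)
    smoothed = sum(window) / len(window)
    print(f"step {step}: raw={raw:.2f} smoothed={smoothed:.3f}")
```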
-
@sachinruk Class-based metrics have been revamped! Please check out the documentation for the new interface. While the metrics package does not directly integrate with the `accumulate_grad_batches` argument (yet), you should be able to do something like this now:

```python
def training_step(self, batch, batch_idx):
    x, y = batch
    y_hat = self(x)
    # accumulate the metric state on every micro-batch
    self.accuracy.update(y_hat.argmax(dim=-1), y)
    # at the end of each accumulation window, compute and log the accumulated value
    if (batch_idx + 1) % self.trainer.accumulate_grad_batches == 0:
        accumulated_val = self.accuracy.compute()
        self.log('acc_accumulate', accumulated_val)
    ...
```

Closing this for now.
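For completeness, a fuller sketch of that pattern under the same assumptions (the loss computation, the explicit `reset()` call, and returning the loss are additions here, not part of the snippet above):

```python
def training_step(self, batch, batch_idx):
    x, y = batch
    y_hat = self(x)
    loss = F.cross_entropy(y_hat, y, weight=self.weight)

    # Accumulate the metric's internal state on every micro-batch.
    self.accuracy.update(y_hat.argmax(dim=-1), y)

    # At the end of each accumulation window, compute over the whole window,
    # log the result, and clear the state for the next window.
    if (batch_idx + 1) % self.trainer.accumulate_grad_batches == 0:
        self.log("acc_accumulate", self.accuracy.compute(), prog_bar=True)
        self.accuracy.reset()

    self.log("train_loss", loss, prog_bar=True)
    return loss
```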