Releases: GRAAL-Research/poutyne

v1.17.3

03 Dec 23:22

Full Changelog: v1.17.2...v1.17.3

v1.17.2

08 Jul 19:12

Full Changelog: v1.17.1...v1.17.2

v1.17.1

09 Jul 16:15
  • Fix _XLA_AVAILABLE import with old versions of torchmetrics.
  • Fix WandB tests.

v1.17

21 May 13:13
  • FBeta uses the non-deterministic PyTorch function torch.bincount. You can now make it deterministic, either by passing the make_deterministic argument to the FBeta class or by using one of the PyTorch functions torch.set_deterministic_debug_mode or torch.use_deterministic_algorithms, as in the sketch below. Note that this might make your code slower.
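
A minimal sketch of both approaches, assuming FBeta is imported from the top level of poutyne with its default arguments otherwise; the commented Model line is an illustrative placeholder, not part of this release.

```python
import torch

from poutyne import FBeta

# Option 1: make only this metric deterministic via the new argument.
fbeta = FBeta(make_deterministic=True)

# Option 2: request deterministic algorithms globally in PyTorch;
# FBeta now honors this setting too. Either way, expect some slowdown.
torch.use_deterministic_algorithms(True)

# The metric is then used as usual, e.g. (placeholders):
# model = Model(network, 'sgd', 'cross_entropy', batch_metrics=[fbeta])
```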

v1.16

29 Apr 18:57
  • Add run_id and terminate_on_end arguments to MLFlowLogger (see the sketch after the breaking change below).

Breaking change:

  • In MLFlowLogger, except for experiment_name, all arguments must now be passed as keyword arguments. Passing experiment_name as a positional argument is also deprecated and will be removed in future versions.
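
A minimal usage sketch of the new calling convention; the experiment name and option values are placeholders, and the meanings given for run_id and terminate_on_end are inferred from their names.

```python
from poutyne import MLFlowLogger

# experiment_name may still be passed positionally (now deprecated);
# every other argument must be given as a keyword argument.
mlflow_logger = MLFlowLogger(
    experiment_name='my-experiment',
    run_id=None,            # reuse an existing MLflow run by passing its id
    terminate_on_end=True,  # end the MLflow run when training finishes
)
```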

v1.15

19 Mar 19:00
  • Remove support for Python 3.7.

v1.14

02 Dec 22:43
  • Update examples using classification metrics from torchmetrics to add the now required task argument (see the sketch after this list).
  • Fix a bug occurring when no LR scheduler is used with PyTorch 2.0.
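
For instance, a torchmetrics Accuracy now requires its task to be specified; the metric choice, task and num_classes below are illustrative, and the commented Model line is a placeholder.

```python
import torchmetrics

# The task argument is now mandatory on torchmetrics classification metrics.
accuracy = torchmetrics.Accuracy(task='multiclass', num_classes=10)

# The metric can then be passed to poutyne as before, e.g. (placeholder):
# model = Model(network, 'sgd', 'cross_entropy', batch_metrics=[accuracy])
```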

v1.13

05 Nov 15:36

Breaking changes:

  • The deprecated torch_metrics keyword argument has been removed. Users should use the batch_metrics or epoch_metrics keyword argument for torchmetrics' metrics.
  • The deprecated EpochMetric class has been removed. Users should implement the Metric class instead.
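
A migration sketch covering both changes: the mean-absolute-error metric is illustrative, and the interface shown (forward/update/compute/reset) is assumed from the Metric class's documentation.

```python
import torch

from poutyne import Metric

class MeanAbsoluteError(Metric):
    # Replaces a former EpochMetric implementation.
    def __init__(self):
        super().__init__()
        self.reset()

    def forward(self, y_pred, y_true):
        # Batch-level value; also accumulates the epoch-level state.
        self.update(y_pred, y_true)
        return torch.abs(y_pred - y_true).mean().item()

    def update(self, y_pred, y_true):
        self.absolute_error_sum += torch.abs(y_pred - y_true).sum().item()
        self.num_elements += y_true.numel()

    def compute(self):
        # Epoch-level value from the accumulated state.
        return self.absolute_error_sum / max(self.num_elements, 1)

    def reset(self):
        self.absolute_error_sum = 0.0
        self.num_elements = 0

# Before (removed): Model(network, optimizer, loss, torch_metrics=[...])
# After (placeholder arguments): use batch_metrics or epoch_metrics instead.
# model = Model(network, 'sgd', 'mse_loss', epoch_metrics=[MeanAbsoluteError()])
```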

v1.12.1

31 Aug 20:05
  • Fix a memory leak when using a recursive structure (of tuples or lists) as data in the Model.fit() or ModelBundle.train_data() methods, as in the sketch below.
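
A small sketch of the affected pattern, with an illustrative two-input network: the inputs are given as a tuple of tensors, which poutyne unpacks as arguments to the network's forward; deeper nestings of tuples and lists are the structures this fix targets.

```python
import torch

from poutyne import Model

class TwoInputNetwork(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(8, 1)

    def forward(self, x_main, x_aux):
        return self.linear(torch.cat([x_main, x_aux], dim=1))

network = TwoInputNetwork()
model = Model(network, 'sgd', torch.nn.MSELoss())

# Inputs passed as a tuple of tensors (a recursive structure).
x = (torch.randn(32, 4), torch.randn(32, 4))
y = torch.randn(32, 1)
model.fit(x, y, epochs=2, batch_size=8)
```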

v1.12

16 Jul 21:48
  • Fix a bug when transferring the optimizer to another device, caused by a new feature in PyTorch 1.12, namely the capturable parameter of Adam and AdamW.
  • Add utility functions for saving (save_random_states) and loading (load_random_states) Python's, NumPy's and PyTorch's (both CPU and GPU) random states. We also add the RandomStatesCheckpoint callback, which is now used in ModelBundle. See the usage sketch below.
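
A usage sketch, assuming these helpers and the callback are exported at the top level of poutyne and take a single filename; the file names and the {epoch} template (modeled on poutyne's other checkpoint callbacks) are placeholders.

```python
from poutyne import RandomStatesCheckpoint, load_random_states, save_random_states

# Save Python's, NumPy's and PyTorch's (CPU and GPU) random states...
save_random_states('random_states.pkl')

# ...and restore them later, e.g. when resuming an interrupted training.
load_random_states('random_states.pkl')

# Or save them automatically during training via the new callback,
# which ModelBundle now uses internally.
random_states_checkpoint = RandomStatesCheckpoint('random_states_epoch_{epoch}.pkl')
```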