Releases: GRAAL-Research/poutyne
v0.5.1
v0.5
- Added a new OptimizerPolicy class allowing phase-based learning rate policies. The following two learning rate policies are also provided:
  - "Super-Convergence: Very Fast Training of Neural Networks Using Large Learning Rates", Leslie N. Smith, Nicholay Topin, https://arxiv.org/abs/1708.07120
  - "SGDR: Stochastic Gradient Descent with Warm Restarts", Ilya Loshchilov, Frank Hutter, https://arxiv.org/abs/1608.03983
- Added a "bin_acc" metric for binary classification, in addition to the "accuracy" metric.
- Added "time" to the callbacks' logs.
- Various refactoring and small bug fixes.
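The SGDR policy referenced above can be illustrated with a small stand-alone sketch of cosine annealing with warm restarts. This is the schedule described in the paper, written in plain Python; it is not Poutyne's actual OptimizerPolicy API, and the parameter names are assumptions.

```python
import math

def sgdr_lr(step, eta_min=0.0, eta_max=0.1, t0=10, t_mult=2):
    """Cosine annealing with warm restarts (SGDR): the learning rate decays
    from eta_max to eta_min over a cycle, then restarts at eta_max; each
    cycle is t_mult times longer than the previous one."""
    t_i, t_cur = t0, step
    while t_cur >= t_i:  # locate the cycle containing `step`
        t_cur -= t_i
        t_i *= t_mult
    return eta_min + 0.5 * (eta_max - eta_min) * (1 + math.cos(math.pi * t_cur / t_i))
```

With the defaults above, the rate starts at 0.1, reaches roughly 0.05 halfway through the first 10-step cycle, and jumps back to 0.1 at step 10 when the first restart occurs.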
v0.4.1
Breaking changes:
- Update for PyTorch 0.4.1 (PyTorch 0.4 is no longer supported)
- Keyword arguments must now be passed with their keyword names in most PyToune functions.
Non-breaking changes:
- self.optimizer.zero_grad() is called instead of self.model.zero_grad().
- Support strings as input for all PyTorch loss functions, metrics and optimizers.
- Add support for generators that raise the StopIteration exception.
- Refactoring of the Model class (no API-breaking changes).
- Now using pylint as code style linter.
- Fix typos in documentation.
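The string support for losses, metrics and optimizers can be pictured as a simple name-to-callable registry. The sketch below is hypothetical (the registry and helper names are not Poutyne's); it only shows the general mechanism of accepting either a callable or a registered string.

```python
def mse(pred, true):
    """Mean squared error over two equal-length sequences."""
    return sum((p - t) ** 2 for p, t in zip(pred, true)) / len(true)

# Hypothetical registry mapping string shortcuts to loss callables.
LOSSES = {"mse": mse}

def resolve_loss(loss):
    """Accept either a callable or a registered string name, as the
    release note describes for losses, metrics and optimizers."""
    return loss if callable(loss) else LOSSES[loss]
```

With this pattern, `resolve_loss("mse")` and `resolve_loss(mse)` yield the same function, so user code can freely mix strings and callables.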
v0.4
- New usage example using MNIST
- New *_on_batch methods added to Model
- Numpy arrays are converted to tensors and vice versa everywhere it applies, i.e., methods return Numpy arrays and can take Numpy arrays as input.
- New convenient simple layers (Flatten, Identity and Lambda layers)
- New callbacks to save optimizers and LRSchedulers.
- New Tensorboard callback.
- Various bug fixes and improvements.
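The idea behind the convenience layers can be sketched without PyTorch. The classes below are illustrations only, not Poutyne's implementations (which would be nn.Module subclasses operating on tensors): here Flatten works on a batch given as a list of 2-D lists.

```python
class Identity:
    """Return the input unchanged."""
    def __call__(self, x):
        return x

class Lambda:
    """Wrap an arbitrary function as a callable layer."""
    def __init__(self, func):
        self.func = func
    def __call__(self, x):
        return self.func(x)

class Flatten:
    """Flatten each sample of a batch (list of 2-D lists) into a flat list."""
    def __call__(self, batch):
        return [[v for row in sample for v in row] for sample in batch]
```

Such layers are handy for building models with plain sequential containers, without writing a one-off module for every small transformation.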
v0.3
Breaking changes:
- Update to PyTorch 0.4.0
- When zero or one metric is used, evaluate and evaluate_generator no longer return Numpy arrays.
Other changes:
- Model now offers a to() method to send the PyTorch module and its input to a specified device (thanks to PyTorch 0.4.0).
- There is now an 'accuracy' metric that can be used as a string in the metrics list.
- Various bug fixes.
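What an 'accuracy' metric computes for multiclass outputs can be sketched in plain Python. The function below is an illustration of argmax accuracy in percent, not Poutyne's code; the inputs are assumed to be per-sample class scores and integer labels.

```python
def categorical_accuracy(pred_scores, true_labels):
    """Percentage of samples whose highest-scoring class matches the label."""
    correct = sum(
        max(range(len(scores)), key=scores.__getitem__) == label
        for scores, label in zip(pred_scores, true_labels)
    )
    return 100.0 * correct / len(true_labels)
```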
v0.2.2
v0.2.1
v0.2
- ModelCheckpoint now writes the checkpoint atomically.
- New initial_epoch parameter to Model.
- Losses and metrics are now averaged with batch sizes (len(y)) as weights, instead of a plain mean over batches.
- Update to the documentation.
- Model's predict and evaluate make more sense now and have generator versions.
- A few other bug fixes.
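Atomic checkpoint writing typically follows the write-to-temp-then-rename pattern: the file is fully written in the same directory, then swapped in with a single rename, so a crash never leaves a partial checkpoint. The stdlib sketch below shows the pattern; it is not ModelCheckpoint's actual code.

```python
import os
import tempfile

def atomic_save(data: bytes, path: str) -> None:
    """Write `data` to a temporary file in the target's directory, flush it
    to disk, then atomically replace `path` with it."""
    dirname = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=dirname)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())
        os.replace(tmp_path, path)  # atomic on both POSIX and Windows
    except BaseException:
        os.remove(tmp_path)
        raise
```

Creating the temporary file in the same directory matters: os.replace is only atomic within a single filesystem, and a rename across mount points would fail.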