Releases: nnaisense/evotorch
0.5.1
0.5.0
New Features
- Allow the user to reach the search algorithm's internal optimizer by @engintoklu in #89
- Make EvoTorch future-proof by @engintoklu in #77
  - Ensure compatibility with PyTorch 2.0 and Brax 0.9
  - Migrate from the old Gym interface to Gymnasium
- Inform the user when `device` is not set correctly by @engintoklu in #90
Fixes
- Fix `get_minibatch()` of `SupervisedNE` by @engintoklu in #74
- Fix division-by-zero while initializing `CMAES` by @engintoklu in #86
- Fix wrong defaults of `SteadyStateGA` by @engintoklu in #87
0.4.1
Fixes
- Fix the interface of `make_I` (#62) (@engintoklu)
- Fix `generate_batch` returning `None` (#63) (@engintoklu)
- Fix C decomposition rate calculation on CUDA devices (#64) (@NaturalGradient)
Docs
- Add contribution guidelines (#70) (@engintoklu, @Higgcz)
- Add "how to cite" into the README (#69) (@engintoklu)
- Include CMA-ES into the README (#57) (@NaturalGradient)
0.4.0
New Features
- Implementation of `WandbLogger` (#35) (@galatolofederico)
- Simplify the usage of `NeptuneLogger` (#38) (@Higgcz)
- GPU-friendly + vectorized pareto ranking (#32) (@NaturalGradient)
- User interface improvements (#34) (@engintoklu)
- Add env. variable to control verbosity of the logger (#48) (@Higgcz)
- Add torch-based `CMAES` implementation (#41) (@NaturalGradient)
- Improve `GeneticAlgorithm` and add `MAPElites` (#44) (@engintoklu, @pliskowski)
- Add noxfile to run pytest across multiple Python versions (#40) (@Higgcz)
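For readers unfamiliar with pareto ranking (the feature from #32 above), the sketch below computes the first pareto front for a minimization problem. This is a hypothetical plain-Python illustration of the concept only; the library's actual implementation is vectorized and GPU-friendly, unlike this O(n²) loop.

```python
def pareto_front(points):
    """Return the indices of the non-dominated points (minimization).

    Plain-Python reference sketch, not the library's vectorized code.
    """
    front = []
    for i, p in enumerate(points):
        dominated = False
        for j, q in enumerate(points):
            # q dominates p if q is no worse in every objective
            # and strictly better in at least one.
            if j != i and all(a <= b for a, b in zip(q, p)) and any(
                a < b for a, b in zip(q, p)
            ):
                dominated = True
                break
        if not dominated:
            front.append(i)
    return front


# (2.0, 2.0) is dominated by (1.0, 1.0); the rest are mutually non-dominated.
print(pareto_front([(0.0, 3.0), (1.0, 1.0), (3.0, 0.0), (2.0, 2.0)]))  # → [0, 1, 2]
```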
Fixes
- Fix all the mkdocstrings warnings (#39) (@Higgcz)
- Fix infinite live reloading of the docs (#36) (@Higgcz)
0.3.0
New
Vectorized gym support: Added a new problem class, `evotorch.neuroevolution.VecGymNE`, to solve vectorized gym environments. This new problem class can work with Brax environments and can exploit GPU acceleration (#20).
`PicklingLogger`: Added a new logger, `evotorch.logging.PicklingLogger`, which periodically pickles and saves the current solution to disk (#20).
Python 3.7 support: The minimum Python requirement was lowered from 3.8 to 3.7. Therefore, EvoTorch can now be imported from within a Google Colab notebook (#16).
API Changes
`@pass_info` decorator: When working with `GymNE` (or with the newly introduced `VecGymNE`), if one uses a manual policy class and wishes to receive environment-related information via keyword arguments, that manual policy now needs to be decorated with `@pass_info`, as follows (#27):
```python
from torch import nn

from evotorch.decorators import pass_info


@pass_info
class CustomPolicy(nn.Module):
    def __init__(self, **kwargs):
        ...
```
Recurrent policies: When defining a manual recurrent policy (as a subclass of `torch.nn.Module`) for `GymNE` or for `VecGymNE`, the user is now required to define the forward method of the module according to the following signature:
```python
def forward(self, x: torch.Tensor, h: Any = None) -> Tuple[torch.Tensor, Any]:
    ...
```
Note: The optional argument `h` is the current state of the network, and the second value of the output tuple is the updated state of the network. A `reset()` method is not required anymore, and it will be ignored (#20).
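To illustrate this stateful-forward protocol, here is a hypothetical, torch-free sketch (plain Python lists stand in for tensors, and `RunningMeanPolicy` is an invented name): the state `h` is received as an argument and returned alongside the output, never stored on the module.

```python
from typing import List, Optional, Tuple

class RunningMeanPolicy:
    """Hypothetical policy following the forward(x, h) -> (y, h') protocol.

    Illustrative only; a real policy would subclass torch.nn.Module
    and operate on tensors.
    """

    def forward(
        self,
        x: List[float],
        h: Optional[Tuple[float, int]] = None,
    ) -> Tuple[List[float], Tuple[float, int]]:
        # h carries the recurrent state: (running total, element count).
        total, count = h if h is not None else (0.0, 0)
        total += sum(x)
        count += len(x)
        mean = total / count
        # Return the output and the *updated* state; no reset() is needed
        # because the caller owns the state.
        return [mean for _ in x], (total, count)
```

Because the caller threads `h` through successive calls, many rollouts can be batched without per-policy hidden state living inside the module.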
Fixes
Fixed a performance issue caused by the undesired cloning of the entire storages of tensor slices (#21).
Fixed the signature and the docstrings of the overridable method `_do_cross_over(...)` of the class `evotorch.operators.CrossOver` (#30).
Docs
Added more example scripts and updated the related README file (#19).
Updated the documentation related to GPU usage with ray (#28).
0.2.0
Fixes:
- Fix docstrings in `gaussian.py` (#11) (@engintoklu)
- Fix for `str_to_net(...)` (#12) (@engintoklu)
- Hard-code the `network_device` property to CPU for `GymNE` (#6) (@NaturalGradient)
Docs:
- Fix comment in the Gym experiments notebook (#5) (@engintoklu)
- Improve code formatting in docstrings (#3) (@flukeskywalker)
- Add documentation for the `NeptuneLogger` class (#15) (@NaturalGradient)
0.1.1
0.1.0
We are excited to release the first public version of EvoTorch - an evolutionary computation library created at NNAISENSE.
With EvoTorch, one can solve various optimization problems without having to worry about whether or not the problem at hand is differentiable. Among the problem types solvable with EvoTorch are:
- Black-box optimization problems (continuous or discrete)
- Reinforcement learning tasks
- Supervised learning tasks
- and more.
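To illustrate what "black-box" means here, the sketch below is a tiny (1+1) evolution strategy in plain Python. It is not EvoTorch's API (all names are invented for illustration); it only shows the general idea of derivative-free search: the objective is called as an opaque function, and no gradients are ever taken.

```python
import random

def one_plus_one_es(f, x0, sigma=0.3, iters=500, seed=0):
    """Minimal (1+1) evolution strategy: mutate the parent, keep the better
    point. Illustrative sketch only; EvoTorch provides far more capable,
    vectorized algorithms."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    for _ in range(iters):
        # Gaussian mutation with a fixed step size (no adaptation here).
        cand = [xi + rng.gauss(0.0, sigma) for xi in x]
        fc = f(cand)
        if fc <= fx:  # greedy replacement: only improvements survive
            x, fx = cand, fc
    return x, fx

def sphere(x):
    # A classic black-box benchmark; only function values are used.
    return sum(xi * xi for xi in x)

best_x, best_f = one_plus_one_es(sphere, [3.0, -2.0])
```

The same loop works for any objective that returns a score, differentiable or not, which is the core appeal of evolutionary search.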
Various evolutionary computation algorithms are available in EvoTorch:
- Distribution-based search algorithms:
- PGPE: Policy Gradients with Parameter-based Exploration.
- XNES: Exponential Natural Evolution Strategies.
- SNES: Separable Natural Evolution Strategies.
- CEM: Cross-Entropy Method.
- Population-based search algorithms:
- `SteadyStateGA`: A fully elitist genetic algorithm implementation. It also supports multiple objectives, in which case it behaves like NSGA-II.
- `CoSyNE`: Cooperative Synapse Neuroevolution.
All of the algorithms above are implemented in PyTorch and can therefore benefit from PyTorch's vectorization and GPU capabilities. In addition, with the help of the Ray library, EvoTorch can further scale up these algorithms by splitting the workload across:
- multiple CPUs
- multiple GPUs
- multiple computers over a Ray cluster