## [1.3.0] - 2024-09-11

### Added
- Distributed multi-GPU and multi-node learning (JAX implementation)
- Utilities to start multiple processes from a single program invocation for distributed learning using JAX
- Model instantiators `return_source` parameter to get the source class definition used to instantiate the models
- `Runner` utility to run training/evaluation workflows in a few lines of code (see the sketch after this list)
- Wrapper for Isaac Lab multi-agent environments
- Wrapper for Google Brax environments
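
A minimal sketch of a `Runner`-driven workflow is shown below. The import path, configuration schema, and `run()` argument are assumptions based on the item above, not a verified API; consult the skrl documentation for the authoritative interface.

```python
# Hedged sketch: the import path, configuration keys and run() argument
# are assumptions; see the skrl documentation for the exact API.
import gymnasium as gym

from skrl.envs.wrappers.torch import wrap_env
from skrl.utils.runner.torch import Runner  # assumed module path

# create and wrap a single-agent environment
env = wrap_env(gym.make("Pendulum-v1"))

# configuration selecting the agent and trainer
# (assumed schema; the model definitions are omitted for brevity)
cfg = {
    "agent": {"class": "PPO"},
    "trainer": {"class": "SequentialTrainer", "timesteps": 10000},
}

runner = Runner(env, cfg)  # builds models, memory, agent and trainer
runner.run("train")        # or runner.run("eval") to evaluate a checkpoint
```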
### Changed
- Move the KL reduction in distributed runs from the PyTorch `KLAdaptiveLR` class to each agent that uses it (see the sketch after this list)
- Move the PyTorch distributed initialization from the agent base class to the ML framework configuration
- Upgrade model instantiator implementations to support CNN layers and complex network definitions, and implement them using dynamic execution of Python code
- Update the Isaac Lab environment loader's argument parser options to match the Isaac Lab version
- Allow storing tensors/arrays with their original dimensions in memory, and make this the default option
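
The first item above means each agent now averages its locally computed KL divergence across all processes itself before stepping the learning-rate scheduler. A hedged sketch of that reduction with `torch.distributed` (the helper name and call site are illustrative, not skrl's actual code):

```python
import torch
import torch.distributed as dist

def reduce_kl(kl: torch.Tensor) -> torch.Tensor:
    """Return the mean KL divergence across all distributed processes."""
    if dist.is_available() and dist.is_initialized() and dist.get_world_size() > 1:
        kl = kl.clone()
        dist.all_reduce(kl, op=dist.ReduceOp.SUM)  # sum across processes
        kl /= dist.get_world_size()                # then average
    return kl

# inside an agent's update step, with a KLAdaptiveLR-style scheduler:
#   scheduler.step(reduce_kl(kl_divergence))
```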
### Changed (breaking changes)
- Decouple the observation and state spaces in single- and multi-agent environment wrappers, and add a `state` method to get the state of the environment (see the sketch after this list)
- Simplify the multi-agent environment wrapper API by removing shared-space properties and methods
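
In practice, the decoupling means a wrapper no longer folds the environment state into the observations; the state is queried separately. A hedged usage sketch (the call pattern is inferred from the items above; environments without a global state may return `None`):

```python
import gymnasium as gym

from skrl.envs.wrappers.torch import wrap_env

env = wrap_env(gym.make("Pendulum-v1"))

observations, infos = env.reset()  # observations only (observation space)
state = env.state()                # environment state, now a separate query
```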
### Fixed
- Catch TensorBoard summary iterator exceptions in the `TensorboardFileIterator` postprocessing utility (see the sketch after this list)
- Fix the automatic wrapper detection issue (introduced in the previous version) for Isaac Gym (previews), DeepMind and vectorized Gymnasium environments
- Fix the return values of the vectorized/parallel environments' `reset` method when called more than once
- Fix the return values of the IPPO and MAPPO `act` methods when the JAX-NumPy backend is enabled
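
The TensorBoard fix follows a common defensive pattern: event files from a live or interrupted run can be truncated, so iteration is wrapped to stop gracefully instead of aborting postprocessing. A hedged sketch using TensorBoard's public `EventAccumulator` (illustrative, not skrl's actual implementation):

```python
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

def iter_scalars(path: str, tag: str):
    """Yield (step, value) pairs for a scalar tag, tolerating bad event files."""
    accumulator = EventAccumulator(path)
    try:
        accumulator.Reload()                  # parse the event file(s)
        for event in accumulator.Scalars(tag):
            yield event.step, event.value
    except Exception as exc:                  # e.g., a truncated event file
        print(f"[warning] stopped reading '{path}': {exc}")
```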