MindSpore Reinforcement 0.6.0 Release Notes
Major Features and Improvements
- [BETA] Support GAIL (Generative Adversarial Imitation Learning, Jonathan Ho et al., 2016) Algorithm. The algorithm is tuned on the HalfCheetah environment and supports CPU, GPU and Ascend backends.
- [BETA] Support C51 (Marc G. Bellemare et al., 2017) Algorithm. The algorithm is tuned on the CartPole environment and supports CPU, GPU and Ascend backends (a framework-agnostic sketch of the C51 distributional update follows this list).
- [BETA] Support CQL (Conservative Q-Learning, Aviral Kumar et al., 2020) Algorithm. The algorithm is tuned on the Hopper environment and supports CPU, GPU and Ascend backends.
- [BETA] Support AWAC (Accelerating Online Reinforcement Learning with Offline Datasets, Ashvin Nair et al., 2020) Algorithm. The algorithm is tuned on the Ant environment and supports CPU, GPU and Ascend backends.
- [BETA] Support Dreamer (Danijar Hafner et al., 2020) Algorithm. The algorithm is tuned on the Walker-walk environment and supports the GPU backend.
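The NumPy sketch below illustrates the categorical Bellman projection that C51 is built on. It is a framework-agnostic illustration only, not the MindSpore Reinforcement implementation; the function name, default support range and hyper-parameters are illustrative assumptions.

```python
import numpy as np

def categorical_projection(next_probs, rewards, dones, gamma=0.99,
                           v_min=-10.0, v_max=10.0, num_atoms=51):
    """Project the Bellman-updated return distribution back onto the fixed
    atom support, as in C51 (Bellemare et al., 2017). Illustrative sketch only.

    next_probs: (batch, num_atoms) target distribution probabilities
    rewards:    (batch,) immediate rewards
    dones:      (batch,) 1.0 if the episode terminated, else 0.0
    """
    batch = rewards.shape[0]
    delta_z = (v_max - v_min) / (num_atoms - 1)
    support = np.linspace(v_min, v_max, num_atoms)   # fixed atoms z_i

    # Bellman update of every atom, clipped to the support range.
    tz = rewards[:, None] + gamma * (1.0 - dones[:, None]) * support[None, :]
    tz = np.clip(tz, v_min, v_max)

    # Fractional position of each updated atom on the original support.
    b = (tz - v_min) / delta_z
    lower = np.floor(b).astype(np.int64)
    upper = np.ceil(b).astype(np.int64)

    # Distribute each atom's probability mass to its two neighbouring atoms.
    projected = np.zeros((batch, num_atoms))
    for i in range(batch):
        for j in range(num_atoms):
            l, u = lower[i, j], upper[i, j]
            if l == u:                        # landed exactly on an atom
                projected[i, l] += next_probs[i, j]
            else:
                projected[i, l] += next_probs[i, j] * (u - b[i, j])
                projected[i, u] += next_probs[i, j] * (b[i, j] - l)
    return projected
```

Splitting each atom's mass between its two nearest neighbours keeps the projected distribution on the fixed support, which is the step that distinguishes C51's distributional target from a standard DQN target update.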
Contributors
Thanks goes to these wonderful people:
Prof. Peter, Huanzhou Zhu, Bo Zhao, Gang Chen, Weifeng Chen, Liang Shi, Yijie Chen.