A collection of papers on opponent-aware Multi-Agent Reinforcement Learning (MARL) algorithms
Note: [Currently being updated]
Multi-Agent Reinforcement Learning (MARL) is an exciting and growing field, deeply connected to game theory, single-agent learning, psychology, optimization theory, and multi-agent systems. A particularly interesting and recently explored class of problems is cooperative and competitive multi-agent learning, where agents must not only learn the properties of the environment, but also account for the actions of other agents. Opponent-aware MARL algorithms study how an agent can take the other agents in the environment into account and use that knowledge to converge to an optimal strategy, as sketched below.
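To give a flavour of what opponent awareness means in practice, here is a minimal, hypothetical sketch of an opponent-shaping update in the spirit of Learning with Opponent-Learning Awareness (LOLA, listed below): each player differentiates its value through one anticipated naive gradient step of its opponent in a differentiable matrix game. The game, step sizes, and all function names are illustrative assumptions, not code from any of the listed papers.

```python
import jax
import jax.numpy as jnp

# Illustrative 2x2 matching-pennies payoffs (assumed for this sketch).
A = jnp.array([[1., -1.], [-1., 1.]])  # payoffs to player 1
B = -A                                 # zero-sum: payoffs to player 2

def policy(theta):
    """Map a single logit to a 2-action probability vector."""
    p = jax.nn.sigmoid(theta)
    return jnp.array([p, 1.0 - p])

def V1(t1, t2):
    return policy(t1) @ A @ policy(t2)  # expected payoff to player 1

def V2(t1, t2):
    return policy(t1) @ B @ policy(t2)  # expected payoff to player 2

def opponent_aware_grads(t1, t2, eta=0.3):
    """Each player differentiates its value through one anticipated
    naive gradient step of the other player (LOLA-style shaping)."""
    def shaped_v1(t):
        opp = t2 + eta * jax.grad(V2, argnums=1)(t, t2)  # opponent's step
        return V1(t, opp)
    def shaped_v2(t):
        opp = t1 + eta * jax.grad(V1, argnums=0)(t1, t)
        return V2(opp, t)
    return jax.grad(shaped_v1)(t1), jax.grad(shaped_v2)(t2)

t1, t2, lr = 0.5, -0.5, 0.1
for _ in range(200):
    g1, g2 = opponent_aware_grads(t1, t2)
    t1, t2 = t1 + lr * g1, t2 + lr * g2  # simultaneous gradient ascent
```

A naive learner would simply ascend `jax.grad(V1, argnums=0)` and typically cycles around the mixed equilibrium of this game; differentiating through the opponent's anticipated update is the kind of shaping term the papers below analyse and extend.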
This short collection of research publications and useful resources offers an introduction to the area. The resources are grouped by topic and ordered by publication date. The list is updated regularly, and suggestions and pull requests are very welcome.
The collection has been created for research purposes. If any authors would prefer that their publications not be listed here, or have any remarks, please reach out to macwiatrak [at] gmail.com.
- Stable Opponent Shaping in Differentiable Games by Letcher A., Foerster J., Balduzzi D., Rocktäschel T., Whiteson S. arXiv, 2018.
- QMIX: Monotonic Value Function Factorisation for Deep Multi-Agent Reinforcement Learning by Rashid T., Samvelyan M., et al. arXiv, 2018.
- DiCE: The Infinitely Differentiable Monte Carlo Estimator by Foerster J., Farquhar G., Al-Shedivat M., et al. arXiv, 2018.
- Learning with Opponent-Learning Awareness by Foerster J., Chen R., et al. arXiv, 2017.
- Multi-Agent Actor-Critic for Mixed Cooperative-Competitive Environments by Lowe R., Wu Y., et al. arXiv, 2017.
- Counterfactual Multi-Agent Policy Gradients by Foerster J., Farquhar G., et al. arXiv, 2017.
- Multi-Agent Generalized Recursive Reasoning by Wen Y., Yang Y., et al. arXiv, 2019.
- Probabilistic Recursive Reasoning for Multi-Agent Reinforcement Learning by Wen Y., Yang Y., et al. arXiv, 2019.