A curated, but incomplete, list of game AI resources on multi-agent learning.
If you want to contribute to this list, please feel free to send a pull request. You can also contact daochen.zha@rice.edu or khlai@rice.edu.
📢 News: Please check out our open-sourced Large Time Series Model (LTSM)!
📢 Have you heard of data-centric AI? Please check out our data-centric AI survey and awesome data-centric AI resources!
Game AI focuses on predicting which actions should be taken based on the current state of the game. Most games incorporate some form of AI, usually as characters or players within the game. For popular games such as StarCraft and Dota 2, developers have spent years designing and refining the AI to enhance the player experience.
Numerous studies and achievements have advanced game AI in single-agent environments, where there is a single player in the game. For instance, deep Q-learning has been successfully applied to Atari games. Other examples include Super Mario, Minecraft, and Flappy Bird.
Multi-agent environments are more challenging, since each player has to reason about the other players' moves. Modern reinforcement learning techniques have boosted multi-agent game AI. In 2015, AlphaGo beat a professional human Go player for the first time on a full-sized 19×19 board. In 2017, AlphaZero taught itself from scratch and learned to master the games of chess, shogi, and Go. In more recent years, researchers have turned their efforts to poker and card games, with systems such as Libratus, DeepStack, and DouZero achieving expert-level performance in Texas Hold'em and the Chinese card game Dou Dizhu. Researchers continue to make progress, achieving human-level AI on Dota 2 and StarCraft II with deep reinforcement learning.
Perfect information means that each player has access to the same information about the game, as in Go, Chess, and Gomoku. Imperfect information refers to situations where players cannot observe the full state of the game; for example, in card games, a player cannot observe the other players' hands. Imperfect-information games are usually considered more challenging because of the larger space of possibilities.
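To make the distinction concrete, below is a minimal Python sketch (the deck, state layout, and function names are purely illustrative and not taken from any toolkit listed in this repository): a perfect-information view exposes the full game state to every player, while an imperfect-information view hides the opponent's private card.

```python
import random

# Illustrative toy example only: a two-player game with a Leduc-style
# six-card deck, where each player holds one private card.
DECK = ["J", "J", "Q", "Q", "K", "K"]

def deal():
    deck = DECK[:]
    random.shuffle(deck)
    # The full (hidden) game state: both private cards and one public card.
    return {"hands": [deck[0], deck[1]], "public_card": deck[2]}

def perfect_info_view(state, player):
    # Perfect information (e.g., Go, Chess, Gomoku): every player
    # observes the complete state of the game.
    return state

def imperfect_info_view(state, player):
    # Imperfect information (e.g., poker): a player observes only their
    # own hand and the public card; the opponent's hand stays hidden.
    return {"my_hand": state["hands"][player], "public_card": state["public_card"]}

if __name__ == "__main__":
    state = deal()
    print("Full state:       ", state)
    print("Player 0 observes:", imperfect_info_view(state, player=0))
    print("Player 1 observes:", imperfect_info_view(state, player=1))
```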
This repository gathers awesome resources for game AI on multi-agent learning for both perfect- and imperfect-information games, including, but not limited to, open-source projects, review papers, research papers, conferences, and competitions. The resources are categorized by game, and the papers are sorted by year.
- Open-Source Projects
- Review and General Papers
- Research Papers
- Conferences and Workshops
- Competitions
- Related Lists
- RLCard: A Toolkit for Reinforcement Learning in Card Games [paper] [code].
- OpenSpiel: A Framework for Reinforcement Learning in Games [paper] [code].
- Unity ML-Agents Toolkit [paper] [code].
- Alpha Zero General [code].
- DeepStack-Leduc [paper] [code].
- DeepHoldem [code].
- OpenAI Gym No Limit Texas Hold 'em Environment for Reinforcement Learning [code].
- PyPokerEngine [code].
- Deep mind pokerbot for PokerStars and PartyPoker [code].
- PerfectDou: Dominating DouDizhu with Perfect Information Distillation [code].
- DouZero: Mastering DouDizhu with Self-Play Deep Reinforcement Learning [code].
- Doudizhu AI using reinforcement learning [code].
- Dou Di Zhu with Combinational Q-Learning [paper] [code].
- DouDiZhu [code].
- Dou Dizhu AI design and implementation (斗地主AI设计与实现) [code].
- StarCraft II Learning Environment [paper] [code].
- Gym StarCraft [code].
- StarCraft II Reinforcement Learning Examples [code].
- A Guide to DeepMind's StarCraft AI Environment [code].
- A reimplementation of AlphaStar based on DI-engine with trained models [code].
- CCZero (Chinese Chess Zero, 中国象棋Zero) [code].
- Deep Reinforcement Learning from Self-Play in Imperfect-Information Games, arXiv 2016 [paper].
- Multi-agent Reinforcement Learning: An Overview, 2010 [paper].
- An overview of cooperative and competitive multiagent learning, LAMAS 2005 [paper].
- Multi-agent reinforcement learning: a critical survey, 2003 [paper].
Betting games are among the most popular forms of poker. The list includes Goofspiel, Kuhn Poker, Leduc Poker, and Texas Hold'em.
- Neural Replicator Dynamics, arXiv 2019 [paper].
- Computing Approximate Equilibria in Sequential Adversarial Games by Exploitability Descent, IJCAI 2019 [paper].
- Solving Imperfect-Information Games via Discounted Regret Minimization, AAAI 2019 [paper].
- Deep Counterfactual Regret Minimization, ICML 2019 [paper].
- Actor-Critic Policy Optimization in Partially Observable Multiagent Environments, NeurIPS 2018 [paper].
- Safe and Nested Subgame Solving for Imperfect-Information Games, NeurIPS 2017 [paper].
- DeepStack: Expert-Level Artificial Intelligence in Heads-Up No-Limit Poker, Science 2017 [paper].
- A Unified Game-Theoretic Approach to Multiagent Reinforcement Learning, NeurIPS 2017 [paper].
- Poker-CNN: A pattern learning strategy for making draws and bets in poker games using convolutional networks [paper].
- Deep Reinforcement Learning from Self-Play in Imperfect-Information Games, arXiv 2016 [paper].
- Fictitious Self-Play in Extensive-Form Games, ICML 2015 [paper].
- Solving Heads-up Limit Texas Hold’em, IJCAI 2015 [paper].
- Regret Minimization in Games with Incomplete Information, NeurIPS 2007 [paper].
- PerfectDou: Dominating DouDizhu with Perfect Information Distillation, NeurIPS 2022 [paper] [code].
- DouZero: Mastering DouDizhu with Self-Play Deep Reinforcement Learning, ICML 2021 [paper] [code].
- DeltaDou: Expert-level Doudizhu AI through Self-play, IJCAI 2019 [paper].
- Combinational Q-Learning for Dou Di Zhu, arXiv 2019 [paper] [code].
- Determinization and information set Monte Carlo Tree Search for the card game Dou Di Zhu, CIG 2011 [paper].
- Variational oracle guiding for reinforcement learning, ICLR 2022 [paper].
- Suphx: Mastering Mahjong with Deep Reinforcement Learning, arXiv 2020 [paper].
- Method for Constructing Artificial Intelligence Player with Abstraction to Markov Decision Processes in Multiplayer Game of Mahjong, arXiv 2019 [paper].
- Building a Computer Mahjong Player Based on Monte Carlo Simulation and Opponent Models, IEEE CIG 2017 [paper].
- Boosting a Bridge Artificial Intelligence, ICTAI 2017 [paper].
- Mastering the game of Go without human knowledge, Nature 2017 [paper].
- Mastering the game of Go with deep neural networks and tree search, Nature 2016 [paper].
- Temporal-difference search in computer Go, Machine Learning, 2012 [paper].
- Monte-Carlo tree search and rapid action value estimation in computer Go, Artificial Intelligence, 2011 [paper].
- Computing “elo ratings” of move patterns in the game of go, ICGA Journal, 2007 [paper].
- Grandmaster level in StarCraft II using multi-agent reinforcement learning, Nature 2019 [paper].
- On Reinforcement Learning for Full-length Game of StarCraft, AAAI 2019 [paper].
- Stabilising experience replay for deep multi-agent reinforcement learning, ICML 2017 [paper].
- Cooperative reinforcement learning for multiple units combat in StarCraft, SSCI 2017 [paper].
- Learning macromanagement in StarCraft from replays using deep learning, CIG 2017 [paper].
- Applying reinforcement learning to small scale combat in the real-time strategy game StarCraft: Broodwar, CIG 2012 [paper].