The code used to power the DeepRole paper.
AI for multi-agent games such as Go, Poker, and Dota has seen major breakthroughs in recent years. Yet none of these games addresses the real-life challenge of cooperating with unknown and uncertain teammates, a challenge that is a core mechanism of hidden role games. Here we develop DeepRole, a multi-agent reinforcement-learning agent that we test on The Resistance: Avalon, the most popular hidden role game. DeepRole combines counterfactual regret minimization (CFR) with deep value networks trained through self-play. The algorithm integrates deductive reasoning into vector-form CFR to reason over joint beliefs and deduce partially observable actions, and it augments the deep value networks with constraints that yield interpretable representations of win probabilities. These innovations allow DeepRole to scale to the full Avalon game. Empirical game-theoretic analysis shows that DeepRole outperforms other hand-crafted and learned agents in five-player Avalon. When DeepRole played with and against human players on the web in hybrid human-agent teams, it outperformed human players as both a cooperator and a competitor.
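The actual solver lives in the C++ code described below, but as a quick orientation to the CFR component, here is a minimal, self-contained Python sketch of regret matching, the update rule at the core of counterfactual regret minimization. This is an illustrative simplification only: it omits the joint-belief vectors, deductive reasoning, and value networks that DeepRole adds on top of CFR, and the function names are hypothetical.

```python
import numpy as np

def regret_matching(cumulative_regret):
    """Turn accumulated counterfactual regrets into a strategy.

    Actions with positive regret are played in proportion to that regret;
    if no action has positive regret, fall back to the uniform strategy.
    """
    positive = np.maximum(cumulative_regret, 0.0)
    total = positive.sum()
    if total > 0:
        return positive / total
    return np.full_like(cumulative_regret, 1.0 / len(cumulative_regret))

def cfr_update(cumulative_regret, action_values):
    """One regret update at a single decision point.

    `action_values` holds the counterfactual value of each action. The value
    of the current strategy is their strategy-weighted average, and each
    action's regret is how much better it did than that average. (Reach-
    probability weighting is folded into the values here for brevity.)
    """
    strategy = regret_matching(cumulative_regret)
    node_value = float(strategy @ action_values)
    cumulative_regret += action_values - node_value
    return strategy, node_value
```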
- `battlefield/`: Harness code used to play bots against each other.
  - `battlefield/battlefield`: Code to run games and do analysis.
    - `battlefield/battlefield/bots`: Scaffolding code for the bots.
- `deeprole/`: The DeepRole algorithm and code.
  - `deeprole/code`: The core C++ code.
    - `deeprole/code/nn_train`: Keras training code (see the sketch after this list).
- `data/`: Data related to and generated by DeepRole.
  - `data/figures/`: The figure-generating code.
  - `data/proavalon/`: The human vs. bot analysis code.
- `reference_cfr`: C++ implementation of a reference CFR bot with a custom abstraction.
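The Keras code under `deeprole/code/nn_train` trains the deep value networks mentioned above. As a rough, hypothetical illustration of what "constraints that yield interpretable representations of win probabilities" can look like in Keras, here is a minimal sketch in which the final activation keeps every output in [0, 1] so it can be read as a win probability. The layer sizes, names, and input encoding are assumptions for illustration and do not reflect the actual network in this repository.

```python
# Hypothetical sketch of a win-probability-constrained value network in Keras.
# Sizes and the input encoding are illustrative assumptions, not the
# architecture trained by deeprole/code/nn_train.
from tensorflow import keras
from tensorflow.keras import layers

NUM_BELIEF_DIMS = 60   # assumed size of the joint-belief input vector
HIDDEN_UNITS = 80      # assumed hidden width

belief_input = keras.Input(shape=(NUM_BELIEF_DIMS,), name="joint_belief")
h = layers.Dense(HIDDEN_UNITS, activation="relu")(belief_input)
h = layers.Dense(HIDDEN_UNITS, activation="relu")(h)

# A sigmoid keeps every predicted value in [0, 1], so each output can be
# read directly as a win probability rather than an unconstrained score.
win_prob = layers.Dense(NUM_BELIEF_DIMS, activation="sigmoid",
                        name="win_probabilities")(h)

model = keras.Model(belief_input, win_prob)
model.compile(optimizer="adam", loss="mse")
model.summary()
```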
All C++ code requires LLVM and C++17 support, but is otherwise batteries-included. Building should be as simple as running `make` from the root of the directory.