iribarnesy/bot-among-us


Among Us intelligent agent - End of Study Project

An intelligent agent in a social environment: Among Us, a bluffing video game.
A study project in collaboration with CY-tech university.
Explore the Google Drive folder »
Read the French report »

Bot demonstration in video

Bot demonstration

Features

This repository contains an autonomous, human-like bot: it interacts with the game only through the screen, the mouse, and the keyboard.

  • The agent can navigate the environment of the Skeld map.
  • It can complete every task on the Skeld map.
  • It can detect other players on the screen.
  • It can kill crewmates.
  • It can report bodies.
  • It can memorize the events it has experienced (the moment, the room it was in, the players it saw, the tasks it completed).
  • It can summarize what happened during the past round.
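As an illustration, the kind of event record the bot memorizes could be sketched like this (the class and field names are hypothetical, not taken from the repository's actual code):

```python
# Hypothetical sketch of an event record the bot might memorize.
# Field names are illustrative, not the repository's real API.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Event:
    moment: float                                    # in-game timestamp (seconds)
    room: str                                        # room the agent was in
    players_seen: List[str] = field(default_factory=list)
    task_done: Optional[str] = None                  # task completed, if any

event = Event(moment=42.0, room="Electrical",
              players_seen=["Red"], task_done="Fix Wiring")
print(event.room)
```

A round summary can then be produced by iterating over the stored events in chronological order.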

You can read the French report for more details about the features and their technical implementation.

Requirements

  • Python3
  • Tesseract
  • OpenCV
  • Numpy
  • PyAutoGUI
  • Among Us (Steam)
  • FFMPEG
  • shapely (install with conda)
  • PyTorch
  • Transformers[sentencepiece]

Execute

To run the bot, use the command python launch_bot.py.

Google Cloud Vision

You must place the JSON key file in the environments/ folder. Then set the GOOGLE_APPLICATION_CREDENTIALS environment variable to the path of that key file. You can do this by running the first cell of the notebook.
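For example, from Python (the filename key.json is a placeholder; use the name of your actual key file):

```python
import os

# Point the Google client libraries at the service-account key.
# "key.json" is a placeholder name, not a file shipped with the project.
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = os.path.join("environments", "key.json")
print(os.environ["GOOGLE_APPLICATION_CREDENTIALS"])
```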

Download OpenCV 3.4 for image annotation

Download link

Download Object Detection API

You must have protoc installed locally. Follow this link to install protoc-3.15.8-win64. The protoc command should then work; if it does not, verify that the binary is on your PATH (add the bin/ folder to your PATH via the environment variables dialog in Windows).

Then, execute these commands:

git clone https://github.com/tensorflow/models.git tensorflow_models
cd tensorflow_models/research/
protoc object_detection/protos/*.proto --python_out=.
cp object_detection/packages/tf2/setup.py .
python -m pip install -U .

If the last command exits with an error, try running python -m pip install --user . instead.

You can then run python object_detection/builders/model_builder_tf2_test.py to verify that the installation works.

Copy the model (downloaded from the Google Drive folder) into a new models/ folder at the root of the project. It should look like this:

models/
└───all_boxes_model_40_batches/
    ├───assets/
    ├───variables/
    ├───checkpoint
    ├───ckpt-1-1.data-00000-of-00001
    ├───ckpt-1-1.index
    ├───saved_model.pb
    └───ssd_resnet50_v1_fpn_640x640_coco17_tpu-8.config
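A quick sanity check that the files were copied correctly could look like this (a sketch only; the paths are assumed from the layout above):

```python
from pathlib import Path

# Verify that the key files from the layout above are present.
model_dir = Path("models/all_boxes_model_40_batches")
expected = ["saved_model.pb", "checkpoint", "variables", "assets"]
missing = [name for name in expected if not (model_dir / name).exists()]
print("missing:", missing or "nothing - layout looks good")
```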

Then, you can run python src/players_recognition/main.py to visualize the player detection loop.

If you get errors caused by matplotlib or OpenCV, uninstall and reinstall the packages:

pip uninstall opencv-python matplotlib
pip install opencv-python matplotlib

Download the Text Summarization model

Copy the model (downloaded from the Google Drive folder) into a new models/ folder at the root of the project. It should look like this:

models/
└───T5_summarization_model/
    ├───config.json
    └───pytorch_model.bin

This model requires the torch and transformers[sentencepiece] packages.

Add dependencies to make the bot speak

First, install the Python dependencies with pip install -r requirements.txt (if not already done).

Then, ffmpeg must be installed; run ffmpeg in a shell to verify. If it is not installed, you can download ffmpeg for Windows and add it to your PATH.

Finally, add the following secrets to a .env file at the root of the project:

DISCORD_TOKEN = <DISCORD_TOKEN>
GUILD = <GUILD>
CHANNEL = <CHANNEL>
TEXT_CHANNEL_ID = <TEXT_CHANNEL_ID>
VOICE_CHANNEL_ID = <VOICE_CHANNEL_ID>
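Secrets in this format are typically read with a dotenv-style loader; a minimal stdlib sketch of that pattern (not the repository's actual loading code) might look like:

```python
import os

def load_env(path=".env"):
    """Minimal .env parser (sketch): loads KEY = value lines into os.environ."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blanks, comments, and malformed lines.
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # setdefault so real environment variables take precedence.
            os.environ.setdefault(key.strip(), value.strip())
```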

Then you can instantiate the bot and make it say something. It will connect to the Discord server/guild, join the "among" channel, and speak.

from src.bot import Bot
bot = Bot()
bot.discord_bot.say("something")
