You can find a companion project for this repo - an autoregressive neural environment for training an RL agent, named the most hardcore completed project by the OpenDataScience PetProject Hackathon organizers (05.02.2022-20.02.2022)
We present our machine learning bot (currently in alpha testing), which can play the game surviv.io.
This bot tackles only the locomotion problem: it is trained on human gameplay and processes each incoming frame with deep learning algorithms. To train the agent, we used 100 YouTube videos containing 1.2 million frames (~12 hours of gameplay recordings). Anyone can run the bot on their device (see our installation guides below).
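The training setup described above can be sketched as a behavioral-cloning step in PyTorch: predict the human player's movement action from a video frame. The frame size, network layout, and action count below are assumptions for illustration, not the project's actual architecture:

```python
import torch
import torch.nn as nn

# Assumed for illustration: 64x64 RGB frames, 9 discrete movement
# actions (8 directions + idle). The real model may differ.
N_ACTIONS = 9

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(32 * 13 * 13, N_ACTIONS),  # logits over movement actions
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

def train_step(frames: torch.Tensor, actions: torch.Tensor) -> float:
    """One behavioral-cloning step: imitate the human's labeled action."""
    logits = model(frames)
    loss = loss_fn(logits, actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Dummy batch standing in for (video frame, labeled human action) pairs
frames = torch.rand(8, 3, 64, 64)
actions = torch.randint(0, N_ACTIONS, (8,))
loss = train_step(frames, actions)
```

Each real training pair would come from a YouTube frame and the movement action inferred from the recorded gameplay.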
- our bot can approach boxes with loot
- our bot knows how to avoid the red zone
- our bot tries to get out of the red zone if the zone has caught it
- our bot starts moving chaotically when enemies shoot at it
- our bot likes to route its path through the bushes
- improve the agent's movement
- train the agent to interact properly with loot and the cursor, shoot, and use helpful items
- optimize the agent's control architecture
- delve deeper into RL algorithms
The goal is to develop a bot that is interesting to watch. Its behavior should not differ from that of a human player in similar situations. Our research will help raise the level of AI in games, make games more interesting, and make in-game bots act more like real players.
We assume that if people enjoy watching other gamers (professional or not) on Twitch, they will be interested in watching our agent as well.
- offline reinforcement learning
- python3
- selenium (executing the agent's actions in the game environment)
- openCV (screenshot processing)
- torch (action selection)
- mss (taking screenshots)
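To show how these pieces fit together, here is a minimal sketch of the capture-infer-act loop: mss grabs a screenshot, the frame is preprocessed into a tensor, torch picks an action, and selenium would send it to the browser. The action names, the `send_key` helper, and the toy model are hypothetical stand-ins, and the mss/selenium parts are left as comments:

```python
import numpy as np
import torch

# Hypothetical action set; the real agent's action space may differ.
ACTIONS = ["up", "down", "left", "right", "idle"]

def preprocess(frame_bgr: np.ndarray) -> torch.Tensor:
    """Convert a raw H x W x 3 screenshot into a model-ready batch tensor."""
    tensor = torch.from_numpy(frame_bgr).float() / 255.0  # scale to [0, 1]
    tensor = tensor.permute(2, 0, 1)                      # HWC -> CHW
    return tensor.unsqueeze(0)                            # add batch dim

def select_action(model: torch.nn.Module, frame_bgr: np.ndarray) -> str:
    """Pick the highest-scoring action for the current frame."""
    with torch.no_grad():
        logits = model(preprocess(frame_bgr))
    return ACTIONS[int(logits.argmax(dim=1))]

# The live loop would look roughly like (sketch, not runnable here):
#   with mss.mss() as sct:
#       while True:
#           frame = np.array(sct.grab(monitor))[:, :, :3]  # mss screenshot
#           action = select_action(model, frame)
#           send_key(driver, action)  # hypothetical selenium keypress helper

# Toy stand-in model for demonstration: mean-pool the frame, score actions
model = torch.nn.Sequential(
    torch.nn.AdaptiveAvgPool2d(1),
    torch.nn.Flatten(),
    torch.nn.Linear(3, len(ACTIONS)),
)
frame = np.random.randint(0, 255, (64, 64, 3), dtype=np.uint8)
action = select_action(model, frame)
```

Keeping screenshot capture, preprocessing, and action execution in separate functions makes each stage easy to swap out or test in isolation.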
Ubuntu/MacOS
1. Clone the GitHub repository
git clone https://github.com/Laggg/ml-bots-surviv.io
2. Download supporting files
Download the model weights from here and a chromedriver that matches your Chrome version from here (unzip it if needed).
Place both files in the ./supporting_files/
folder.
3. Create python virtual environment and install requirements.txt
cd ml-bots-surviv.io
python -m venv surviv_env
source surviv_env/bin/activate
pip install -r requirements.txt
possible issues:
Issue: Could not build wheels for opencv-python which use PEP 517 and cannot be installed directly
Solution: `pip install --upgrade pip setuptools wheel`
4. Run the agent
python play.py
1. Activate python environment
source surviv_env/bin/activate
2. Run the agent
python play.py
Windows
1. Check that you have Anaconda3
with python3
2. Check that you have the Google Chrome browser
(our agent supports only Chrome)
0. First, complete steps 1-2 from the "Before the first launch" paragraph
1. Clone the repo using Anaconda Prompt,
or download the zip file of the repo and unzip it
git clone https://github.com/Laggg/ml-bots-surviv.io.git
2. Download the neural net weights from the source and put them into the ./supporting_files/
folder
3. Download the driver for your OS and your Chrome version (don't forget to check your Google Chrome version!) from the link, unzip it, and put it into the ./supporting_files/
folder
After step 3 you can check the
./supporting_files/
folder:
4. Open Anaconda Prompt inside the repo folder
example:
5. Create a virtual environment for this project
python -m venv surviv_env
6. Activate the created virtual environment
cd surviv_env/scripts && activate && cd ../../
7. Install all required libraries
pip install -r requirements.txt
8. Launch the agent into the game!
python play.py
9. After that you can deactivate the virtual env and close the Anaconda Prompt
window
0. First, complete steps 1-9 from the "For the first launch" paragraph
1. Open Anaconda Prompt
inside the repo folder
2. cd surviv_env/scripts && activate && cd ../../
3. python play.py
4. After that you can deactivate the virtual env and close the Anaconda Prompt
window