New Human-like Play and Analysis
This is not the latest release - see v1.15.3 for various bugfixes and use the code and/or executables there rather than here.
But stay on this page and read on below for info about human-like play and analysis introduced in v1.15.x!
If you're a new user, this section has tips for getting started and basic usage! If you don't know which version to choose (OpenCL, CUDA, TensorRT, Eigen, Eigen AVX2), see here. Also, download the latest neural nets to use with this engine release at https://katagotraining.org/.
KataGo is continuing to improve at https://katagotraining.org/ and if you'd like to donate your spare GPU cycles and support it, it could use your help there!
As a reminder, for 9x9 boards, see here for a special neural net better than any other net on 9x9, which was used to generate the 9x9 opening books at katagobooks.org.
Available below are both the standard and "bs29" versions of KataGo. The "bs29" versions are just for fun, and don't support distributed training but DO support board sizes up to 29x29. They may also be slower and will use much more memory, even when only playing on 19x19, so use them only when you really want to try large boards.
The Linux executables were compiled on an Ubuntu 20.04 machine. Some users have encountered libzip or other library compatibility issues in the past. If you have this issue, you may be able to work around it by compiling from source, which is usually not so hard on Linux, see the "TLDR" instructions for Linux here.
Known issues (fixed in v1.15.1)
- Analysis engine erroneously reports an error when sending a `query_version` action.
New Human-trained Model
This release adds a new human supervised learning ("Human SL") model trained on a large number of human games to predict human moves across players of different ranks and time periods! Not much experimentation with it has been done yet and there is probably low-hanging fruit on ways to use and visualize it, open for interested devs and enthusiasts to try.
Download the model linked here or listed in the downloads below, `b18c384nbt-humanv0.bin.gz`. Casual users should NOT download `b18c384nbt-humanv0.ckpt` - this is an alternate format for devs interested in the raw PyTorch checkpoint, for experimentation or for finetuning using the python scripts.
Basic usage:
```
./katago.exe gtp -model <your favorite usual model for KataGo>.bin.gz -human-model b18c384nbt-humanv0.bin.gz -config gtp_human5k_example.cfg
```
The human model is passed in as an extra model via `-human-model`. It is NOT a replacement for the default model (although it can be, if you know what you are doing - see the config and the Human SL analysis guide for more details).
Additionally, you need a config specifically designed to use it. The `gtp_human5k_example.cfg` config configures KataGo to imitate 5-kyu-level players. You can change it to imitate other ranks too, as well as to do many more things, including making KataGo play in a human style but still at a strong level, or analyze in interesting ways. Read the config file itself for documentation on some of these possibilities!
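For a quick picture of what such a config controls, here is an illustrative fragment (the parameter name `humanSLProfile` and the value `rank_5k` below are assumptions drawn from the example configs of this release; the file itself documents the authoritative options and values):

```
# Which kind of player the human SL model should imitate; illustrative only.
# Other profiles select other ranks or eras of play.
humanSLProfile = rank_5k
```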
And for advanced users or devs, see also this guide to using the human SL model, which is written from the perspective of the JSON-based Analysis Engine but is applicable to GTP as well.
Human SL analysis guide
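For a concrete sense of what an Analysis Engine query using the human model might look like, here is a minimal Python sketch. It only builds and prints the JSON line; the `humanSLProfile` override name and its `rank_5k` value are assumptions based on the example configs, so consult the linked guide for the authoritative field names.

```python
import json

# Sketch of a JSON query for KataGo's analysis engine, with the human SL
# model's imitation profile set via overrideSettings. The "humanSLProfile"
# key and "rank_5k" value are assumptions; see the Human SL analysis guide.
query = {
    "id": "example1",
    "moves": [["B", "Q16"], ["W", "D4"]],
    "rules": "japanese",
    "komi": 6.5,
    "boardXSize": 19,
    "boardYSize": 19,
    "analyzeTurns": [2],
    "maxVisits": 100,
    "overrideSettings": {
        "humanSLProfile": "rank_5k",  # assumed name for 5-kyu imitation
    },
}

# The engine reads one JSON object per line on stdin:
line = json.dumps(query)
print(line)
```

In a real session you would launch `katago analysis` with `-model`, `-human-model`, and an analysis config, write `line` (newline-terminated) to its stdin, and read one JSON response per query from its stdout.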
Pretty Pictures
Just to show off how the model has learned how differently ranked players might play, here are example screenshots from a less-trained version of the Human SL model from a debug visualization during development. When guessing what 20 kyu players are likely to play, Black's move is to simply follow White, attaching at J17:
At 1 dan, the model guesses that players are likely to play the tiger mouth spoil or wedge at H17/H16, showing an awareness of local good shape, as well as some likelihood of various pokes at White's loose shape:
At 9 dan, the model guesses that the most likely move is to strike the very specific weak point at G14, which analysis confirms is one of the best moves.
As usual, since this is a raw neural net without any search, its predictions are most analogous to a top player's "first instinct with no reading" and at high dan levels won't be as accurate in guessing what such players, with the ability to read sharply, would likely play.
Another user/dev in the Computer Go discord shared this interesting visualization, where the size of the square is based on the total probability mass of the move summed across all player ranks, and the color and label are the average rank of player that the model predicts playing that move:
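The statistic behind that visualization can be sketched in a few lines. The per-rank policy dict below is entirely made up for illustration (real predictions would come from querying the net once per rank profile); ranks are encoded as signed integers, e.g. -5 for 5 kyu and +9 for 9 dan.

```python
# Hypothetical per-rank policy: rank -> {move: probability}.
policy_by_rank = {
    -20: {"J17": 0.55, "H17": 0.05},
    -5:  {"J17": 0.30, "H17": 0.25},
    1:   {"H17": 0.45, "G14": 0.10},
    9:   {"G14": 0.60, "H17": 0.15},
}

# Total probability mass per move (square size) and the probability-weighted
# average rank per move (color/label).
total_mass = {}
weighted_rank = {}
for rank, policy in policy_by_rank.items():
    for move, prob in policy.items():
        total_mass[move] = total_mass.get(move, 0.0) + prob
        weighted_rank[move] = weighted_rank.get(move, 0.0) + rank * prob

avg_rank = {move: weighted_rank[move] / total_mass[move] for move in total_mass}
```

With this toy data, J17 ends up with a deep-kyu average rank while G14 skews strongly toward dan players, matching the intuition from the screenshots above.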
Hopefully some of these inspire possibilities for game review and analysis in GUIs or tools downstream of the baseline functionality added by KataGo. If you have a cool idea for experimenting with these kinds of predictions and stats, or think of useful ways to visualize them, feel free to try it!
Other Changes This Release
GTP and Analysis Engine changes
(Updated GTP doc, Updated Analysis Engine Doc)
- Various changes to both GTP and Analysis Engine to support the human SL model, see docs.
- GTP `version` command now reports information about the neural net(s) used, not just the KataGo executable version.
- GTP `kata-set-param` now supports changing the large majority of search parameters dynamically, instead of only a few.
- GTP `kata-analyze` command now supports a new `rootInfo` property for reporting root node stats.
- GTP added `resignMinMovesPerBoardArea` as a way to prevent early resignation.
- GTP added `delayMoveScale` and `delayMoveMax` as a way to add a randomized delay to moves, so that the bot does not respond instantly to players. The delay will be shorter on average for "obvious" moves, hopefully giving a more natural-feeling pacing.
- Analysis Engine now by default will report a warning in response to queries that contain unused fields, to help alert about typos.
- Analysis Engine now reports various raw neural net outputs in rootInfo.
- GTP and Analysis Engine both have changed "visits" to mean the child node visit count (i.e. the number of playouts that the child node after a move received) instead of the edge visit count (i.e. the number of playouts that the root MCTS formula "wanted" to invest in the move). The child visit count is more indicative of evaluation depth and quality. A new key "edgeVisits" has been added to report the original edge visit count, which is partly indicative of how much the search "likes" the move.
- These two values used to be almost identical in practical cases, although graph search could make them differ sometimes. With some humanSL config settings in this new version, they can now differ greatly.
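The `visits`/`edgeVisits` distinction matters when postprocessing analysis output. Here is a minimal sketch using a made-up `moveInfos` response, where the two counts are chosen to diverge the way they now can under some humanSL settings:

```python
# Made-up analysis response entries. "visits" is the child node's visit count
# (indicative of evaluation depth/quality); "edgeVisits" is how many playouts
# the root search formula "wanted" to invest in the move.
move_infos = [
    {"move": "Q16", "visits": 400, "edgeVisits": 420, "winrate": 0.52},
    {"move": "D4",  "visits": 150, "edgeVisits": 500, "winrate": 0.50},
]

# Moves the search "likes" most, ranked by edge visits:
by_preference = sorted(move_infos, key=lambda m: m["edgeVisits"], reverse=True)

# Moves whose evaluations are most reliable, ranked by child visits:
by_quality = sorted(move_infos, key=lambda m: m["visits"], reverse=True)

print(by_preference[0]["move"], by_quality[0]["move"])  # → D4 Q16
```

Downstream tools that weighted statistics by `visits` before this release may want to decide explicitly which of the two counts fits their use case.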
Misc improvements
- Better error handling in TensorRT, should catch more cases where there are issues querying the GPU hardware and avoid buggy or broken play.
Training Scripts Changes
- Many changes and updates to training scripts to support human SL model training and architecture. Upgrade with caution if you are actively training things.
- Added an experimental sgf->training data command (`./katago writetrainingdata`) to KataGo's C++ side that was used to produce data for human SL net training. There is no particular documentation offered for this; run it with `-help` and/or be prepared to read and understand the source code.
- Configs for new models now default to model version 15, with a slightly different pass output head architecture.
- Many minor bugfixes and slight tweaks to training scripts.
- Added option to gatekeeper to configure the required winning proportion.