# AlphaZero.jl
This package provides a generic, simple, and fast implementation of DeepMind's AlphaZero algorithm:
- The core algorithm is only 2,000 lines of pure, hackable Julia code.
- Generic interfaces make it easy to add support for new games or new learning frameworks (see the interface sketch after this list).
- Being between one and two orders of magnitude faster than competing alternatives written in Python, this implementation makes it possible to solve nontrivial games on a standard desktop computer with a GPU.
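To give a flavor of what such an interface involves, here is a minimal sketch of the kind of functions a game implementation must provide. The names below are illustrative assumptions, not AlphaZero.jl's actual API; see the package documentation for the real interface.

```julia
# Hypothetical game-interface sketch (not AlphaZero.jl's actual API).
# A new game is supported by implementing a small set of generic functions:

function initial_state end        # the starting position of the game
function available_actions end    # legal actions from a given state
function next_state end           # position reached after playing an action
function is_terminal end          # whether the game is over
function white_reward end        # final outcome, from the first player's view
function vectorize_state end     # encode a state as neural-network input
```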
## Why should I care about AlphaZero?
Beyond its much publicized success in attaining superhuman level at games such as Chess and Go, DeepMind's AlphaZero algorithm illustrates a more general methodology of combining learning and search to explore large combinatorial spaces effectively. We believe that this methodology can have exciting applications in many different research areas.
## Why should I care about this implementation?
Because AlphaZero is resource-hungry, successful open-source implementations (such as Leela Zero) are written in low-level languages (such as C++) and optimized for highly distributed computing environments. This makes them largely inaccessible to students, researchers, and hackers.
The motivation for this project is to provide an implementation of AlphaZero that is simple enough to be widely accessible, while also being sufficiently powerful and fast to enable meaningful experiments on limited computing resources. We found the Julia language to be instrumental in achieving this goal.
## Training a Connect Four Agent
To download AlphaZero.jl and start training a Connect Four agent, just run:
```sh
git clone https://github.com/jonathan-laurent/AlphaZero.jl.git
cd AlphaZero.jl
julia --project -e "import Pkg; Pkg.instantiate()"
julia --project --color=yes scripts/alphazero.jl --game connect-four train
```
Each training iteration takes between one and two hours on a desktop computer with an Intel Core i5 9600K processor and an 8GB Nvidia RTX 2070 GPU. We plot below the evolution of the win rate of our AlphaZero agent against two baselines (a vanilla MCTS baseline and a minmax agent that plans at depth 5 using a handcrafted heuristic):
Note that the AlphaZero agent is not exposed to the baselines during training and learns purely from self-play, without any form of supervision or prior knowledge.
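As a point of reference, the depth-5 minmax baseline mentioned above can be understood from a minimal sketch like the one below. It reuses the hypothetical game functions from the interface sketch earlier, plus a placeholder `heuristic` evaluation; none of these names come from AlphaZero.jl itself.

```julia
# Illustrative sketch of a fixed-depth minmax baseline with a handcrafted
# heuristic. `available_actions`, `next_state`, `is_terminal`, and
# `heuristic` are hypothetical placeholders for a game implementation.

function minmax_value(state, depth; maximizing=true)
    if depth == 0 || is_terminal(state)
        return heuristic(state)  # handcrafted evaluation of the position
    end
    values = [minmax_value(next_state(state, a), depth - 1; maximizing=!maximizing)
              for a in available_actions(state)]
    return maximizing ? maximum(values) : minimum(values)
end

# Plan at a fixed depth (5 for the baseline above) and play the action
# with the best backed-up value.
function minmax_action(state; depth=5)
    actions = available_actions(state)
    scores = [minmax_value(next_state(state, a), depth - 1; maximizing=false)
              for a in actions]
    return actions[argmax(scores)]
end
```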
We also evaluate the performance of the neural network alone against the same baselines. Instead of plugging it into MCTS, we play the action that is assigned the highest prior probability at each state:
Unsurprisingly, the network alone is initially unable to win a single game. However, it ends up significantly stronger than the minmax baseline despite not being able to perform any search.
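Concretely, playing the raw network amounts to taking an argmax over its policy output. Here is a minimal sketch, again using the hypothetical `available_actions` helper and a `policy_network` placeholder standing in for the trained policy head:

```julia
# Illustrative sketch: evaluating the raw network by greedily playing its
# highest-prior action, with no search. `policy_network` and
# `available_actions` are hypothetical placeholders.

function greedy_action(policy_network, state)
    actions = available_actions(state)
    priors = policy_network(state)  # one prior probability per legal action
    return actions[argmax(priors)]  # trust the priors; no tree search
end
```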
For more information on training a Connect Four agent using AlphaZero.jl, see our full tutorial.
## Resources
- Documentation Home
- An Introduction to AlphaZero
- Package Overview
- Connect-Four Tutorial
- Hyperparameters Documentation
## Contributing
Contributions to AlphaZero.jl are most welcome. Many contribution ideas are available in our contribution guide. Please do not hesitate to open a GitHub issue to share any idea, feedback, or suggestion.
## Acknowledgements
This material is based upon work supported by the United States Air Force and DARPA under Contract No. FA8750-18-C-0092. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the United States Air Force and DARPA.