


17-04-2019 - Lc0 wins Computer Chess Championship


Lc0 neural chess engine



The machine-learning chess engine Lc0 won the Computer Chess Championship last weekend, making history as the first neural-network project to take the title. Lc0, which taught itself how to play chess, is now at the game's pinnacle as the champion computer engine.

Could this be a decisive moment in the story of computer chess, when the neural networks leaped past traditional chess engines on their way to dominance? Only time will tell, but Lc0 put on quite a show in winning CCC 7: Blitz Bonanza with a score of 167.5/300 in the finals.

Lc0 placed ahead of runner-up Stockfish (162/300) in the blitz finals, the first time in eight Computer Chess Championships that Stockfish didn't win the tournament (CCC 1-7 and the previous-format event in 2017).

The four-engine finals were packed with neural networks, as Lc0 variants Leelenstein (144/300) and Antifish (126.5/300) placed third and fourth. Computer Chess Championship rules were updated after the tournament to allow no more than two finalists that share a significant code base.

The open-source Lc0 project was inspired by Google/DeepMind's AlphaZero, a neural-network engine that made headlines by beating previous versions of Stockfish in private matches.

Lc0 also won its mini-match against Stockfish in the finals, +10 -8 =82, which is the first time that Stockfish has ever lost a head-to-head matchup with any engine in Computer Chess Championship history.
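As a quick sanity check on the mini-match result: in standard chess scoring a win counts 1 point and a draw half a point, so the +10 -8 =82 score line works out as follows (a trivial sketch, with the win/loss/draw counts taken from the result above):

```python
# Mini-match result: Lc0 +10 -8 =82 vs. Stockfish (100 games).
wins, losses, draws = 10, 8, 82

lc0_score = wins + 0.5 * draws         # 10 + 41 = 51.0
stockfish_score = losses + 0.5 * draws  # 8 + 41 = 49.0

print(f"Lc0 {lc0_score} - Stockfish {stockfish_score}")
```

A two-point margin over 100 blitz games, almost all of it from draws, which is why such head-to-head matches at the top level are so close.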

Can Lc0 defend its title in a longer time control? CCC 8: Deep Dive is live now, featuring 24 of the world's top chess engines playing at a rapid time control of 15 minutes plus a five-second increment per move.

The CCC 8 field returns the same 24 engines from CCC 7.

Stage one of CCC 8 is a 24-player round-robin, where each engine will play every other engine two times as White and two times as Black. The top four engines from stage one will advance to a 50x round-robin final stage.


CCC 8 Deep Dive Contestants


CCC 8: Deep Dive Information:

•Engines: 24
•Time control: 15 minutes + 5-second increment
•Format: 4x round-robin, escalation order (each engine plays every other engine two times as White and two times as Black) in stage one
•Games: 1,404 total; 1,104 in stage one and 300 in the finals
•Start date: April 13
•Expected duration: 42 days total; 33 days for stage one (~May 16), 9 days for finals (~May 25)
•Opening book: None in stage one, book chosen by tournament staff in finals
•Endgame tablebases: On
•Top four engines in stage one advance to 50x round-robin finals (No more than two finalists can share code; any qualifying engine that shares code after the first two will be disqualified and the next-highest-scoring engine will take its place.)

CCC 8: Deep Dive Contestants and Escalation Order:

1. Lc0*
2. Stockfish
3. Fizbo
4. Komodo
5. Laser
6. Shredder
7. Leelenstein*
8. BlackMamba
9. Schooner
10. Fire
11. Xiphos
12. Andscacs
13. Antifish*
14. Rofchade
15. Arasan
16. Houdini
17. Protector
18. Senpai
19. Allie*
20. Wasp
21. Texel
22. Bobcat
23. Komodo Monte Carlo
24. Ethereal

*Denotes a neural-network engine.



What precisely is Lc0?


Leela Chess Zero (LCZero, lc0) is an adaptation of Gian-Carlo Pascutto's Leela Zero Go project to chess, initiated and announced by Stockfish co-author Gary Linscott, who was already responsible for the Stockfish testing framework Fishtest. Leela Chess is open source, released under the terms of GPL version 3 or later, and supports UCI. The goal is to build a strong chess-playing entity following the same deep-learning and Monte-Carlo tree search (MCTS) techniques as AlphaZero, as described in DeepMind's 2017 and 2018 papers, but using distributed training for the weights of the deep convolutional neural network (CNN).

The program consists of an executable to play or analyze games, initially dubbed LCZero, soon rewritten by a team around Alexander Lyashuk for better performance and then called Lc0. This executable, the actual chess engine, performs the MCTS and reads the self-taught CNN, whose weights are stored in a separate file. Lc0 is written in C++14 and may be compiled for various platforms and backends. Since deep CNNs are best suited to run massively in parallel on GPUs, which perform the floating-point dot products for thousands of neurons, the preferred target platforms are Nvidia GPUs supporting the CUDA and cuDNN libraries.

Like AlphaZero, Lc0 evaluates positions using non-linear function approximation based on a deep neural network, rather than the linear function approximation used in classical chess programs. The neural network takes the board position as input and outputs a position evaluation (QValue) and a vector of move probabilities (PValue, policy). Once trained, this network is combined with a Monte-Carlo Tree Search (MCTS), using the policy to narrow the search down to high-probability moves and using the value to evaluate positions in the tree. The MCTS selection is done by a variation of Rosin's UCT improvement dubbed PUCT (Predictor + UCT).
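The PUCT selection described above can be sketched in a few lines. This is a simplified illustration of the general formula, not Lc0's actual implementation (which adds further refinements); the function names, the tuple layout, and the `c_puct` constant are all assumptions chosen for clarity:

```python
import math

def puct_score(q_value, prior, parent_visits, child_visits, c_puct=1.5):
    """PUCT score: exploitation (Q) plus an exploration bonus
    weighted by the network's policy prior (P).

    Rarely visited moves with a high prior get a large bonus,
    which shrinks as their visit count grows.
    """
    exploration = c_puct * prior * math.sqrt(parent_visits) / (1 + child_visits)
    return q_value + exploration

def select_child(children, parent_visits, c_puct=1.5):
    """Return the index of the child with the highest PUCT score.

    `children` is a list of (q_value, prior, visit_count) tuples,
    one per legal move -- a stand-in for the engine's node tree.
    """
    return max(
        range(len(children)),
        key=lambda i: puct_score(children[i][0], children[i][1],
                                 parent_visits, children[i][2], c_puct),
    )
```

For example, an unvisited move with a strong policy prior can outscore an already well-explored move with a modest evaluation, which is exactly how the policy head steers the search toward promising lines.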

I was pretty doubtful about this project until this result. The drawback of the program is that it requires an enormous amount of resources to play at its best, for example a strong processor and a pair of GeForce 1080 Ti GPUs; otherwise its play falls apart. Why a chess engine would rely on the graphics card at all is still not entirely clear to me.

Lc0 website:

Lc0 GitHub page:

Lc0 FB page:
