
Updated August 16, 2020.

 

Benchmarks:

 

Stockfish normal vs Stockfish NNUE:

I must say I was shocked by the gap between the NNUE version (running the current best of Sergio's networks as of 08-17-2020: 20200720-1017.bin) and the normal Stockfish. Before anything else I must specify that these two engines were compiled by me: one using the normal, standard alpha-beta search and evaluation, the other using the NNUE neural-network evaluation on top of (I assume) the same alpha-beta search. In other words: the same engine, but with different default NNUE settings.
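As a hedged illustration (not the author's actual setup): in Stockfish-NNUE builds of that period, switching between classical and NNUE evaluation came down to a couple of UCI options, "Use NNUE" and "EvalFile". The helper below is hypothetical and just builds the UCI command lines one would send to the engine:

```python
def uci_eval_setup(use_nnue: bool, eval_file: str = "20200720-1017.bin") -> list:
    """Return the UCI commands that select the evaluation backend
    (option names as used by mid-2020 Stockfish-NNUE builds)."""
    cmds = ["uci", "setoption name Use NNUE value " + ("true" if use_nnue else "false")]
    if use_nnue:
        # Point the engine at the network file sitting next to the binary.
        cmds.append("setoption name EvalFile value " + eval_file)
    cmds.append("isready")
    return cmds

print("\n".join(uci_eval_setup(True)))
```

With `use_nnue=False` the same build falls back to the classical hand-crafted evaluation, which is what makes this kind of like-for-like comparison possible.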

The difference was noticeable both in fast games and at 5 minutes per game, which is unusual for engines based on neural networks (they often lose on time in the endgame).

The reasons behind this leap by the NNUE engines are unknown to me.

I tried them about 10 days ago on Playchess with the Goi book and they played terribly, with blunders like 0.00 suddenly becoming +2.50 (me having Black) or 0.00 becoming -1.97 (me having White), so I thought it was just another way to attract attention by some major chess site, as in the case of Lc0 (which underperformed on Playchess, playing very badly both in blitz games and in analysis). But in the meantime something has apparently been fixed at the Stockfish house, and the use of the neural network has resulted in an astonishing jump in strength. ... Just when you thought the opening lines couldn't improve that much anymore...

The book I used is the 1-move dummy.ctg, based on move #1 of the three main White openings played by the Goi book. You'll find dummy.ctg in the games package (be aware that it is just a 1-move opening book, not any sort of Goi). I didn't play many games because the difference was so large it was already evident.

 

Stockfish Beta normal vs Stockfish Beta NNUE test, 1 minute.

 

Stockfish Beta normal vs Stockfish Beta NNUE test, 2 minutes.

 

Stockfish Beta normal vs Stockfish Beta NNUE test, 5 minutes.

 

Games + dummy.ctg archive.

 

 

Goi 6.2.3 ABK vs Perfect 2019 ABK:

Time per game: 3 minutes
Processor: Intel i7-4771, 4 threads for each engine
Hash table size: 1024 MB
Permanent brain (ponder): OFF
Interface used: Arena 3.5.1
Engine used: BrainFish 240720 64 BMI2

 

-----------------Goi 6.2.3-----------------
Goi 6.2.3 - Perfect2019 : 28.5/50 12 wins - 5 losses - 35 draws 57% +35 Elo
-----------------Perfect2019-----------------
Perfect2019 - Goi 6.2.3 : 21.5/50 5 wins - 12 losses - 35 draws 43% -35 Elo
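For reference, a 57% score can be converted to an Elo difference with the standard logistic model; Arena's reported +35 presumably comes from its own estimator, while the textbook formula below gives a somewhat larger figure. This is a minimal sketch of that formula, not how Arena computes its number:

```python
import math

def elo_diff(score: float) -> float:
    """Elo difference implied by an average score (0 < score < 1),
    from the logistic expected-score model E = 1 / (1 + 10^(-D/400))."""
    return -400.0 * math.log10(1.0 / score - 1.0)

# Goi 6.2.3 scored 28.5/50 = 57% against Perfect 2019.
print(round(elo_diff(28.5 / 50), 1))  # prints 49.0
```

The gap between +35 and roughly +49 is not alarming: with only 50 games and 35 draws, any Elo estimate carries a wide error bar, and tools differ in how they handle draws and confidence intervals.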

Download the Arena files of the tournament here.

 

 

Engines tournament July 2020:

Time per game: 3 minutes
Processor: Intel i7-4771, 4 threads for each engine
Hash table size: 1024 MB
Permanent brain (ponder): OFF (yes, I know I always say to set it ON, but for once I wanted to give the engines all the processor's strength)
Interface used: Fritz 17 with 6-men Syzygy tablebases.

 

Engines tournament 09-07-2020, blitz 3 minutes.

1. Brainfish 050720 64 BMI2            39.0/75
2. Stockfish Polyglot 090720 64 BMI2   38.0/75
3. BrainLearn 9.1 64 BMI2              37.5/75 (tiebreak 1405.25)
4. ShashChess 12.1 64 BMI2             37.5/75 (tiebreak 1404.00)
5. Cfish 080720 64 BMI2                37.5/75 (tiebreak 1403.00)
6. CorChess 6.0 090720 64 BMI2         35.5/75

Download the ChessBase and PGN archives of the tournament here.

 

This is a surprising tournament because of two factors:

1. the amazing performance of BrainLearn
2. the very poor performance of Raubfisch

Stockfish Polyglot turned in a mediocre performance, placing between CorChess and Brainfish.

If we take a look at the table, we can see that BrainLearn started winning as the games went on: this could mean that its learning algorithms really are working; most of its wins came against Raubfisch.
Raubfisch, on the other hand, performed very poorly, maybe because of a bug...

I want to point out that ALL the engines were used with their default values, except for the 4 threads given to each of them.

Since I only host the strongest chess programs, I will reintroduce BrainLearn into my ranks.

My compliments go to Thomas Zipproth, author of Brainfish.