
Last update: September 16, 2021.

 

Benchmarks:

 


16-09-21: Engines tournament - September 2021:


- CPU: Intel i7-4771
- RAM: RipJaws DDR3 32 GB
- Hard disk: SSD 512GB EVO

- Video card: Nvidia GeForce 780 Ti OC

- Operating system: Windows 10 Pro May 2021
- Program: Fritz 17

- No opening books nor opening set
- Syzygy 6-man endgame tablebases

- Hashtable (not shared) size for each engine: 512 MB

- Time per game: 5 minutes.

Permanent brain (ponder): ON.
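For readers who want to reproduce a similar setup without a commercial GUI, here is a rough equivalent using the open-source cutechess-cli tool instead of Fritz 17. The engine paths, names, and the Syzygy directory are placeholders, and the exact flag set varies between cutechess-cli versions, so check your version's documentation:

```shell
# Sketch of an equivalent match under cutechess-cli (NOT the Fritz 17 setup
# actually used above). Engine commands and the tablebase path are placeholders.
cutechess-cli \
  -engine cmd=./honey name=Honey \
  -engine cmd=./stockfish name=Stockfish \
  -each proto=uci tc=300 option.Hash=512 ponder \
  -tb /path/to/syzygy \
  -rounds 50 -repeat \
  -pgnout tournament-sept-2021.pgn
```

Here `tc=300` is 300 seconds per game with no increment, `option.Hash=512` mirrors the 512 MB non-shared hashtables, and `ponder` enables permanent brain for engines that support it.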

 

Engines tournament September 2021


Download the games here in ChessBase and PGN format.

 

I'm very surprised that a two-month-old Honey took first place! I'll definitely add it to the Download list.

 

 

---------------○---------------

 

 

11-09-21: Lc0 v0.28.0 vs Stockfish 070921 3+2:


Out of curiosity, since this latest Lc0 solved a couple of positions that Stockfish couldn't while I was correcting the Goi book, I decided to run a head-to-head match between Lc0 and the (then) latest Stockfish beta.

Lc0 lost miserably...

Mehmet told me that my video card could be the culprit (a GeForce 780 Ti OC instead of a more powerful GeForce 2060) but, once again, I don't see why I should host this engine here when it underperforms so badly on an average machine at a Fischer blitz time control with a 2-second increment per move. Lc0 makes mistakes even in the advanced middlegame:



Lc0 error
Lc0 plays 21...Kf8?! instead of 21...Nb4.

 

- CPU: Intel i7-4771
- RAM: RipJaws DDR3 32 GB
- Hard disk: SSD 512GB EVO

- Video card: Nvidia GeForce 780 Ti OC

- Operating system: Windows 10 Pro May 2021
- Program: Fritz 17

- No opening books nor opening set
- Syzygy 6-man endgame tablebases

- Engine 1: Stockfish 070921
- Threads: 4
- Hashtable size: 4096 MB

- Engine 2: Lc0 v0.28.0
- Threads: 2 (CPU + video card GPU)
- Hashtable size: 16384 MB

- Time per game: 3 minutes + 2 seconds per move.

Permanent brain (ponder): ON.

 

Lc0 v0.28.0 vs Stockfish 070921 3+2

Download the games here in ChessBase and PGN format.

 

 

---------------○---------------

 

 

(Books tournament by FANTASMITADEL50 - outskirts.altervista.org):


It was a tournament that lasted for months, interrupted at one point by a major hardware failure on FANTASMITADEL50's computer.

I was allowed to update the book only once, before the final round, but we couldn't access the games from the previous rounds to improve our work... I didn't agree with this, but whatever...

The Goi Competition Opening Book stayed near the top of the standings through all of the first three rounds. It won the final in a tight match against the other books.

In past years I was criticized because my books were supposedly a Cerebellum clone and not that strong; now I no longer take inspiration from Cerebellum, and I have proven that I can be the best at bookmaking anyway.

 

(Games after the table).

 

TABLES:


FINAL (ROUND 4):

Actualizacion-10-fase-4-final


ROUND 3:

Actualizacion-final-ronda-tres


ROUND 2:

Actualizacion-final-ronda-dos


ROUND 1:

Actualizacion-final-ronda-uno

 

Download the games here.

 

 

---------------○---------------

 

 


(Test by OliProg - outskirts.altervista.org) NimaTiv 140521 (Hurnavich) vs. Goi Update (Kramnik):


- CPU: AMD A10-9700, R7 10 Compute Cores 4C+6G, 3.50 GHz
- RAM: 8.00 GB
- Operating system: Windows 10 Pro, 64-bit (x64 processor)
- Hashtable size: 1024 MB
- Program: Fritz 16 GUI
- Permanent brain (ponder): OFF
- Opening books: NimaTiv 140521 vs. Goi Update
- Ending tablebases: Syzygy 3-4-5 pieces
- Both engines play with 4 threads and no large pages
- NNUE net: nn-62ef826d1a6d.nnue

Book Settings

NimaTiv 140521:

1. Tournament ON
2. Min games 0-100
3. Variety of play 140
4. Influence of learn 100
5. Learning strength 0

Goi Update:

1. Tournament ON
2. Min games 0-100
3. Variety of play 140
4. Influence of learn 100
5. Learning strength 0


DESKTOP-TORM2PL, Blitz 5 min + 1 sec

NimaTiv 140521 (Hurnavich) vs. Goi Update (Kramnik)


Download the games here in ChessBase and PGN format.

 

 

---------------○---------------

 

 


(Test by OliProg - outskirts.altervista.org) Goi Update (Kramnik) vs. Sedat Chess TEST games (Yanbu1 - Chess2U Forum's member):


- CPU: AMD A10-9700, R7 10 Compute Cores 4C+6G, 3.50 GHz
- RAM: 8.00 GB
- Operating system: Windows 10 Pro, 64-bit (x64 processor)
- Hashtable size: 1024 MB
- Program: Fritz 16 GUI
- Permanent brain (ponder): OFF
- Opening books: Goi Update vs. Sedat Chess TEST games
- Ending tablebases: Syzygy 3-4-5 pieces
- Both engines play with 4 threads and no large pages
- NNUE net: nn-62ef826d1a6d.nnue

Book Settings

Goi Update:

1. Tournament ON
2. Min games 0-100
3. Variety of play 140
4. Influence of learn 100
5. Learning strength 0

Sedat Chess TEST games:

1. Tournament ON
2. Min games 0-100
3. Variety of play 50
4. Influence of learn 100
5. Learning strength 100


DESKTOP-TORM2PL, Blitz 5 min + 1 sec

Goi Update (Kramnik) vs. Sedat Chess TEST games (Yanbu1 - Chess2U Forum's member)


Download the games here in ChessBase and PGN format.

 

 

---------------○---------------

 

 


The strongest Android engine so far on Blitz (3 minutes + 3 seconds):


CorChess 121820 wins JCER - Android Engines Tournament (Chess Engines Diary) 2021.01.17-18

CorChess wins the recent JCER tournament for Android engines. You can find details and games here.

 

 

---------------○---------------

 

 


Sugar AI 1.00 vs Stockfish Polyglot:


 

Participants:

- SugaR.AI.1.00.bmi2 (Marco Zerbinati compile)
- Stockfish Polyglot 010121, x64 BMI2

Every engine runs with default settings except for the number of threads; SugaR uses the experience file found in its source archive (170 MB).

Graphical Interface: Komodo 14.

Operating system: Windows 10 Professional.

Tournament settings:
2 threads per engine
Hashtable: 1 GB for each engine (not shared)
Syzygy 6-man endgame tablebases
Ponder on
Time per game: 1 minute + 1 second per move
No opening book.
NNUE activated on every engine.
80 games to play.

Hardware:
CPU: Intel i7-4771
Hard Disk: SSD Evo 512 GB
RAM: 32 GB DDR3 RipJaws

 

Results:


Sugar 1.00 vs Stockfish Polyglot 010121


Download the games here in ChessBase and PGN format.

 

 

---------------○---------------

 

 


Engines Tournament December 2020:


Participants:

- CF_EXT 081220 (Chessman compile)
- Cfish 091220
- CorChess NNUE 1.3 291120 (IIvec compile)
- BrainLearn 12.1 (amchess compile)
- CiChess 021220 (Chessman compile)
- Stockfish Polyglot 211120
- Raubfisch X44_nn_sl

Every engine runs with default settings except for the number of threads; BrainLearn has Self Q-Learn activated, using my 1.5 MB experience file.

Graphical Interface: Komodo 14.

Operating system: Windows 10 Professional.

Tournament settings:
3 threads per engine
Hashtable: 1 GB for each engine (not shared)
Syzygy 6-man endgame tablebases
Ponder off (I'll have to put up with this until I replace the old, worn-out hardware)
Time per game: 3 minutes
No opening book.
NNUE activated on every engine.
420 games to play.

Hardware:
CPU: Intel i7-4771
Hard Disk: SSD Evo 512 GB
RAM: 32 GB DDR3 RipJaws

 

Results:


Chess Engines Tournament December 2020

 

Download the games here in ChessBase and PGN format.

 

CF_EXT managed to recover towards the end and surprisingly reached first place! I was expecting CorChess to win... well, they finished on the same score, so I can call both of them winners. What the table suggests is that the more recently an engine has been updated, the stronger it is, mostly thanks to a better NNUE net, I guess. If I ran a 2,000-game tournament the differences would be more evident and, maybe, another engine could come out the winner.
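As a side note on why longer tournaments make differences clearer: the Elo gap implied by a match score, and the statistical noise around it, can be sketched with the standard logistic Elo model. This is a quick Python illustration of mine, not anything produced by the tournament GUI:

```python
import math

def elo_diff(score: float) -> float:
    """Elo difference implied by a match score (0 < score < 1)."""
    return -400.0 * math.log10(1.0 / score - 1.0)

def score_stderr(score: float, games: int) -> float:
    """Rough standard error of the score; shrinks with 1/sqrt(games)."""
    return math.sqrt(score * (1.0 - score) / games)

# A 55% score suggests roughly a 35 Elo edge...
print(round(elo_diff(0.55), 1))          # ~34.9
# ...but over 60 games the score itself is uncertain by about +/- 6.4%,
# while over 2000 games the uncertainty drops to about +/- 1.1%.
print(round(score_stderr(0.55, 60), 3))
print(round(score_stderr(0.55, 2000), 3))
```

So engines separated by only a few points in a 420-game event really are inside the noise, which is why the table can shuffle in a longer run.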

 

 

---------------○---------------

 

 

Stockfish normal vs Stockfish NNUE:

I must say I was shocked by the gap between the NNUE version (with the current, as of 08-17-2020, best Sergio network: 20200720-1017.bin) and the normal Stockfish. Before anything else, I must specify that these two engines were compiled by me: one uses the normal, standard alpha-beta search and evaluation, the other the NNUE neural-network evaluation on top of (I guess) the same alpha-beta search. In other words: the same engine, with NNUE enabled in one build and disabled in the other.
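For context, the NNUE test builds of that period exposed the network through ordinary UCI options, so switching evaluations was just a matter of engine configuration. A session looked roughly like this (option names as in the 2020 Stockfish-NNUE fork; other builds may name them differently):

```
uci
setoption name Use NNUE value true
setoption name EvalFile value 20200720-1017.bin
isready
```

Setting `Use NNUE` to `false` on the same binary gives the classical evaluation, which is what makes this a clean same-engine comparison.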

The difference has been noticeable both in fast games and at 5 minutes, which is unusual for engines based on neural networks (they often lose on time in the endgame).

The reason behind this jump in strength by the NNUE engine is unknown to me.

I had tried them on Playchess about 10 days ago with the Goi book, and they played terribly, with blunders like 0.00 suddenly turning into +2.50 (me having Black) or 0.00 into -1.97 (me having White), so I thought it was just another way for some major chess site to draw attention, as in the case of Lc0 (which underperforms on Playchess and plays very badly both in blitz games and in analysis). But in the meantime something has apparently been fixed at the Stockfish house, and the use of the neural network has resulted in an astonishing jump in strength. ... Just when you thought the opening lines couldn't improve that much any more...

The book I used is the 1-move dummy.ctg, based on move 1 of all three main White openings played by the Goi book. You'll find dummy.ctg in the games package (be aware that it is just a 1-move opening book, not any sort of Goi). I didn't play that many games because the difference was too large, and thus evident.

 

Stockfish Beta normal vs Stockfish Beta NNUE test, 1 minute.

 

Stockfish Beta normal vs Stockfish Beta NNUE test, 2 minutes.

 

Stockfish Beta normal vs Stockfish Beta NNUE test, 5 minutes.

 

Games + dummy.ctg archive.

 

 

---------------○---------------