
Since you’re here...

... we have a small favour to ask. More and more people like you are reading and supporting our blog, "Chess Engines Diary". And unlike many other sites and blogs, we made the choice to keep our articles open to all, regardless of where they live or what they can afford to pay.

We hope you will consider supporting us today. We need your support to continue to exist, because good entries take more and more time to prepare. Every reader contribution, however big or small, is valuable. Support "Chess Engines Diary" with even a small amount – it only takes a minute. Thank you.

============================== My email: jotes@go2.pl



New chess engine version: Renegade 1.1.0


Renegade – UCI chess engine. Author: Krisztian Peocz
Chess Engines Diary rating: CEDR = 3426

Renegade is a chess engine written in C++ using Visual Studio 2019. It values readability and simplicity, and uses the UCI protocol to communicate, making it easy to connect to chess frontends. It has been under development since October 7, 2022, and was released publicly on January 15, 2023.

The project can be compiled using Visual Studio 2019 with C++20 features enabled. The engine makes heavy use of the `popcnt` and `lzcnt` instructions, so only processors from around 2013 onwards are supported; however, the calls to these instructions are wrapped in a custom function, and replacing them with something more compatible is relatively straightforward. Currently only Windows binaries are compiled, but in the future I would like to provide Linux builds as well.
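The wrapping described above can be sketched like this — a minimal illustration, not Renegade's actual code; the function names and the fallback strategy are made up for this example:

```cpp
#include <cstdint>

// Hot-path intrinsics hidden behind custom functions, with a portable
// fallback for CPUs that lack POPCNT/LZCNT. Names are illustrative.
inline int Popcount(uint64_t bits) {
#if defined(__GNUC__) || defined(__clang__)
    return __builtin_popcountll(bits);  // compiles to popcnt when available
#else
    // Portable fallback: Kernighan's bit-clearing loop
    int count = 0;
    while (bits) { bits &= bits - 1; ++count; }
    return count;
#endif
}

inline int LeadingZeros(uint64_t bits) {
#if defined(__GNUC__) || defined(__clang__)
    return bits ? __builtin_clzll(bits) : 64;  // match lzcnt's result for 0
#else
    // Portable fallback: scan from the most significant bit down
    int count = 0;
    for (uint64_t mask = 1ULL << 63; mask != 0 && (bits & mask) == 0; mask >>= 1)
        ++count;
    return count;
#endif
}
```

Swapping the compiler-specific branch for the loop-based fallback is all it takes to run on older hardware, at some speed cost in move generation and evaluation.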

v.1.1.0:
It's been a while, but it's time for another release!

Renegade now features a bigger and much better neural network. The final iteration was trained on over 2.7 billion positions obtained from self-play of various development versions. FRC support was added, and the training dataset now includes a small portion of DFRC games as well.
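For readers curious what such a network looks like, here is a minimal sketch of a forward pass through the (768->1024)x2->1 shape listed in the changelog: 768 input features (12 piece types x 64 squares) per perspective feed a shared 1024-wide layer, and the two accumulators are combined into one output. This is a generic two-perspective sketch, not Renegade's implementation; the clipped-ReLU activation and unquantized float weights are assumptions.

```cpp
#include <algorithm>
#include <array>
#include <vector>

constexpr int kInputs = 768;   // 12 piece types x 64 squares, per perspective
constexpr int kHidden = 1024;

struct Network {
    // Shared feature transformer plus one output neuron over both
    // perspectives. Zero-initialized placeholders, not trained weights.
    std::array<std::array<float, kHidden>, kInputs> featureWeights{};
    std::array<float, kHidden> featureBias{};
    std::array<float, 2 * kHidden> outputWeights{};
    float outputBias = 0.0f;
};

float Evaluate(const Network& net,
               const std::vector<int>& stmFeatures,     // active features, side to move
               const std::vector<int>& nstmFeatures) {  // active features, other side
    // Accumulate the hidden layer for each perspective
    std::array<float, kHidden> accStm = net.featureBias;
    std::array<float, kHidden> accNstm = net.featureBias;
    for (int f : stmFeatures)
        for (int i = 0; i < kHidden; ++i) accStm[i] += net.featureWeights[f][i];
    for (int f : nstmFeatures)
        for (int i = 0; i < kHidden; ++i) accNstm[i] += net.featureWeights[f][i];

    // Combine both 1024-wide accumulators into the single output,
    // with a clipped-ReLU activation (a common choice, assumed here)
    float out = net.outputBias;
    for (int i = 0; i < kHidden; ++i) {
        out += net.outputWeights[i] * std::clamp(accStm[i], 0.0f, 1.0f);
        out += net.outputWeights[kHidden + i] * std::clamp(accNstm[i], 0.0f, 1.0f);
    }
    return out;
}
```

In a real engine the accumulators are updated incrementally as moves are made, and the weights are quantized to integers (QA = 255 per the changelog) rather than evaluated in floating point.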

A few search improvements have been made, but more notably, a large part of the code was rewritten now that I'm more experienced with engine development. This has little effect on playing strength, but it is good for my sanity, and for everyone who dares to look at the source code, though there is still plenty of work left to be done in this regard.

As always, huge thanks to everyone who ran tests and tournaments, and to those who helped me and gave feedback. I appreciate you all!

Changelog

- Neural net improvements
  - Regenerated the whole dataset, now amounting to over 2.78 billion positions
  - Increased architecture size, now (768->1024)x2->1
  - Using QA=255 quantization again, with an improved order of operations
  - Training with a tapered WDL proportion
- Search improvements
  - Added double extensions
  - Added multicut for non-PV nodes
  - Adjusting reductions in LMR based on cutoff count
  - Changed RFP margin calculations when improving
  - Reduced move counts before LMP kicks in when not improving
  - Reduced default aspiration window size
  - No longer performing futility pruning if the history score is high enough
- Added FRC (and DFRC) support
- Simplifications
  - No longer reducing less when giving check
  - Removed the fallback move ordering method used when the history score is 0
  - Removed additional en passant threat calculations when the previous move was a pawn double push
- Support for search parameter tuning
- Performance improvements
  - Using static move lists in move generation
  - Partial insertion sort for faster move ordering
  - Not recalculating the whole hash after a null move
  - More efficient calculation of threats
  - Prefetching for null-move pruning
  - Added a proper (but not great) way of checking
- Refactored code
  - Saner separation of files
  - Rewritten position handling
- Updated WDL models
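The "partial insertion sort" entry in the performance list is a common move-ordering trick: rather than sorting the entire move list up front, the search swaps the highest-scored remaining move to the front each time it needs the next one. Since a beta cutoff usually happens within the first few moves, most of the list is never sorted at all. A sketch with illustrative names, not Renegade's actual code:

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// A move paired with its ordering score (history, MVV-LVA, etc.)
struct ScoredMove {
    int move;
    int score;
};

// Place the best-scored move from [index, end) at position `index`,
// leaving the rest of the list untouched.
void PickNextBest(std::vector<ScoredMove>& moves, std::size_t index) {
    std::size_t best = index;
    for (std::size_t i = index + 1; i < moves.size(); ++i)
        if (moves[i].score > moves[best].score) best = i;
    std::swap(moves[index], moves[best]);
}
```

The search loop calls `PickNextBest(moves, i)` just before trying move `i`, so the cost is linear per move tried instead of an O(n log n) sort of moves that may never be searched.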
Progression testing (10s+0.1, Pohl.pgn):

Score of Renegade 1.1.0 vs Renegade 1.0.0: 520 - 87 - 393  [0.717] 1000
...      Renegade 1.1.0 playing White: 417 - 3 - 81  [0.913] 501
...      Renegade 1.1.0 playing Black: 103 - 84 - 312  [0.519] 499
...      White vs Black: 501 - 106 - 393  [0.698] 1000
Elo difference: 161.1 +/- 17.2, LOS: 100.0 %, DrawRatio: 39.3 %
You know the drill, expect less for balanced books and against other engines.
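As a sanity check on the figures above: the Elo difference follows from the match score via the standard logistic model, Elo = -400 * log10(1/s - 1), where s counts each draw as half a point. With s = (520 + 393/2) / 1000 ≈ 0.717, this lands right around the reported +161.

```cpp
#include <cmath>

// Convert a match score (0 < s < 1, draws counted as half a point)
// into an Elo difference under the logistic rating model.
double EloFromScore(double score) {
    return -400.0 * std::log10(1.0 / score - 1.0);
}
```

The +/- 17.2 confidence interval depends on the win/draw/loss split, which is why testing tools report it alongside the point estimate.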

The provided binaries are targeting x86-64-bmi2 (v3 microarchitecture level).

Renegade 1.0.0 results:
Engine                Score   +/-   Games
Counter 5.5           3/6     +0    6
Avalanche 2.1.0       2.5/4   +1    4
Midnight 9            2/4     +0    4
Starzix 4.0           2/4     +0    4
Halogen 11            2/3     +1    3
Obsidian 10.0         1.5/3   +0    3
LTChess 9.3           1.5/3   +0    3
Pawn 3.0              1.5/3   +0    3
Minic 3.41            1.5/3   +0    3
Arasan 24.1           1/3     -1    3
Texel 1.11            1/3     -1    3
Uralochka 3.41 dev1   1/3     -1    3
Pedantic 0.6.2        2/2     +2    2
Sloth 1.6             2/2     +2    2
Puffin 2.0            2/2     +2    2
Fatalii 0.6.0         2/2     +2    2
Lynx 1.2.0            2/2     +2    2
Shuffle 5.0.0         2/2     +2    2
Jet 1.1               2/2     +2    2
Leorik 3.0            1.5/2   +1    2

GitHub: https://github.com/pkrisz99/Renegade/releases/tag/v0.12.0

