Orion - UCI chess engine
Author - David Carteau, Rating CEDR=3084
Orion 1.0 is available!
I'm very happy to release a new version of my little engine Orion! Almost two years have passed since the last release, with a lot of trial and error, and significant progress made over the past few months. Be aware that the new version is weaker than the previous one and brings no new user features, so don't be too disappointed... But I'm really proud of it, and it will be a good base for future growth!
It includes:
a neural network trained "from zero", using only game results (1-0, 0-1, 1/2-1/2) and material as labels and targets;
quantization of weights and biases, resulting in a 40% increase in nps (nodes per second)!
a completely rewritten Cerebrum library, allowing anyone to reproduce my work and get exactly the same neural network from the same data;
a minor change, with a default Transposition Table size of 256 MB (previously 128 MB).
The "zero approach"
That was the objective, and it was really, really difficult to achieve. Training a neural network with such noisy labels was a real challenge. To give a hint of the difficulty, imagine that you have to evaluate a position near the start of the game (with, let's say, 30 or more pieces on the board), using only the fact that, at the end, one player - for example Black - won the game. The "signal" to catch is even weaker when you consider that the game can be a draw...
I started by exploring other approaches, like Seer's (see the previous post), but without success. The time needed to train so many networks and to label the data was considerable, and the results were not there.
I then decided to switch to a simpler method, where I have to train only one network, using game results as labels, and trying to predict two values: the win ratio (or probability of winning), between 0 and 1.0 (renormalised between -1.0 and 1.0), and the material, between -1.98 and 1.98* (with pawn=0.1, ..., queen=0.9). I took the average of the two predictions and multiplied it by 1000 to get a final evaluation in pseudo-centipawns.
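To make the two targets concrete, here is a minimal sketch (not the actual Cerebrum code) of how they can be built for one training position. The names are mine, I assume White's point of view, and the post only specifies pawn=0.1 and queen=0.9, so the material balance is taken as an input rather than recomputed here:

```python
RESULT_TO_WIN_RATIO = {"1-0": 1.0, "1/2-1/2": 0.5, "0-1": 0.0}

def targets_for_position(result: str, material_balance: float) -> tuple[float, float]:
    """Return (win_target, material_target), both from White's point of view.

    `result` is the final result of the game the position comes from;
    `material_balance` is the signed sum of piece values on the board
    (pawn=0.1, ..., queen=0.9), White minus Black.
    """
    # Win ratio in [0, 1], renormalised to [-1, 1].
    win_target = 2.0 * RESULT_TO_WIN_RATIO[result] - 1.0

    # Material target, kept within the +/- 1.98 (= 127/64) range that
    # the int8 quantization can represent (see the footnote at the end).
    material_target = max(-1.98, min(1.98, material_balance))

    return win_target, material_target

# Every position of a game won by Black gets win_target = -1.0, even the
# opening moves: this is the "noisy label" problem described above.
print(targets_for_position("0-1", 0.1))  # (-1.0, 0.1)
```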
This original approach led me to an engine rated around 3050 elo (~100 elo weaker than version 0.9), but the ranking doesn't matter here; what matters to me is that I managed to get a pretty strong performance in an original way, without requiring any evaluations from other engines!
The architecture of the network is almost exactly the same as in version 0.9, except that it now predicts two values instead of one, and I left the possibility to adjust (at inference time) the balance between these two values to produce the final evaluation (0.5-0.5 by default in version 1.0).
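Here is a minimal sketch of that blending step, with the adjustable balance just mentioned (the function name and signature are mine, not Orion's actual code):

```python
def blend_evaluation(win_pred: float, material_pred: float,
                     balance: float = 0.5) -> int:
    """Combine the two network outputs into pseudo-centipawns.

    `win_pred` is the renormalised win prediction in [-1, 1] and
    `material_pred` the material prediction in [-1.98, 1.98];
    balance=0.5 reproduces the plain average used by Orion 1.0.
    """
    score = balance * win_pred + (1.0 - balance) * material_pred
    return round(1000.0 * score)

# Example: a win probability of 0.65 renormalises to 0.30; with a
# material prediction of +0.20, the default 0.5-0.5 balance gives
# 0.5 * 0.30 + 0.5 * 0.20 = 0.25, i.e. +250 pseudo-centipawns.
print(blend_evaluation(0.30, 0.20))  # 250
```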
Quantization
Another thing that I absolutely wanted to explore was the quantization of weights and biases. As I wanted to release a new version before version 0.9 turned two years old, I went for a simple approach: post-training quantization.
This should imply a loss in evaluation accuracy, but I'm not even sure of that, because 1) quantization can help reduce overfitting, if any (it can be seen as a kind of regularisation method), and 2) it resulted in a (very) nice speed improvement: +40% in terms of nodes per second!
I was really impressed by the difference, even if, in the end, the standard (i.e. not quantized) and quantized versions of Orion 1.0 were very close in strength.
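For the curious, here is a minimal sketch of a post-training quantization consistent with the footnote at the end of this post (a scale of 64, values clamped to the symmetric int8 range -127..127). This is my reconstruction of the idea, not the exact code used for Orion 1.0:

```python
import numpy as np

SCALE = 64  # 127/64 = 1.98, the largest representable magnitude

def quantize(values: np.ndarray) -> np.ndarray:
    """Quantize float weights (or biases) to int8, centered on zero."""
    q = np.round(values * SCALE)
    q = np.clip(q, -127, 127)  # symmetric range; -128 deliberately unused
    return q.astype(np.int8)

def dequantize(q: np.ndarray) -> np.ndarray:
    """Recover approximate float values (steps of 1/64)."""
    return q.astype(np.float32) / SCALE

w = np.array([0.01, -0.5, 1.7, 2.5])  # 2.5 saturates at 1.98
print(quantize(w))                    # [  1 -32 109 127]
print(dequantize(quantize(w)))        # ~[0.0156 -0.5  1.7031  1.9844]
```

Integer weights allow the inner products of the network to run on fast SIMD int8/int16 instructions instead of floats, which is where a speed-up of this kind typically comes from.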
The Cerebrum library... and engine!
One last important thing for me was to get reproducible results and to allow other people to reproduce my work. I rewrote the whole Cerebrum library with that in mind, and even wrote the smallest (and most stupid!) UCI chess engine possible (named "Cerebrum 1.0") in order to demonstrate how to load and use a NNUE-like chess neural network in an engine. Do not expect strong performance here: the engine is limited to depth 1. I let testers decide whether or not to include it in their respective rating lists, if only to see whether it can reach... the last position ;-)
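To illustrate the depth-1 idea, here is a sketch of the principle (not Cerebrum's actual code): try every legal move, evaluate the resulting position with the network, and keep the best one. I use python-chess here for brevity, and nn_evaluate is a placeholder for the network inference:

```python
import chess

def nn_evaluate(board: chess.Board) -> int:
    """Placeholder: network evaluation in pseudo-centipawns,
    from the side to move's point of view."""
    raise NotImplementedError

def best_move_depth1(board: chess.Board) -> chess.Move:
    best, best_score = None, None
    for move in board.legal_moves:
        board.push(move)
        # After the move, the evaluation is from the opponent's point
        # of view, so negate it to score the move for the mover.
        score = -nn_evaluate(board)
        board.pop()
        if best_score is None or score > best_score:
            best, best_score = move, score
    return best
```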
I really hope that you, reader (!), or at least someone, will try to use the library to reproduce my results and obtain the exact same network as the one now embedded in Orion 1.0. If you are interested, please follow these instructions!
Data used for training
For those who are interested, a few words about the games used to train the network. Here again, I tested several alternatives: using only games played by engines (CCRL, CEGT, etc.), only games played by humans (lichess elite database), or a mix.
In the end, as expected, I got the best results with datasets composed exclusively of games played by engines (CCRL). But... it appears that this leads the current network to have some weaknesses, in endgames for example (where games are usually adjudicated using tablebases). It also has some difficulty converting winning positions into actual wins (because strongly unbalanced games are not so common in engine tournaments). But that's great! I have now understood that the quality and representativeness of the data are crucial! Let's see if we can go further...
As already said, a big thank you to the CCRL team for providing all the games they played in such a simple way!
Evaluation method and estimated strength
Evaluation was performed using Arasan 20.4.1 (~3075 elo), Igel 2.4.0 (~3100 elo) and Counter 4.0 (~3125 elo) in a 40/2 tournament (600 games in total). The estimated strength of version 1.0 is ~3050 elo (roughly 100 elo weaker than version 0.9 on the same test).
All the given elo values are to be considered as "CCRL 40/15" elo values.
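As an illustration of how such an estimate can be derived (this is the standard performance-rating formula, not necessarily the exact method used for the figures above): against opponents averaging 3100 elo, a score of about 43% over the 600 games would correspond to roughly 3050 elo.

```python
import math

def performance_rating(score: float, avg_opponent_elo: float) -> float:
    """Standard performance rating from a match score.

    `score` = points / games, strictly between 0 and 1;
    the elo difference is 400 * log10(score / (1 - score)).
    """
    return avg_opponent_elo + 400.0 * math.log10(score / (1.0 - score))

# Hypothetical example: 43% against a 3100-rated field.
print(round(performance_rating(0.429, 3100)))  # ~3050
```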
Next steps
I have a ton of ideas to experiment with, both for evaluation and search. I also have in mind that some users have already asked for more features (e.g. MultiPV). I will try to release a new version with such improvements in less than two years this time!
* 1.98 = 127/64, i.e. the maximum absolute value that can be represented using an int8 (signed byte, here restricted to the symmetric range -127..127) with a zero-centered quantization scale of 64 (i.e. steps of 1/64).
Orion 1.0 NNUE download