Since you’re here...

We hope you will consider supporting us today. We need your support to keep this site going, because good entries take more and more time to prepare. Every reader contribution, however big or small, is valuable. Supporting "Chess Engines Diary" with even a small amount takes only a minute. Thank you.
============================== My email: jotes@go2.pl



Chess engine: Stockfish 22051412 NNUE for Windows and Linux (+3.86 Elo)



Stockfish - UCI chess engine, compiled by Tomasz Sobczyk
Rating CEDR = 3497 (2nd place)

Update NNUE architecture to SFNNv5. Update network to nn-3c0aa92af1da.nnue.

Architecture changes:
Duplicated the activation after the 1024->15 layer with a squared crelu (so 15->15*2), as proposed by vondele.
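
For readers following the nnue-pytorch side, here is a minimal PyTorch-style sketch of what the duplicated activation means in practice. The 1024->15 layer size and the 15->15*2 duplication come from the text above; the [0, 1] clipping range and the toy shapes are illustrative assumptions, not code from the actual trainer.

import torch

def dual_activation(x):
    # x: output of the 1024->15 layer, shape (batch, 15)
    clipped = torch.clamp(x, 0.0, 1.0)   # ordinary clipped ReLU
    squared = clipped * clipped          # squared clipped ReLU
    # concatenate both activations: 15 -> 15*2 = 30 inputs for the next layer
    return torch.cat([clipped, squared], dim=1)

# illustrative usage with assumed shapes
l1 = torch.nn.Linear(1024, 15)
out = dual_activation(l1(torch.rand(4, 1024)))   # shape (4, 30)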

Trainer changes:
Added a bias to the L1 factorization, which was previously missing (no measurable improvement, but at least neutral in principle).
For retraining, linearly reduce the lambda parameter from 1.0 at epoch 0 to 0.75 at epoch 800 (see the sketch after this list).
Reduced max_skipping_rate from 15 to 10 (compared to vondele's outstanding PR).
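
The lambda schedule above is a plain linear interpolation over the retraining run, matching the --start-lambda/--end-lambda flags in the second invocation below. A minimal sketch (the function name and the per-epoch call are illustrative, not the actual trainer code); in the nnue-pytorch convention, lambda weights the evaluation target against the game result, with 1.0 meaning evaluation only:

def lambda_for_epoch(epoch, start_lambda=1.0, end_lambda=0.75, max_epochs=800):
    # linear interpolation from start_lambda at epoch 0 to end_lambda at max_epochs
    t = min(max(epoch / max_epochs, 0.0), 1.0)
    return start_lambda + (end_lambda - start_lambda) * t

# e.g. halfway through the 800-epoch retraining run:
print(lambda_for_epoch(400))   # 0.875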

Note: This network was trained with a ~0.8% quantization error in the newly added activation function.
This will be fixed in the released trainer version. Expect a trainer PR tomorrow.

Note: The inference implementation cuts a corner to merge the results from the two activation functions.
This could possibly be resolved more cleanly in the future. An AVX2 implementation is likely not necessary, but the NEON one is still missing.

First training session invocation:
python3 train.py \
../nnue-pytorch-training/data/nodes5000pv2_UHO.binpack \
../nnue-pytorch-training/data/nodes5000pv2_UHO.binpack \
--gpus "$3," \
--threads 4 \
--num-workers 8 \
--batch-size 16384 \
--progress_bar_refresh_rate 20 \
--random-fen-skipping 3 \
--features=HalfKAv2_hm^ \
--lambda=1.0 \
--max_epochs=400 \
--default_root_dir ../nnue-pytorch-training/experiment_$1/run_$2

Second training session invocation:
python3 train.py \
../nnue-pytorch-training/data/T60T70wIsRightFarseerT60T74T75T76.binpack \
../nnue-pytorch-training/data/T60T70wIsRightFarseerT60T74T75T76.binpack \
--gpus "$3," \
--threads 4 \
--num-workers 8 \
--batch-size 16384 \
--progress_bar_refresh_rate 20 \
--random-fen-skipping 3 \
--features=HalfKAv2_hm^ \
--start-lambda=1.0 \
--end-lambda=0.75 \
--gamma=0.995 \
--lr=4.375e-4 \
--max_epochs=800 \
--resume-from-model /data/sopel/nnue/nnue-pytorch-training/data/exp367/nn-exp367-run3-epoch399.pt \
--default_root_dir ../nnue-pytorch-training/experiment_$1/run_$2

Passed STC:
LLR: 2.95 (-2.94,2.94) <0.00,2.50>
Total: 27288 W: 7445 L: 7178 D: 12665 Elo +3.40
Ptnml(0-2): 159, 3002, 7054, 3271, 158
https://tests.stockfishchess.org/tests/view/627e8c001919125939623644

Passed LTC:
LLR: 2.95 (-2.94,2.94) <0.50,3.00>
Total: 21792 W: 5969 L: 5727 D: 10096 Elo +3.86
Ptnml(0-2): 25, 2152, 6294, 2406, 19
https://tests.stockfishchess.org/tests/view/627f2a855734b18b2e2ece47
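
The Elo figures in these fishtest reports follow directly from the win/loss/draw totals. A quick back-of-the-envelope check in Python (this reproduces only the central Elo estimate, not the error bounds or the SPRT bookkeeping):

import math

def elo_from_wld(wins, losses, draws):
    # score fraction, counting each draw as half a point
    total = wins + losses + draws
    score = (wins + 0.5 * draws) / total
    # logistic Elo model: score = 1 / (1 + 10**(-elo/400))
    return -400.0 * math.log10(1.0 / score - 1.0)

print(round(elo_from_wld(7445, 7178, 12665), 2))   # STC: ~3.40
print(round(elo_from_wld(5969, 5727, 10096), 2))   # LTC: ~3.86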

Download:
 