Since you’re here...

We hope you will consider supporting us today. We need your support to continue to exist, because good entries take more and more time to prepare. Every reader contribution, however big or small, is so valuable. Support "Chess Engines Diary" with even a small amount, and it only takes a minute. Thank you.
============================== My email: jotes@go2.pl



Hypersion 2 - new version of the chess engine


Hypersion is a free, open-source UCI chess engine.
Hypersion uses an NNUE network for evaluation (Stockfish 18 SFNNv10 architecture: HalfKAv2_hm features + FullThreats, big network 1024-d FT, small network 128-d FT) on top of an alpha-beta search with PVS, transposition table, aspiration windows, late-move reductions, singular extensions, ProbCut, futility/razoring/SEE pruning, and lazy-SMP infrastructure (single-threaded by default — multi-thread is currently unstable).
Author: RenCopp

What's new in Hypersion 2?

Second public release. Major focus: search refinements, AVX-VNNI build, and a complete rewrite of UCI_LimitStrength validated against real human-trained chess bots.

UCI_LimitStrength — complete rewrite

The v1 strength limiter was broken: the noise-spread formula (20-skill)*60 made every level under UCI_Elo 2000 play near-randomly (best-move probability ~5 % regardless of the configured Elo). Internal testing showed Hypersion@900 beating Hypersion@1100 in 95 % of games; calibration was non-monotonic.

v2 fixes this with a clean two-lever design:

  1. Per-bucket node cap — limits search work, scales smoothly with skill. No depth caps (which caused horizon-effect inversions).
  2. Low-rate blunder probability — a small chance per move of picking a sub-optimal move from the top half of the root move list. At Elo 500 it is 8 % (down from 45 % in v1), so weak moves now look naturally weak instead of random.
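The two levers above can be sketched roughly as follows; the function names (nodeCapForElo, blunderProb, pickRootMoveIndex) and the exact scaling constants are illustrative assumptions, not Hypersion's actual code:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <random>

// Lever 1: per-bucket node cap. Monotonic in Elo, so a higher setting can
// never search less than a lower one (the property v1 violated).
inline std::uint64_t nodeCapForElo(int elo) {
    elo = std::clamp(elo, 500, 2000);
    double exponent = (elo - 500) / 300.0;                  // 0.0 .. 5.0
    return static_cast<std::uint64_t>(2000.0 * std::pow(2.0, exponent));
}

// Lever 2: low-rate blunder probability, 8 % at Elo 500, fading to 0 at 2000.
inline double blunderProb(int elo) {
    elo = std::clamp(elo, 500, 2000);
    return 0.08 * (2000 - elo) / 1500.0;
}

// With probability blunderProb(elo), swap the best root move for a random
// one from the top half of the move list, so weak play stays plausible.
inline int pickRootMoveIndex(int numRootMoves, int elo, std::mt19937& rng) {
    std::bernoulli_distribution blunder(blunderProb(elo));
    if (numRootMoves > 1 && blunder(rng)) {
        int topHalf = std::max(2, numRootMoves / 2);
        std::uniform_int_distribution<int> pick(1, topHalf - 1);
        return pick(rng);                                   // sub-optimal move
    }
    return 0;                                               // best move
}
```

Because both levers scale monotonically with Elo, a lower-rated bucket can never out-search a higher one.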

Validated against real human-trained bots:

  Hypersion@N  vs                            Score   Verdict
  700          dala-700 (lichess Rapid 881)  25.0 %  ✅ matches expected 26 % for a true 700 vs 881 — bullseye
  900          dala-900 (~1000)              60.0 %  OK (~+170 over target, within 10-game noise)
  1100         Maia 1100                     45.0 %  ✅ OK
  1500         Maia 1500                     50.0 %  ✅ OK
  1900         Maia 1900                     40.0 %  ✅ OK
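The "expected 26 %" figure follows from the standard Elo expected-score formula; a quick self-contained check (the function name is mine):

```cpp
#include <cmath>

// Standard Elo expected score for player A (rating ra) vs player B (rb):
// E = 1 / (1 + 10^((rb - ra) / 400)).
inline double expectedScore(double ra, double rb) {
    return 1.0 / (1.0 + std::pow(10.0, (rb - ra) / 400.0));
}
```

expectedScore(700, 881) evaluates to about 0.26, so the observed 25.0 % is right on target for a true 700 playing an 881.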

Search refinements

  • LMR formula softening (1.95 → 1.90 in the log/log table). Less aggressive late-move reductions; more accurate eval. Bench-tested positive at fast TC.
  • NMP zugzwang strengthening — at depth ≥ 12, require strictly more non-pawn material than a single minor piece for null-move pruning. Cuts false positives in K + minor endgames.
  • Endgame LMR mitigation — subtract 1 ply from the reduction when total piece count ≤ 8. Endgames need accurate forcing-line calculation; LMR was over-reducing critical lines.
  • Singular extension threshold 6 → 5 — matches Stockfish 18.
  • Stockfish-18 fallingEval time scaling — when the score drops between iterations, extend search time; when it is stable or rising, save time. Helps convert winning positions at bullet time controls.
  • NNUE big-net threshold 962 → 1500 — the small NNUE net was being used for K + R-class endgames where conversion accuracy matters. Now the more accurate big net stays active for endgame conversion plans.
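Two of the refinements above can be sketched as follows: the softened log/log LMR table with the endgame mitigation, and a fallingEval-style time factor. The identifiers and the exact scaling of the time factor are my assumptions, not the engine's code:

```cpp
#include <algorithm>
#include <cmath>

// LMR: classic log/log reduction, softened divisor 1.90 (was 1.95).
inline int lmrReduction(int depth, int moveNumber, int totalPieces) {
    double r = std::log(depth) * std::log(moveNumber) / 1.90;
    int reduction = static_cast<int>(r);

    // Endgame mitigation: one ply less reduction when <= 8 pieces remain,
    // so critical forcing lines are not over-reduced.
    if (totalPieces <= 8)
        reduction = std::max(0, reduction - 1);
    return reduction;
}

// fallingEval-style time scaling: spend more time when the score dropped
// since the previous iteration, less when it is stable or rising.
inline double timeScale(int prevScore, int currScore) {
    int drop = prevScore - currScore;                    // in centipawns
    double scale = 1.0 + std::clamp(drop, -50, 100) / 100.0 * 0.5;
    return std::clamp(scale, 0.75, 1.5);
}
```

For example, at depth 12 and move number 20 the base reduction is 3 plies, dropping to 2 once eight or fewer pieces remain.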

Build

  • AVX-VNNI build target with -march=alderlake -mavxvnni. The v1 build was AVX2-only — no vpdpbusd instructions. Bench NPS up ~9 % (median over 7 samples) on Intel 12th gen+ / AMD Zen 4+ CPUs.
  • 64-byte cache-line alignment on NNUE accumulators and on-stack buffers. Eliminates straddle-line loads.
  • PGO Makefile fix — make profile now succeeds with -Wno-coverage-mismatch. Previously failed in the second pass.
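The cache-line alignment fix can be illustrated like this; the struct name is hypothetical, and the 1024-element width is taken from the big network's feature transformer described above:

```cpp
#include <cstdint>

// Align the NNUE accumulator to the 64-byte cache line so that no SIMD
// load of its int16 lanes ever straddles two cache lines.
struct alignas(64) Accumulator {
    std::int16_t values[1024];   // big-network feature-transformer width
};

static_assert(alignof(Accumulator) == 64, "accumulator not cache-line aligned");
```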

Bug fixes since v1

  • Repetition detection during search — Worker::prepare() now deep-copies the full StateInfo chain from the source position. v1 discarded history, so the search couldn't see a 3-fold repetition still pending from the actual game (the engine would evaluate a perpetual as "winning").
  • 50-move-rule + checkmate edge case — verifies legal-move existence when in check, so checkmate is not mis-reported as a rule-50 draw.
  • Endgame time bonus — boosts time spent when the total piece count is low, addressing the 70 % endgame-blunder rate observed in a 50-game match against full Stockfish.
  • Cosmetic UCI eval-scale fix — UCI cp output now uses Stockfish's "1 pawn = 100 cp" convention (was 5× scale internally due to NNUE eval magnitudes).
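The 50-move/checkmate edge case boils down to testing for mate before declaring the draw; a minimal sketch, with hypothetical inputs standing in for real position state:

```cpp
enum class Result { Ongoing, Rule50Draw, Checkmate };

// Checkmate (in check with no legal reply) must be tested first: rule 50
// cannot rescue a side that is already mated on the move reaching ply 100.
inline Result adjudicate(int rule50Plies, bool inCheck, bool hasLegalMove) {
    if (inCheck && !hasLegalMove)
        return Result::Checkmate;
    if (rule50Plies >= 100)          // 100 plies = 50 full moves
        return Result::Rule50Draw;
    return Result::Ongoing;
}
```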

What was investigated but reverted

Documented as tombstone comments in the source so future contributors
don't retry blind:

  • SF18 continuation-history pruning at low depth (regressed −207 Elo).
  • NMP base R bump 4 → 5 (regressed −50 Elo at fast TC).
  • Material-keyed correction history (regressed −26 Elo).
  • LMR opponentWorsening adjustment (regressed −33 Elo at fast TC).
  • SEE-quiet pruning opponentWorsening (regressed −47 Elo).

Several others were ruled neutral within ±35 Elo noise but were kept where structurally sound (the opponentWorsening flag in RFP / futility / razoring / probcut margins).

CPU requirements

x86_64 with AVX2 + AVX-VNNI (Intel 12th gen Alder Lake / 13th & 14th gen Raptor Lake / AMD Zen 4 or newer). On older CPUs without VNNI, build from source with make ARCH=x86-64-avx2.



