I’ve reached an important milestone in the development of my new chess engine. MadChess 3.0 Beta can play a timed game of chess.
I copied the search function from MadChess 2.0 but implemented an evaluation function from scratch. The search function is rather sophisticated:
- Alpha / beta negamax (see the first sketch after this list)
- PVS with aspiration windows
- MVV / LVA (most valuable victim / least valuable attacker) move ordering (sketched below)
- MultiPV with tracking of all principal variations (in search, not in hash table)
- Hashtable with score, bounds, and best move
- Delayed move generation (play best move from hashtable before generating moves)
- Null move pruning
- Killer moves
- Move history with aging (used to sort quiet moves)
- Late move reductions
- Futility pruning
- Time management based on total material and material advantage, with “panic” time-loss prevention (sketched below)
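To make the first item concrete, here is a minimal sketch of alpha / beta negamax run on a toy game tree rather than a chess position. The `Node` type, the scores, and the window bounds are illustrative assumptions; this is generic C++, not MadChess's own code.

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

// Toy game tree: a leaf holds a static score from the perspective of the
// side to move; an interior node holds its children.
struct Node {
    int score;                   // meaningful only at leaves
    std::vector<Node> children;  // empty => leaf
};

// Alpha / beta negamax: scores are always from the side to move's perspective,
// so each recursive call negates the child's score and negates/swaps the
// (alpha, beta) window.
int Negamax(const Node& node, int alpha, int beta) {
    if (node.children.empty()) return node.score;
    int best = -1'000'000;
    for (const Node& child : node.children) {
        int score = -Negamax(child, -beta, -alpha);
        best = std::max(best, score);
        alpha = std::max(alpha, score);
        if (alpha >= beta) break;  // beta cutoff: opponent already has a better option
    }
    return best;
}

int main() {
    // Two plies: the root player picks a branch, the opponent replies.
    Node root{0, {Node{0, {Node{3, {}}, Node{12, {}}}},
                  Node{0, {Node{2, {}}, Node{4, {}}}}}};
    std::printf("best score = %d\n", Negamax(root, -1'000'000, 1'000'000));  // prints 3
}
```

Note the second branch is abandoned after its first leaf: once the root already has a score of 3 available, the opponent's ability to hold the second branch to 2 makes examining its remaining leaves pointless.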
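MVV / LVA ordering is equally simple to sketch: try captures of the most valuable victim first, and break ties by preferring the cheapest attacker. The enum and the point values here are my own illustrative choices.

```cpp
#include <algorithm>
#include <array>
#include <cstdio>
#include <vector>

enum Piece { Pawn, Knight, Bishop, Rook, Queen, King };
constexpr std::array<int, 6> PieceValue = {1, 3, 3, 5, 9, 100};  // for ordering only

struct Capture { Piece attacker, victim; };

// MVV/LVA key: most valuable victim dominates; least valuable attacker breaks ties.
int MvvLvaKey(const Capture& c) {
    return PieceValue[c.victim] * 100 - PieceValue[c.attacker];
}

int main() {
    std::vector<Capture> captures = {{Queen, Pawn}, {Pawn, Queen}, {Knight, Rook}};
    std::sort(captures.begin(), captures.end(),
              [](const Capture& a, const Capture& b) { return MvvLvaKey(a) > MvvLvaKey(b); });
    for (const Capture& c : captures)
        std::printf("attacker %d captures victim %d\n", c.attacker, c.victim);
    // Resulting order: Pawn x Queen, Knight x Rook, Queen x Pawn.
}
```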
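Finally, a hedged sketch of material-based time management. The constants and the moves-remaining estimate below are my own illustrative assumptions, not MadChess's actual formula; the point is the shape of the idea: spend more per move as material comes off the board, and always hold back a reserve so the engine never flags.

```cpp
#include <algorithm>
#include <cstdio>

// Allocate time for the next move from the remaining clock. All constants are
// illustrative assumptions.
int MillisecondsForMove(int msRemaining, int msIncrement, int totalMaterial) {
    // totalMaterial: summed non-king piece values, 78 at the start, 0 with bare kings.
    int estimatedMovesLeft = std::max(10, totalMaterial / 2);
    int target = msRemaining / estimatedMovesLeft + msIncrement;
    const int panicReserveMs = 100;  // never burn the last 100 ms of the clock
    return std::max(1, std::min(target, msRemaining - panicReserveMs));
}

int main() {
    // 2 min / game + 1 sec / move, opening position.
    std::printf("%d ms\n", MillisecondsForMove(120'000, 1'000, 78));  // ~4076 ms
}
```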
The evaluation function is very simple:
- Material
- Piece location
- Draw by repetition, 50 moves without a capture or pawn move, or insufficient material (see the sketch after this list)
- Checkmate (obviously)
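Two of those draw rules are cheap to detect given a history of position hashes. Here is a minimal sketch; `positionKeys`, `halfmoveClock`, and the Zobrist-hash assumption are illustrative names, not MadChess's actual fields.

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// Draw by repetition: the current position (last key) has occurred three
// times in the game. Engines often relax this to two occurrences in search,
// but the strict rule is threefold.
bool IsRepetition(const std::vector<uint64_t>& positionKeys) {
    if (positionKeys.empty()) return false;
    uint64_t current = positionKeys.back();
    int occurrences = 0;
    for (uint64_t key : positionKeys)
        if (key == current) occurrences++;
    return occurrences >= 3;
}

// Fifty-move rule: 50 full moves (100 plies) without a capture or pawn move.
bool IsFiftyMoveDraw(int halfmoveClock) { return halfmoveClock >= 100; }

int main() {
    std::vector<uint64_t> keys = {1, 2, 1, 2, 1};  // position with key 1 seen three times
    std::printf("repetition = %d, fifty-move = %d\n",
                IsRepetition(keys), IsFiftyMoveDraw(12));  // 1, 0
}
```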
Move generation is limited to a single function that generates all moves. I have not implemented separate move generators for captures only (as used in quiescence search) or check evasions.
I figure there’s no point in repeating the exact process I used to develop MadChess 2.0, where I slowly built up the search and evaluation functions in tandem. In MadChess 3.0 Beta, I’ve changed the board representation from mailbox to bitboards. This greatly affects the evaluation logic (no more traversing a piece square table; see the sketch below) but not the search logic, so it makes sense to retain the search function as-is. After all, the search code is the product of hundreds of hours of engineering and testing.
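Here is a sketch of what that change means for evaluation: instead of walking a mailbox array square by square, loop only over the set bits of each piece bitboard. The square ordering (a1 = bit 0), the knight value, and the centralization bonus standing in for a real piece-square table are all illustrative assumptions, not MadChess's own code.

```cpp
#include <bit>      // std::countr_zero (C++20)
#include <cstdint>
#include <cstdio>
#include <cstdlib>  // std::abs

constexpr int KnightValue = 300;  // centipawns; illustrative, not a tuned value

// Small centralization bonus standing in for a real piece-square table.
int KnightSquareBonus(int square) {
    int file = square % 8, rank = square / 8;
    int fileDist = std::abs(2 * file - 7), rankDist = std::abs(2 * rank - 7);  // 1..7 each
    return (14 - fileDist - rankDist) * 2;  // higher near the center of the board
}

// Sum material plus location bonus by iterating only the occupied squares.
int EvaluateKnights(uint64_t knights) {
    int score = 0;
    while (knights) {
        int square = std::countr_zero(knights);  // index of least significant set bit
        score += KnightValue + KnightSquareBonus(square);
        knights &= knights - 1;                  // clear that bit, advance to the next
    }
    return score;
}

int main() {
    uint64_t whiteKnights = (1ULL << 1) | (1ULL << 6);  // knights on b1 and g1
    std::printf("knight score = %d\n", EvaluateKnights(whiteKnights));  // 608
}
```

The loop body runs once per knight, not once per board square, which is the practical payoff of the bitboard representation.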
It will be interesting to measure the Elo value of evaluation features as I add them to MadChess 3.0 Beta. I know from personal experience that the relationship between individual search and evaluation features and their contribution to Elo playing strength is highly non-linear, so it’s unlikely I’ll find the same Elo values for evaluation features as I found in MadChess 2.0. Now I’m adding them to an engine with a mature search function. Who knows if they’ll be worth more or less than they were in MadChess 2.0? Stay tuned.
OK, what was the result of all this effort to build a rather dumb chess engine? To answer that question, I ran a gauntlet tournament, pitting MadChess 3.0 Beta against familiar engines.
```
45) MadChess 3.0    2096.3 : 2000 (+783,=348,-869), 47.9 %

    vs.            : games (   +,  =,   -),   (%) :   Diff,   SD, CFS (%)
    Sungorus 1.4   :   200 (  40, 27, 133),  26.8 : -213.7, 12.8,    0.0
    Waxman 2017    :   200 (  39, 38, 123),  29.0 : -158.7, 11.1,    0.0
    Zevra 1.8.4    :   200 (  39, 36, 125),  28.5 : -138.3, 12.9,    0.0
    Napoleon 1.8   :   200 (  43, 43, 114),  32.3 : -117.7, 16.9,    0.0
    Galjoen 0.37.2 :   200 (  66, 33, 101),  41.3 :  -73.9, 11.9,    0.0
    BikJump 2.01   :   200 (  69, 44,  87),  45.5 :  -18.0, 14.3,   10.4
    Monarch 1.7    :   200 (  90, 37,  73),  54.3 :  +42.9, 15.1,   99.8
    Gerbil 02      :   200 ( 122, 27,  51),  67.8 : +114.0,  6.6,  100.0
    Faile 1.4      :   200 ( 111, 47,  42),  67.3 : +115.7, 13.8,  100.0
    TSCP 1.81      :   200 ( 164, 16,  20),  86.0 : +334.8, 14.6,  100.0
```
This establishes a baseline rating.
| Feature | Category | Date | Rev¹ | WAC² | Elo Rating³ | Improvement |
|---|---|---|---|---|---|---|
| Sophisticated Search, Material and Piece Location | Baseline | 2018 Nov 08 | 58 | 269 | 2096 | 0 |
1. Subversion source code revision
2. Win At Chess position test, 3 seconds per position
3. Bullet chess, 2 min / game + 1 sec / move
I will add one evaluation feature (and occasionally a search feature) at a time and update this table so I can track the progress of MadChess 3.0 Beta.