Chess Players Whose Moves Most Matched Computers
Published: March 19, 2012
Source: New York Times
As part of his research into modeling how people play chess, Kenneth W. Regan has analyzed the performances of thousands of players over the last 200 years to see which ones most matched the moves computers would have made. The following have the highest correlations from over 2,500 performances in open tournaments. It does not mean that the players cheated, though the player who had the top performance — Sébastien Feller at the Paris Open — was accused of cheating at another event. Diwakar Prasad Singh also was suspected, but he was cleared after an inquiry.
* In hundredths of a pawn: how much each move by the player, on average, disagreed with that of the computer program.
† Final standing after tiebreakers. Zvjaginsev tied for 11th; Moiseenko tied for 1st; Le Quang tied for 50th in the Aeroflot Open and 1st in the Moscow Open; Gharamian tied for 1st.
§ Partial results because some games were not available or not included in the analysis.
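For readers curious what "matching" and "average difference" mean operationally, here is a minimal sketch — my own illustration, not Regan's actual screening method, and with invented evaluations: given each position's engine evaluation of the move played and of the engine's top choice, the matching percentage and the average difference in hundredths of a pawn fall out directly.

```python
# Toy sketch (not Regan's method): compute the matching percentage and the
# average evaluation difference, in hundredths of a pawn, from pairs of
# (eval of move played, eval of engine's top choice), both in pawns and
# from the mover's point of view.

def match_stats(moves):
    """moves: list of (played_eval, best_eval) pairs; an exact tie in
    evaluation is treated as having played the engine's move."""
    matches = sum(1 for played, best in moves if played == best)
    avg_diff_cp = 100 * sum(best - played for played, best in moves) / len(moves)
    return matches / len(moves), avg_diff_cp

# Hypothetical five-move fragment: the player matched the engine 3 times.
game = [(0.30, 0.30), (0.10, 0.25), (0.55, 0.55), (-0.20, -0.05), (0.00, 0.00)]
pct, diff = match_stats(game)
print(round(pct, 2), round(diff, 1))  # 0.6 and 6.0
```

On this made-up fragment the player scores a 60% match with an average difference of 6 hundredths of a pawn — the same two statistics reported in the list below, though the real analysis of course involves far more care about which moves are included.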
Chess Daily News from Susan Polgar
It is great that such a tool was developed. Now tournament directors can fight back: they can know who the suspicious characters are and whom to target for extra surveillance before the players even arrive. Had Topalov done this before Kramnik started using the bathroom, he could be World Champion right now.
In honor of our hostess, here is the analogous “Top Ten” list in my category of all world championship matches, male AND female. Hopefully it shows up OK—it won’t be as nice as the NYT graphic; if the following is jumbled then the top-13 list is here, and the whole list here.
Rank Match% AvgDiff #Moves Player Event/source-file
—– —— ——- —— —————- ————————-
1 63.4% 0.048 333 Hou, Yifan HouKoneruWWC2011R3d13
2 62.5% 0.081 333 Polgar, Zsuzsa PolgarZsuJun1996R3d13
3 62.3% 0.051 146 Kramnik, V. KramnikTopalovPlayoff2006R3d13
4 62.0% 0.073 279 Anand, V AnandKramnik2008R3d13
5 60.8% 0.064 462 Lasker, Emanuel LaskerMarshall1907R3d13
6 60.6% 0.083 762 Botvinnik, Mikh BotvinnikSmyslov1954R3d13
7 60.4% 0.109 480 Chiburdanidze, ChiburdanidzeAlexandria1981R3d1
8 60.3% 0.074 494 Topalov, V AnandTopalov2010R3d13
9 60.2% 0.059 492 Anand, V AnandTopalov2010R3d13
10 60.1% 0.099 333 Koneru, Humpy HouKoneruWWC2011R3d13
11 60.1% 0.077 699 Fischer, Robert FischerSpassky1972R3d13
12 60.1% 0.110 594 Chiburdanidze, ChiburdanidzeGaprindashvili1978
13 60.0% 0.075 423 Lasker, Emanuel LaskerSchlechter1910R3d13
Memo to the guys: Get with it! 🙂 Note that the top Rybka concordance by a male is Kramnik at Elista, in the rapid playoff, when Topalov was concerned about “keeping him at the board”!
I intend to publish the Single-PV part of my site, which is basically non-confidential, but it will take work to create a user-friendly front end, and there are some factors to iron out. The work itself is described in my published papers here, specifically the latter three, plus I am beginning a technical FAQ here.
If a player is cheating with these percentages, he’s just stupid.
Often the engine's 2nd or 3rd choice is not quite as good as its 1st, but it is still winning. A player who always takes the 1st choice lives on the edge, steering into risky, forcing lines where only the top move will do. In some positions only one move is good, but in others it makes little difference whether you play the first choice or the ninth.
The cheaters don’t even know how to cheat without raising suspicions.
Interesting stats again, Kenneth. I did a cartoon about this on Why Chess a while back, I think. I'm just not sure what it actually proves, though. That 30 or 40 percent difference can make a BIG difference. Thank God I'm not smart enough to figure out who the cheaters are… but I'm sure they're out there. We just really need to be careful about whom we accuse, and it's very difficult to prove such things, even if you have statistics. Stats only tell one side of the story… you big number cruncher, you. Haha — be well.
Michel Magnan
Assessing chess-players on the basis of their degree of coincidence with a chess-program is the least informative way (of three) of assessing fallibility. No-one should be surprised when the human agrees with the computer if there’s a forced move or a Pawn race afoot.
My joint papers with Ken, see http://centaur.reading.ac.uk/view/creators/90000763.default.html , cover the fact that one can assess in terms of average-error, i.e. average lost-‘pawns’.
One should also assess choices in the fuller context of all the best choices available, and this has been done using Bayesian inference, a standard modelling technique. This is the most informed, finessed and perceptive method of the three.
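To make the Bayesian idea concrete, here is a deliberately toy model — my own simplification for illustration, not the model in the papers: suppose a player of "sensitivity" s chooses move i with probability proportional to exp(-err_i / s), where err_i is that move's evaluation loss against the best move. Given observed choices, a posterior over s on a grid of candidate values then follows directly from Bayes' rule with a uniform prior.

```python
import math

# Toy Bayesian sketch: infer a player's "sensitivity" s from observed
# choices, where the likelihood of picking move i is a softmax over the
# moves' evaluation losses. Smaller s = stronger concentration on the
# best moves. All numbers below are invented.

def choice_prob(errs, chosen, s):
    """Probability of picking move `chosen` from moves with eval losses
    `errs` (in pawns), under sensitivity s."""
    weights = [math.exp(-e / s) for e in errs]
    return weights[chosen] / sum(weights)

def posterior(observations, grid):
    """observations: list of (errs, chosen_index); grid: candidate s
    values with a uniform prior. Returns normalized posterior weights."""
    like = [math.prod(choice_prob(errs, ch, s) for errs, ch in observations)
            for s in grid]
    total = sum(like)
    return [l / total for l in like]

# Two positions: the player took the best move in the first, and a move
# losing 0.3 pawns in the second.
obs = [([0.0, 0.2, 0.5], 0), ([0.0, 0.3, 0.9], 1)]
grid = [0.1, 0.3, 1.0]
post = posterior(obs, grid)
```

The point of the full-context approach is visible even here: the inference uses the losses of all available moves, not just whether the played move coincided with the engine's first choice.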
My related research thread is pointed at in reverse-chronological order by the URLs below, though the first two do not dwell on the full-context ‘Bayesian’ approach and the last three focus on sub-6-man chess only.
http://centaur.reading.ac.uk/23800/
http://centaur.reading.ac.uk/19778/
http://centaur.reading.ac.uk/4517/
http://centaur.reading.ac.uk/4489/
http://centaur.reading.ac.uk/4519/
http://centaur.reading.ac.uk/4523/
http://centaur.reading.ac.uk/4548/
http://centaur.reading.ac.uk/4550/
http://centaur.reading.ac.uk/4579/
The number of moves surveyed also affects the probability of being at the top of the list. At the extreme, I could be at the top of the list with 100% on the basis of one obvious move – and I don’t play chess.
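The one-obvious-move point can be quantified with a standard binomial confidence interval. The sketch below (a Wilson score interval, my own illustration) shows how little a 100% match over a single move constrains a player's true matching tendency, compared with roughly 63% over 333 moves.

```python
import math

# Wilson score interval for a binomial proportion: how uncertain is an
# observed match rate, given the number of moves it is based on?

def wilson_interval(matches, n, z=1.96):
    """95% confidence interval (z = 1.96) for the underlying match rate."""
    p = matches / n
    center = (p + z * z / (2 * n)) / (1 + z * z / n)
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / (1 + z * z / n)
    return center - half, center + half

lo1, hi1 = wilson_interval(1, 1)      # one "obvious" matching move
lo2, hi2 = wilson_interval(211, 333)  # ~63.4% over a whole match
print(f"n=1:   {lo1:.2f}-{hi1:.2f}")
print(f"n=333: {lo2:.2f}-{hi2:.2f}")
```

The single-move "100%" is consistent with a true matching tendency anywhere from roughly 21% upward, while 333 moves pin the rate down to within a few percentage points — which is why move count matters for any ranking of this kind.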
I agree that there could be real-time analysis of the fidelity of players’ moves, but ‘coincidence with a computer’ is not the best or most finessed way to do it.
Guy Haworth