A sudoku with seventeen clues

Might still be worthwhile to peruse;

But with only sixteen,

As they’ve proved by machine,

It can never do more than bemuse.

In the last couple of weeks, a team led by Prof. Gary McGuire of University College Dublin has announced a proof that you need to give at least 17 “clues” (digits already filled in) for a sudoku puzzle to have a unique solution. Their paper gives a more complete description of the problem and their approach. Although the proof hasn’t yet gone through peer review, it’s been reported in *Nature* that most people in the field expect it to be valid. Around 50 000 valid 17-clue puzzles were previously known, and the suspicion had been growing for some years that this was the minimum number of clues possible. (For more background, Dr Max Neunhöffer at St Andrews has put online the slides of a very nice talk on the mathematics of sudoku, which he gave as the EMS Popular Lecture last year.)

An interesting aspect of the new proof is that it’s carried out largely by computer. Although checking all the sudoku puzzles that could possibly be formulated is impractical, the researchers were able to use various equivalence relations to reduce the number of possible grids to a more manageable number, and then to employ a smart strategy to test whether each of these grids could be defined by only 16 clues. Even then, they required a highly efficient algorithm and a very high-performance computing cluster to search through all the possibilities. (The time and energy they spent on this isn’t frivolous, though: they suggest that their approach should be adaptable to other problems in combinatorics and graph theory.)
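The criterion at the heart of their search can be illustrated with a toy sketch (in Python; this is only an illustration of the uniqueness test, and bears no resemblance to the team's highly optimised checker): a set of clues defines a sudoku puzzle exactly when a backtracking solver finds one, and only one, completion of the grid.

```python
def valid(grid, r, c, d):
    """Can digit d legally be placed at row r, column c of this 9x9 grid?"""
    if any(grid[r][j] == d for j in range(9)):
        return False
    if any(grid[i][c] == d for i in range(9)):
        return False
    br, bc = 3 * (r // 3), 3 * (c // 3)  # top-left corner of the 3x3 box
    return all(grid[br + i][bc + j] != d for i in range(3) for j in range(3))

def count_solutions(grid, limit=2):
    """Count completions of a grid (0 = blank) by backtracking, stopping at `limit`.

    A clue set defines a proper puzzle iff this returns exactly 1.
    """
    for i in range(81):
        r, c = divmod(i, 9)
        if grid[r][c] == 0:
            total = 0
            for d in range(1, 10):
                if valid(grid, r, c, d):
                    grid[r][c] = d
                    total += count_solutions(grid, limit - total)
                    grid[r][c] = 0
                    if total >= limit:
                        break
            return total
    return 1  # no blanks left: the grid itself is one complete solution
```

For example, the fully filled grid `[[(3 * (r % 3) + r // 3 + c) % 9 + 1 for c in range(9)] for r in range(9)]` (a standard cyclic-shift construction of a valid solution) gives a count of 1, while the empty grid immediately hits the cap of 2, confirming it has many solutions. Proving the 16-clue result this naive way would mean running such a check over an astronomical number of clue sets, which is exactly why the reductions and the efficient algorithm were essential.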

The first and most famous of such “brute-force” computer-aided proofs was Appel & Haken’s proof of the Four-Colour Theorem in 1976. This stirred up a lot of controversy at the time: even after clever reductions of the problem, the computer had to check so many cases that it was impossible for a human to confirm them all by hand. In what way, sceptics asked, was this a rigorous proof if it relied on the correct functioning of electronic circuits — one could check the algorithm, but could one be sure the implementation was infallible? Proponents of computer-aided proofs replied by pointing out that the human brain itself is pretty fallible: any proof that has been checked by a finite number of humans surely still has a non-zero probability of being wrong. This, of course, raises the question of whether mathematics is really as logically watertight as we’d all like to believe!

These days the validity of computer proofs is widely accepted, at least in those areas of mathematics where they’ve proven to be a useful tool. It’s fair to say, though, that not everybody likes them from an aesthetic point of view. For most of us, a brute-force proof might establish *that* the theorem is true, but it leaves us somehow feeling that it doesn’t explain *why* it’s true. Whether there is a sensible distinction between these is a question that it’s worth your while to ponder…

(DP)