On the seventh move of the crucial deciding game, black made what some now consider to have been a critical error. When black mixed up the move order of the Caro-Kann defence, white took advantage, sacrificing a knight to launch a powerful attack. In just 11 more moves, white had built a position so strong that black had no option but to concede defeat. The loser reacted with a cry of foul play – one of the most strident accusations of cheating ever made in a tournament – which ignited an international conspiracy theory that is still debated 20 years later.
This was no ordinary game of chess. It’s not uncommon for a defeated player to accuse their opponent of cheating – but in this case the loser was the then world chess champion, Garry Kasparov. The victor was even more unusual: the IBM supercomputer Deep Blue.
In defeating Kasparov on May 11 1997, Deep Blue made history as the first computer to beat a world champion in a six-game match under standard time controls. Kasparov had won the first game, lost the second and then drawn the following three. When Deep Blue took the match by winning the final game, Kasparov refused to believe it.
In an echo of the chess automaton hoaxes of the 18th and 19th centuries, Kasparov argued that the computer must actually have been controlled by a real grandmaster. He and his supporters believed that Deep Blue’s playing was too human to be that of a machine. Meanwhile, to many of those in the outside world who were convinced by the computer’s performance, it appeared that artificial intelligence had reached a stage where it could outsmart humanity – at least at a game that had long been considered too complex for a machine.
Yet the reality was that Deep Blue won precisely because of its rigid, inhuman commitment to cold, hard logic in the face of Kasparov’s emotional behaviour. This wasn’t artificial (or real) intelligence that demonstrated our own creative style of thinking and learning, but the application of simple rules on a grand scale.
What the match did do, however, was signal the start of a societal shift that is gaining speed and influence today. The kind of vast data processing that Deep Blue relied on is now found in nearly every corner of our lives, from the financial systems that dominate the economy to online dating apps that try to find us the perfect partner. What started as a student project helped usher in the age of big data.
A human error
The basis of Kasparov’s claims went all the way back to a move the computer made in the second game of the match, the first in the competition that Deep Blue won. Kasparov had played to encourage his opponent to take a “poisoned” pawn, a sacrificial piece positioned to entice the machine into making a fateful move. This was a tactic that Kasparov had used against human opponents in the past.
What surprised Kasparov was Deep Blue’s subsequent move. Kasparov called it “human-like”. John Nunn, the English chess grandmaster, described it as “stunning” and “exceptional”. The move left Kasparov riled and ultimately threw him off his strategy. He was so perturbed that he eventually walked away, resigning the game. Worse still, he never fully recovered, drawing the next three games and then making the error that led to his demise in the final game.
The move was based on the strategic advantage that a player can gain from creating an open file, a column of squares on the board (as viewed from above) that contains no pawns. This can create an attacking route, typically for rooks or queens, free from pawns blocking the way. During training with the grandmaster Joel Benjamin, the Deep Blue team had learnt that there was sometimes a more strategic option than opening a file and then moving a rook to it: piling pieces onto the file first, and then choosing when to open it up.
When the programmers learned this, they rewrote Deep Blue’s code to incorporate the tactic. During the game, the computer used the threat of a potential open file to put pressure on Kasparov, forcing him to defend on every move. That psychological advantage eventually wore Kasparov down.
From the moment Kasparov lost, speculation and conspiracy theories started. The conspiracists claimed that IBM had used human intervention during the match. IBM denied this, stating that, in keeping with the rules, the only human intervention came between games, to rectify bugs that had been identified during play. They also rejected the claim that the programming had been adapted to Kasparov’s style of play; instead, they said, they had relied on the computer’s ability to search through huge numbers of possible moves.
IBM’s refusal of Kasparov’s request for a rematch and the subsequent dismantling of Deep Blue did nothing to quell suspicions. IBM also delayed the release of the computer’s detailed logs, which Kasparov had requested, until after the decommissioning. But subsequent detailed analysis of the logs has added new dimensions to the story, including the understanding that Deep Blue made several big mistakes.
There has since been speculation that Deep Blue only triumphed because of a bug in the code during the first game. One of Deep Blue’s designers has said that when a glitch prevented the computer from selecting one of the moves it had analysed, it instead made a random move that Kasparov misinterpreted as a deeper strategy.
He managed to win the game and the bug was fixed for the second round. But the world champion was supposedly so shaken by what he saw as the machine’s superior intelligence that he was unable to recover his composure and played too cautiously from then on. He even missed the chance to come back from the open file tactic when Deep Blue made a “terrible blunder”.
Whichever of these accounts of Kasparov’s reactions to the match is true, both point to the fact that his defeat was at least partly down to the frailties of human nature. He over-thought some of the machine’s moves and became unnecessarily anxious about its abilities, making errors that ultimately led to his defeat. Deep Blue didn’t possess anything like the artificial intelligence techniques that have since helped computers win at far more complex games, such as Go.
But even if Kasparov was more intimidated than he needed to be, there is no denying the stunning achievements of the team that created Deep Blue. Its ability to take on the world’s best human chess player was built on some incredible computing power, which launched the IBM supercomputer programme that has paved the way for some of the leading-edge technology available today. What makes this even more remarkable is that the project started not as an extravagant venture by one of the world’s largest computer manufacturers but as a student thesis in the 1980s.
Chess race
When Feng-Hsiung Hsu arrived in the US from Taiwan in 1982, he can’t have imagined that he would become part of an intense rivalry between two teams that spent almost a decade vying to build the world’s best chess computer. Hsu had come to Carnegie Mellon University (CMU) in Pennsylvania to study the design of the integrated circuits that make up microchips, but he also held a longstanding interest in computer chess. He attracted the attention of the developers of Hitech, the computer that in 1988 would become the first to beat a chess grandmaster, and was asked to assist with hardware design.
But Hsu soon fell out with the Hitech team after discovering what he saw as an architectural flaw in their proposed design. Together with several other PhD students, he began building his own computer, known as ChipTest, drawing on the architecture of Bell Laboratories’ chess machine, Belle. ChipTest’s custom technology used what’s known as “very large-scale integration” to combine thousands of transistors onto a single chip, allowing the computer to search through 500,000 chess moves each second.
Although the Hitech team had a head start, Hsu and his colleagues would soon overtake them with ChipTest’s successor. Deep Thought – named after the computer in Douglas Adams’ The Hitchhiker’s Guide to the Galaxy built to find the meaning of life – combined two of Hsu’s custom processors and could analyse 720,000 moves a second. This enabled it to win the 1989 World Computer Chess Championship without losing a single game.
But Deep Thought hit a road block later that year when it came up against (and lost to) the reigning world chess champion, one Garry Kasparov. To beat the best of humanity, Hsu and his team would need to go much further. Now, however, they had the backing of computing giant IBM.
Chess computers work by attaching a numerical value to the position of each piece on the board using a formula known as an “evaluation function”. These values can then be processed and searched to determine the best move to make. Early chess computers, such as Belle and Hitech, used multiple custom chips to run the evaluation functions and then combine the results together.
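To make the idea concrete, here is a minimal sketch of an evaluation function in Python. It scores material only, using the conventional piece values; the board representation (a dictionary mapping squares to piece letters) is an assumption of this illustration, and Deep Blue’s real evaluation function weighed thousands of positional features beyond raw material.

```python
# A minimal, material-only evaluation function, for illustration.
# The board representation (square -> piece letter) is an assumption
# of this sketch, not Deep Blue's; positive scores favour white.
PIECE_VALUES = {
    "P": 1, "N": 3, "B": 3, "R": 5, "Q": 9, "K": 0,       # white pieces
    "p": -1, "n": -3, "b": -3, "r": -5, "q": -9, "k": 0,  # black pieces
}

def evaluate(board: dict) -> int:
    """Score a position by summing the values of every piece on the board."""
    return sum(PIECE_VALUES[piece] for piece in board.values())

# Example: white is a knight up, so the position scores +3.
position = {"e1": "K", "e8": "k", "f3": "N", "a2": "P", "a7": "p"}
print(evaluate(position))  # 3
```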
The problem was that the communication between the chips was slow and used up a lot of processing power. What Hsu did with ChipTest was to redesign and repackage the processors into a single chip. This removed a number of processing overheads such as off-chip communication and made possible huge increases in computational speed. Whereas Deep Thought could process 720,000 moves a second, Deep Blue used large numbers of processors running the same set of calculations simultaneously to analyse 100,000,000 moves a second.
Increasing the number of moves the computer could process was important because chess computers have traditionally used what is known as “brute force” techniques. Human players learn from past experience to instantly rule out certain moves. Chess machines, certainly at that time, did not have that capability and instead had to rely on their ability to look ahead at what could happen for every possible move. They used brute force in analysing very large numbers of moves rather than focusing on certain types of move they already knew were most likely to work. Increasing the number of moves a machine could look at in a second gave it the time to look much further into the future at where different moves would take the game.
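The classic formalisation of this brute-force look-ahead is the minimax algorithm: examine every continuation to a fixed depth, evaluate the resulting positions, and back the scores up the tree with each side choosing its best option. The sketch below is a deliberately simplified, runnable illustration; to stay self-contained it searches a tiny hand-built game tree whose leaves are evaluation scores rather than real chess positions. Deep Blue built refinements such as alpha-beta pruning and custom search hardware on top of this basic idea.

```python
# A minimal, runnable sketch of brute-force look-ahead (minimax).
# The "game" is a hand-built tree whose leaves are evaluation scores,
# an assumption made so the example is self-contained; a chess engine
# would generate this tree on the fly from the legal moves.

def minimax(node, maximizing):
    """Back leaf scores up the tree, each side choosing its best option."""
    if isinstance(node, (int, float)):   # leaf: an already-evaluated position
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Each inner list is a position; its elements are the positions reachable
# in one move. White (the maximiser) moves first.
game_tree = [
    [3, [5, -2]],   # continuations after white's first candidate move
    [[0, 7], -4],   # continuations after white's second candidate move
]
print(minimax(game_tree, maximizing=True))  # 3: best play by both sides
```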
By February 1996, the IBM team were ready to take on Kasparov again, this time with Deep Blue. Although it became the first machine to beat a world champion in a game under regular time controls, Deep Blue lost the overall match 4-2. Its 100,000,000 moves a second still weren’t enough to beat the human ability to strategise.
To up the move count, the team began upgrading the machine by exploring how they could optimise large numbers of processors working in parallel – with great success. The final machine was a 30-processor supercomputer that, more importantly, controlled 480 custom integrated circuits designed specifically to play chess. This custom design was what allowed the team to optimise the parallel computing power across the chips so effectively. The result was a new version of Deep Blue (sometimes referred to as Deeper Blue) capable of searching around 200,000,000 moves per second. This meant it could explore how each possible strategy would play out up to 40 or more moves into the future.
Parallel revolution
By the time the rematch took place in New York City in May 1997, public curiosity was huge. Reporters and television cameras swarmed around the board and were rewarded with a story when Kasparov stormed off following his defeat and cried foul at a press conference afterwards. But the publicity around the match also helped establish a greater understanding of how far computers had come. What most people still had no idea about was how the technology behind Deep Blue would help spread the influence of computers to almost every aspect of society by transforming the way we use data.
Complex computer models are today used to underpin banks’ financial systems, to design better cars and aeroplanes, and to trial new drugs. Systems that mine large datasets (often known as “big data”) to look for significant patterns are involved in planning public services such as transport or healthcare, and enable companies to target advertising to specific groups of people.
These are highly complex problems that require rapid processing of large and complex datasets. Deep Blue gave scientists and engineers significant insight into the massively parallel multi-chip systems that have made this possible. In particular, it demonstrated the capabilities of a general-purpose computer system controlling a large number of custom chips designed for a specific application.
The science of molecular dynamics, for example, involves studying the physical movements of molecules and atoms. Custom chip designs have enabled computers to model molecular dynamics, looking ahead to see how new drugs might react in the body much as a chess machine looks ahead at different moves. Molecular dynamics simulations have helped speed up the development of successful drugs, such as some of those used to treat HIV.
For very broad applications such as modelling financial systems and data mining, designing custom chips for each individual task would be prohibitively expensive. But the Deep Blue project helped develop the techniques to code and manage highly parallelised systems that split a problem over a large number of processors, as in the sketch below.
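The pattern is straightforward to demonstrate at toy scale. The sketch below uses Python’s standard multiprocessing library to split one computation across the available processor cores and then combine the partial results; the workload here (summing squares over chunks of numbers) is a placeholder assumption standing in for the positions, transactions or records a real system would handle.

```python
# A toy sketch of the split-a-problem-over-many-processors pattern.
# The per-chunk workload is an invented placeholder for illustration.
from multiprocessing import Pool

def score_chunk(chunk):
    """Stand-in for expensive work done independently on one core."""
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    # Split the input into chunks, farm them out in parallel, recombine.
    chunks = [data[i:i + 100_000] for i in range(0, len(data), 100_000)]
    with Pool() as pool:
        partial_results = pool.map(score_chunk, chunks)
    print(sum(partial_results))  # matches the serial computation
```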
Today, many systems for processing large amounts of data rely on graphics processing units (GPUs) instead of custom-designed chips. These were originally designed to produce images on a screen, but they too handle information using lots of processors working in parallel. So they are now often used in high-performance computers processing large datasets and to run powerful artificial intelligence tools such as Facebook’s digital assistant. There are obvious similarities with Deep Blue’s architecture here: custom chips (built for graphics) controlled by general-purpose processors to drive efficiency in complex calculations.
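A rough, CPU-side flavour of that data-parallel style can be shown with NumPy, which applies a single arithmetic operation across millions of elements at once rather than looping over them one by one. The financial-returns calculation here is an invented example; on a GPU, the same expression would be spread across thousands of cores.

```python
# A small taste of the data-parallel style GPUs accelerate: one vectorised
# operation applied to a million elements at once. The synthetic "prices"
# are invented data for illustration only.
import numpy as np

prices = np.random.uniform(100.0, 200.0, size=1_000_000)

# One expression computes a log-return for every consecutive pair of prices.
returns = np.log(prices[1:] / prices[:-1])
print(returns.mean())
```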
The world of chess-playing machines, meanwhile, has evolved since the Deep Blue victory. Despite his experience with Deep Blue, Kasparov agreed in 2003 to take on two of the most prominent chess machines, Deep Fritz and Deep Junior. Both times he managed to avoid defeat, although he still made errors that left the matches drawn. However, both machines convincingly beat their human counterparts in the 2004 and 2005 Man vs Machine World Team Championships.
Junior and Fritz marked a change in the approach to developing systems for computer chess. Whereas Deep Blue was a custom-built computer relying on the brute force of its processors to analyse millions of moves, these new chess machines were software programs that used learning techniques to minimise the searches needed. Such programs can beat brute-force machines while running on nothing more than a desktop PC.
But despite this advance, we still don’t have chess machines that resemble human intelligence in the way they play the game – they don’t need to. And, if anything, the victories of Junior and Fritz further strengthen the idea that human players lose to computers, at least in part, because of their humanity. The humans made errors, became anxious and feared for their reputations. The machines, on the other hand, relentlessly applied logical calculations to the game in their attempts to win. One day we might have computers that truly replicate human thinking, but the story of the last 20 years has been the rise of systems that are superior precisely because they are machines.
Mark Robert Anderson is Professor in Computing and Information Systems at Edge Hill University
This article was originally published on The Conversation.