On 27 January 2014, Google made its biggest European acquisition to date, when it splashed out £400 million on DeepMind, a small startup with a grand ambition: “To solve intelligence and then to use that to solve everything else.”
DeepMind wants to find the holy grail of AI: an artificial general intelligence, a cognitive system as broad as a human brain and capable of completing a vast range of tasks and perhaps overcoming the greatest threats to civilisation. As DeepMind CEO Demis Hassabis put it in an
interview with the Guardian: “What we’re working on is potentially a meta-solution to any problem.”
Hassabis cofounded DeepMind in 2010 alongside Shane Legg, a machine learning researcher from New Zealand, and childhood friend Mustafa Suleyman, who had dropped out of a degree in philosophy and theology at Oxford to set up a counselling service for young Muslims.
Suleyman gave a rare insight into DeepMind’s work during a machine learning conference in London in June 2015.
“Our deep learning tool has now been deployed in many environments, particularly across Google in many of our production systems,” he said.
“In image recognition, it was famously used in 2012 to achieve very accurate recognition on around a million images, with about a 16 percent error rate. Very shortly after that it was reduced dramatically to about 6 percent, and today we’re at about 5.5 percent. This is very much comparable with the human level of ability, and it’s now deployed in Google+ Image Search and elsewhere in Image Search across the company.
“As you can see on Google Image Search on G+, you’re now able to type a word into the search box and it will recall images from your photographs that you’ve never actually hand-labelled yourself. We’ve also used it for text transcription. We use it to identify text on shopfronts and maybe alert people to a discount that’s available in a particular shop, or what the menu says in a given restaurant. We do that with an extremely high level of accuracy today. It’s being used in Local Search and elsewhere across the company.
“We also use the same core system across Google for speech recognition. It trains in less than five days. In 2012 it delivered a 30 percent reduction in error rate against the existing old-school system. This was the biggest single improvement in speech recognition in 20 years, again using the same very general deep learning system across all of these.
“Across Google we use what we call Tool AI or Deep Learning Networks for fraud detection, spam detection, handwriting recognition, image search, speech recognition, Street View detection, translation. Sixty handcrafted rule-based systems have now been replaced with deep learning based networks. This gives you a sense of the kind of generality, flexibility and adaptiveness of the kind of advances that have been made across the field and why Google was interested in DeepMind.”
DeepMind’s tech and applications have evolved rapidly since then, but to understand how it happened we have to go back to the 1980s, when a chess prodigy in Finchley, north London, was thinking about how the mind works.
Gaming the system
Hassabis has been described as “the brains behind DeepMind”.
The son of a Greek Cypriot toy salesman and a Chinese-Singaporean John Lewis employee, Hassabis reached the rank of chess master at 13, completed his A-levels at 16, and was accepted by Cambridge but told to wait a year before enrolling. He used the academic hiatus to help design the Theme Park video game, and went on to graduate at 20 with a double first in computer science.
He then set up his own video games studio before returning to academia for a PhD in cognitive neuroscience at University College London (UCL), where he met his fellow DeepMind founder Shane Legg, then a research associate at UCL’s Gatsby Computational Neuroscience Unit.
Hassabis brought his old friend Suleyman on board, and the trio founded DeepMind together. Their first experiment harked back to Hassabis’s video game career: they taught an algorithm to play classic Atari games, including Space Invaders and Pong.
The convolutional neural network they built for this task was the first deep learning model to successfully learn control policies directly from high-dimensional sensory input using reinforcement learning. This impressive achievement was soon surpassed by subsequent forays into gaming.
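The Atari result rested on combining a convolutional network with reinforcement learning: the agent plays, receives the game score as its only feedback, and gradually adjusts its estimate of how valuable each action is. A minimal sketch of that feedback loop, using a plain lookup table in place of DeepMind’s deep network and a hypothetical one-dimensional “corridor” game in place of an Atari title:

```python
import random

# Toy illustration of the reinforcement-learning loop behind DeepMind's
# Atari agent. The real system read raw pixels with a deep convolutional
# network; here a small lookup table stands in for that network, and the
# "game" is a made-up corridor: move left or right, with a reward only
# for reaching the rightmost cell.

N_STATES = 5          # cells 0..4; reaching cell 4 ends the episode
ACTIONS = [-1, +1]    # move left or move right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

def train(episodes=500, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]  # q[state][action index]
    for _ in range(episodes):
        s = 0
        while s != N_STATES - 1:
            # epsilon-greedy: mostly exploit current estimates, sometimes explore
            if rng.random() < EPSILON:
                a = rng.randrange(2)
            else:
                a = max(range(2), key=lambda i: q[s][i])
            s2 = min(max(s + ACTIONS[a], 0), N_STATES - 1)
            r = 1.0 if s2 == N_STATES - 1 else 0.0
            # the core Q-learning update: nudge the estimate toward
            # reward plus discounted value of the best next action
            q[s][a] += ALPHA * (r + GAMMA * max(q[s2]) - q[s][a])
            s = s2
    return q

if __name__ == "__main__":
    q = train()
    # after training, "move right" (index 1) dominates in every cell
    print([max(range(2), key=lambda i: q[s][i]) for s in range(N_STATES - 1)])
```

The single line updating `q[s][a]` is the learning rule at the heart of the approach; DeepMind’s contribution was making that rule work stably when the lookup table is replaced by a deep convolutional network reading a screen’s worth of pixels.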
Games had long been the setting for memorable feats in computer science before DeepMind took them on, from Deep Blue defeating Garry Kasparov at chess to IBM’s Watson winning at Jeopardy!, but one always seemed too complex for a machine to master: a 2,500-year-old Chinese board game called Go.
In chess, there are around 20 possible moves for the average position. In Go, that grows to 200, on a board that has more possible configurations than the number of atoms in the universe.
DeepMind took on the challenge. It developed a system called AlphaGo that taught itself to play the game by studying historical data on positions and moves made by expert players and then devising its own strategies. In October 2015, AlphaGo became the first program to defeat a professional Go player, beating European champion Fan Hui; two years later it defeated Ke Jie, the world’s number one player.
“Ultimately, we want to apply these techniques to important real-world problems like climate modelling or complex disease analysis, right?” Hassabis told the Guardian. “So it’s very exciting to start imagining what it might be able to tackle next…”
Developing use cases
The subsequent expansion of DeepMind from research into
real-world applications has not been straightforward.
In September 2015, DeepMind partnered with the Royal Free NHS Trust to develop a patient safety app called Streams that reviews test results for signs of sickness and sends staff instant alerts if an urgent assessment is required. The app also helps clinicians to quickly check for other serious conditions such as acute kidney injury and displays results of blood tests, scans, and x-rays at the touch of a button.
Nurses said the app saved them up to two hours a day, but the data-sharing agreement soon fell foul of privacy laws.
In July 2017, the Information Commissioner’s Office (ICO) ruled that the Royal Free had
breached the Data Protection Act by providing DeepMind with the personal data of around 1.6 million patients.
Concerns grew the following year when Google announced that it would take direct control of DeepMind Health, contrary to the initial acquisition agreement that allowed DeepMind to operate independently. Critics feared the change would shift the company’s focus from research to products, while privacy campaigners worried that Google would now have access to NHS patient records.
Despite these challenges, DeepMind’s team has grown rapidly to over 700 employees – 400 of whom have PhDs – plucked from overseas institutions and the golden triangle of research universities in Cambridge, London and Oxford, including through “acqui-hires” of the academics behind Dark Blue Labs and Vision Factory.
Across six floors of a King’s Cross office block, they continue to develop new AI applications, from NHS systems that recognise sight-threatening eye diseases in a digital scan of the eye to an Android feature that boosts battery performance by predicting which apps users will open next.
DeepMind’s future is unclear, but its AI leads the world and its founders say they remain committed to their humanitarian mission.
“I’d be much more pessimistic about the way the world is going to go if I didn’t know there was something as game-changing as AI on the way,” Hassabis
told The Times in December 2018.
“There are so many problems out there, from Alzheimer’s disease to climate change, that are hugely complex and where we seem to be making almost no progress. Either we need an exponential improvement in human behaviour or an exponential improvement in technology, and the world doesn’t look like it’s getting its act together on the former.”