Dr Geoffrey Hinton: why did the ‘Godfather of AI’ leave Google, and what did he say about the risks of deep learning?


The pioneering researcher said ‘it's time to retire’ as he warned about the dangers posed by deep learning

Geoffrey Hinton, the man widely regarded as the godfather of artificial intelligence (AI), has resigned from his position at Google in order to raise awareness of the mounting risks posed by the field’s advancements.

Dr Hinton, 75, told the New York Times in a statement that he was leaving Google and that he now regretted his work. Current AI systems like ChatGPT are the result of Hinton’s groundbreaking work in the fields of neural networks and deep learning.

Neural networks are computing systems that learn and process information in ways loosely modelled on the human brain. Rather than following hand-written rules, they improve from examples, much as a person learns from experience. Training networks built from many layers of these artificial neurons is known as deep learning.
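To make the idea concrete, here is a minimal, hypothetical sketch (not Hinton’s code, and far simpler than systems like ChatGPT) of a tiny neural network learning a function purely from examples, using the gradient-descent training loop that deep learning is built on:

```python
import numpy as np

# A toy two-layer neural network taught XOR purely from examples,
# trained by gradient descent (backpropagation in miniature).
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.standard_normal((2, 8))   # input -> hidden weights
b1 = np.zeros(8)
W2 = rng.standard_normal((8, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for _ in range(10_000):
    # Forward pass: turn inputs into a prediction.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((out - y) ** 2)))

    # Backward pass: how should each weight change to reduce the error?
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Nudge the weights downhill: this step is the "learning".
    lr = 1.0
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

preds = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
print("final loss:", losses[-1])
```

The network is never told the XOR rule; its error simply shrinks as the weights are adjusted against the examples, which is the sense in which such systems “learn from experience”.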

The British-Canadian cognitive psychologist and computer scientist told the BBC that AI chatbots may soon surpass the amount of knowledge a human brain can store. Here is everything you need to know.

Why has he left Google?

Hinton joined Google in 2013, when the company acquired DNNresearch, a startup co-founded by Hinton and two of his former students, Alex Krizhevsky and Ilya Sutskever.

DNNresearch was focused on developing neural network technologies, and the acquisition was seen as a major coup for Google in its efforts to advance its machine learning capabilities.

After the acquisition, Hinton became a Distinguished Researcher at Google Brain, the company’s AI research division, and continued to work on developing deep learning models and advancing the state of the art in AI.

Hinton’s work at Google Brain was instrumental in the development of many of Google’s AI-powered products and services, including the Google Assistant, Google Translate and Google Photos.

Computer scientist Geoffrey Hinton (left) has warned about the potential dangers of AI (Photo: Eviatar Bach/Wikimedia Commons/Getty Images)

Hinton emphasised that the tech giant had been “very responsible” in its approach to AI, and that he did not wish to criticise Google. “I actually want to say some good things about Google,” he said. “And they’re more credible if I don’t work for Google.”

But he said that AI chatbots like ChatGPT may eventually be able to store more information than the human brain can.

“Right now, what we’re seeing is things like GPT-4 eclipses a person in the amount of general knowledge it has and it eclipses them by a long way. In terms of reasoning, it’s not as good, but it does already do simple reasoning,” he said. “And given the rate of progress, we expect things to get better quite fast. So we need to worry about that.”

He added: “I’ve come to the conclusion that the kind of intelligence we’re developing is very different from the intelligence we have. We’re biological systems and these are digital systems. And the big difference is that with digital systems, you have many copies of the same set of weights, the same model of the world.

“And all these copies can learn separately but share their knowledge instantly. So it’s as if you had 10,000 people and whenever one person learnt something, everybody automatically knew it. And that’s how these chatbots can know so much more than any one person.”
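The mechanism Hinton describes can be sketched in a few lines. This is an illustrative toy (the model, data and learning rate are invented for the example, not drawn from any real system): ten “copies” each learn from their own slice of data, but every update flows into a single shared set of weights, so anything one copy learns is instantly available to all.

```python
import numpy as np

# Toy illustration of shared weights: many copies of one model learn
# separately, but pool their updates into the same weight vector.
rng = np.random.default_rng(1)

true_w = np.array([2.0, -3.0])   # the relationship to be learned
shared_w = np.zeros(2)           # ONE set of weights shared by all copies

def gradient(w, X, y):
    # Gradient of mean squared error for the linear model y = X @ w.
    return 2 * X.T @ (X @ w - y) / len(y)

for step in range(200):
    grads = []
    # Each of 10 copies sees its own batch of "experience"...
    for _ in range(10):
        X = rng.standard_normal((16, 2))
        y = X @ true_w
        grads.append(gradient(shared_w, X, y))
    # ...but their learning is merged into the single shared weights,
    # so every copy instantly "knows" what the others learned.
    shared_w -= 0.05 * np.mean(grads, axis=0)

print(shared_w)
```

Biological brains have no equivalent of that merge step: a person’s learned connections cannot be copied into another skull, which is the asymmetry Hinton is pointing at.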

(Photo: Leon Neal/Getty Images)

Hinton has previously expressed concerns about the potential dangers of artificial intelligence and the need for researchers to be mindful of these risks. In particular, he has warned about the possibility of AI systems becoming too powerful and unpredictable, leading to unintended consequences.

In the recent New York Times piece, Hinton referred to “bad actors” who would try to exploit AI for “bad things”.

“You can imagine, for example, some bad actor like [Russian President Vladimir] Putin decided to give robots the ability to create their own sub-goals,” he told the BBC. The scientist said this might eventually “create sub-goals like ‘I need to get more power’”.

In a 2017 interview with the BBC, Hinton stated that “we should be very careful about how we use AI because it has the potential to do a lot of good, but it also has the potential to do a lot of harm.”

He went on to explain that one of the biggest risks of AI is the possibility that it could be used to create autonomous weapons, which could cause widespread destruction if they fall into the wrong hands.

What has he previously said about AI?

Hinton has also previously warned about the potential for AI systems to be used to manipulate people or perpetuate bias and discrimination.

He has argued that researchers need to be vigilant about ensuring that AI systems are designed in a way that is fair and ethical, and that they do not perpetuate or amplify existing biases in society.

Despite these concerns, Hinton has also remained optimistic about the potential for AI to do good, and has been a vocal proponent of its potential benefits and the transformative impact it could have on a range of fields.

He has argued that AI has the potential to revolutionise healthcare, transportation, education, and many other areas of society, and that it could ultimately help us solve some of the world’s most pressing problems.

In particular, Hinton has highlighted the potential of AI to accelerate scientific research and discovery, by enabling researchers to analyse vast amounts of data and identify patterns and insights that would be impossible to uncover using traditional methods.

He has also emphasised the potential of AI to improve healthcare outcomes, by enabling doctors to make more accurate diagnoses and identify the most effective treatments for their patients.

Hinton has also discussed the potential of AI to transform transportation, by enabling the development of self-driving cars that could reduce accidents, decrease traffic congestion, and increase accessibility for people with disabilities.

Additionally, he has emphasised the potential of AI to improve education outcomes, by enabling personalised learning experiences tailored to individual students’ needs and abilities.

Who is Geoffrey Hinton?

Hinton is a renowned computer scientist and artificial intelligence researcher, considered one of the pioneers of the field of deep learning, which has enabled many of the recent breakthroughs in AI.

He has made significant contributions to the field of neural networks and deep learning, including the development of backpropagation, a widely used algorithm for training neural networks.

He has also been a key figure in the development of convolutional neural networks (CNNs), which have revolutionised computer vision, and recurrent neural networks (RNNs), which have been applied to natural language processing tasks.

In recognition of his contributions, Hinton has received numerous awards and honours, including the Turing Award, which is considered the highest honour in computer science, in 2018.
