Geoffrey Hinton, the British-Canadian computer scientist often hailed as a "godfather" of Artificial Intelligence (AI), has raised his estimate for the likelihood of AI causing human extinction in the next three decades.
Hinton warned that the rapid pace of AI development is "much faster" than anticipated, leading him to revise his previous assessment of the risks posed by the technology.
In an interview on Friday on BBC Radio 4's Today program, Hinton said there is now a "10 percent to 20 percent" chance that AI could lead to humanity's extinction within the next 30 years. This marks an increase from his earlier estimate, when he put the chance of AI triggering a catastrophic outcome for humanity at 10 percent.
When asked by guest editor Sajid Javid, the former UK Chancellor of the Exchequer, whether he had adjusted his view on the potential for an AI apocalypse, Hinton responded: "We've never had to deal with things more intelligent than ourselves before."
He further explained: "How many examples do you know of a more intelligent thing being controlled by a less intelligent thing? There are very few examples. There's a mother and baby. Evolution put a lot of work into allowing the baby to control the mother, but that's about the only example I know of."
An existential threat
Hinton, a professor emeritus at the University of Toronto, compared the potential relationship between humans and highly advanced AI to that of toddlers and adults. "I like to think of it as: imagine yourself and a three-year-old. We'll be the three-year-olds," he warned.
"I've come to the conclusion that the kind of intelligence we're developing is very different from the intelligence we have," he said. "So it's as if you had 10,000 people and whenever one person learned something, everybody automatically knew it. And that's how these chatbots can know so much more than any one person."
AI, which refers to computer systems capable of performing tasks that typically require human intelligence, has rapidly evolved, raising concerns among experts about its future implications. Hinton has been vocal about the risks of unchecked AI development, especially in light of the potential for "bad actors" to use AI for harmful purposes. He has also expressed concerns that the creation of artificial general intelligence (AGI)—systems smarter than humans—could result in an existential threat, as these systems might eventually evade human control.
Reflecting on his early career, Hinton admitted that he never expected AI to reach its current level of development so quickly. "I didn't think it would be where we [are] now," he said. "I thought at some point in the future we would get here."
He added, "Because the situation we're in now is that most of the experts in the field think that sometime, within probably the next 20 years, we're going to develop AIs that are smarter than people. And that's a very scary thought."
The pace of AI development, Hinton noted, has been "very, very fast, much faster than I expected," and he called for increased government regulation to ensure the technology is developed safely.
"My worry is that the invisible hand is not going to keep us safe. So just leaving it to the profit motive of large companies is not going to be sufficient to make sure they develop it safely," he explained. "The only thing that can force those big companies to do more research on safety is government regulation."
Significant carbon footprint
Hinton is widely regarded as one of the "godfathers of AI," having shared the 2018 ACM A.M. Turing Award with two other pioneers of the field, Yoshua Bengio and Yann LeCun. However, LeCun, the chief AI scientist at Meta, has downplayed the existential risks posed by AI, suggesting that the technology "could actually save humanity from extinction."
Despite these differing views, Hinton's stark warning about the dangers of AI and his call for greater oversight reflect growing concerns within the scientific community about the future of artificial intelligence.
AI systems also have a significant impact on the environment.
According to Google's latest environmental report, the company's greenhouse gas emissions in 2023 were 48 percent higher than in 2019. The tech giant attributes this increase to the growing energy demands of its data centers, a trend accelerated by the rapid expansion of AI.
AI-powered services require significantly more computing power—and thus more electricity—than traditional online activities, raising concerns about the technology's environmental footprint. As AI continues to grow, Google has faced mounting warnings about its impact on sustainability.
While the company has set a target to achieve net-zero emissions by 2030, it acknowledges that "as we further integrate AI into our products, reducing emissions may be challenging."