At the Web Summit in Lisbon, MIT physicist Max Tegmark delivered a stark warning about the rapid advance of artificial intelligence. Speaking with RAZOR's Neil Cairns, he said companies are racing to build systems that may one day surpass human intelligence, yet these efforts are happening without the safety checks that other high-risk technologies must follow.
Tegmark explained that while today's AI tools are narrow and task-focused, researchers are pushing toward Artificial General Intelligence and even Superintelligence. Such systems could learn, adapt, and act in the world on their own. He argues that the real danger comes from machines that combine great intelligence, broad abilities, and physical autonomy, because humans might not be able to control them.
Despite the risks, Tegmark believes the situation can still be managed. He points out that industries from aviation to medicine require strict testing before products reach the public, while AI companies face almost no mandatory safety rules. He says governments simply need to apply the same basic standards to AI.
Tegmark ends on a hopeful note. Public concern is growing, and many experts across countries are calling for limits on systems that could escape human control. If society acts in time, he says, AI can bring major scientific and medical advances without putting humanity at risk.
In 2014, Tegmark co-founded the Future of Life Institute, which campaigns for AI safety and pushes for regulation of companies developing AI.