Is out-of-control artificial intelligence (AI) a genuine threat to humanity? According to one EU expert, the issue is not so much the technology's progress as the way humans are already using it.
"It is definitely not one of my main concerns," said professor Virginia Dignum, a member of the EU's High Level Expert Group on AI, tells CGTN Europe on the existential risks of generative AI.
"The risk doesn't come from technology, it comes from people using technology wrongly, to accumulate power and wealth, using technology for the wrong reasons," she says.
"It's not about AI being a risk for humanity, but – like always, unfortunately – it's about people being a risk to other people."
The arrival of a slew of new and powerful generative AI chatbots in recent months, most prominently ChatGPT, has triggered growing concerns that the technology is developing faster than humans can control it.
Concerns over AI's rapid and so far largely unchecked progress include its potential to trigger cybersecurity catastrophes or take control of power plants and vital information systems without any human prompting.
Generative AI chatbots like ChatGPT have triggered concerns the technology is developing faster than humans can control it. /Dado Ruvic/Reuters
In a sign that developers are keenly aware of the dangers of their life's work, the Center for AI Safety earlier this week released a statement, signed by executives from OpenAI and DeepMind along with other top AI researchers, warning that AI could potentially wipe out humanity.
"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," the message read.
Focus first on the here and now, 'not the future'
However, Dignum, a professor of Social and Ethical Artificial Intelligence at Umeå University in Sweden, says we have to focus first on the problems humans are already facing because of AI.
"The concern is with the risks and the issues and the wrongdoing that AI systems are already doing now, not the future… in terms of increasing inequality, in terms of bias, in terms of discrimination, in terms of lack of privacy," she says. "Those are issues that we really need to address first."
Directly addressing the warning from some of the top minds in generative AI development, she adds that if the leaders who signed the letter were really concerned, "then they should be the first ones to stop the developments that they are doing and let governments and democratic institutions take over."
Who should regulate AI?
Many of the world's economic powerhouses are in fact already moving to regulate AI's development.
Earlier this month the EU introduced its landmark AI Act, which categorizes applications of AI into four levels of risk: unacceptable risk, high risk, limited risk and minimal or no risk.
Unacceptable-risk applications – for example, systems that use subliminal techniques to distort behavior, exploit the vulnerabilities of specific groups, or infer emotions in law enforcement, border management and the workplace – are banned by default and cannot be used in the bloc.
But Dignum says responsibility for such regulation can no longer fall to institutions alone.
"It's not a matter of who goes first – we all have to go now," she says. "The policymakers, the regulators, the government and also the corporate world – the responsibility is on all of us."
AI expert Virginia Dignum says we need a 'driver's license' for the new generative systems coming out on the market. /Andriy Onufriyenko/Getty Creative
She says corporations need to take accountability for the systems they are developing and releasing into the world, while governments have to take responsibility for ensuring the safety and protection of their citizens.
In fact, following a meeting on Wednesday, the EU and the U.S. said they would soon release a code of conduct on artificial intelligence in the hope of outlining common standards for its development.
The code, however, will only be voluntary, prompting critics to say that legislation is not keeping pace with the development of the technology.
A sense of urgency
Dignum stresses, though, that such meetings are a promising sign that governments are finally starting to show a "sense of urgency" in addressing the risks.
"I think that what we really can hope for from this type of meeting is, again, the need to ensure that there are things put in place to ensure monitoring and that demand accountability for those that are developing it," she says.
"You cannot just drive a car without a driver's license. We need to start thinking about what the driver's license is for the systems that we are putting into the world."
As for legislation keeping pace, she says it is not necessarily the AI itself that should be regulated, but its application.
"I think we are focusing too much on the regulation of technology and too little on the legislation of the effects of that technology," she says. "If a system is taking decisions about my life, I couldn't care less if that system is the most advanced AI, a spreadsheet, a dice, or just some person."
"If that system is really affecting my life in negative ways, there needs to be accountability, she adds. "There needs to be regulations that say that the system cannot do it – and independently of the technology."