World leaders gathered for the first Artificial Intelligence Safety Summit at Bletchley Park, UK. /Leon Neal/Reuters
At an inaugural Artificial Intelligence (AI) Safety Summit at Bletchley Park, home of Britain's World War II code-breakers, political leaders from the U.S., European Union and China agreed to share a common approach to identifying risks and ways to mitigate them.
So, was this summit a success? Well, yes and no.
It was a significant achievement to get 28 countries including the U.S., the EU nations and China to attend in person and to sign the Bletchley Declaration - an agreement for "global co-operation on tackling the risks" and to keep AI "human-centric, trustworthy, and responsible."
Not only that, but all the big tech companies attended the summit, including Elon Musk of X.
There was also agreement that "like-minded governments" will be able to test eight leading tech companies' AI models before they are released.
But how will this agreement be implemented? Will governments legally compel the companies to submit their models for testing, or will the companies submit them voluntarily? And in the case of the former, if a company refuses to hand over its data, what recourse will governments have?
The wording of the Bletchley Declaration was also deliberately loose. It may be an agreement to maximize the benefits of AI and eliminate the risks through global cooperation, but how do we get there? And is every signatory on the same page?
Stifling innovation?
The U.S. is requiring companies to hand over AI safety research data under President Joe Biden's recently signed executive order, while the EU is passing similar legislation and China already has rules in place. The UK, by contrast, has said it will not legislate until it fully understands the problem, with Prime Minister Rishi Sunak suggesting "light touch regulation" last week.
Critics of regulation and legislation say it stifles innovation, but supporters say it stops dangers from developing before the genie is out of the bottle. Experts say the key is to strike the right balance. On top of that, any global framework has to have an agreed set of legally binding rules while still allowing for healthy competition to drive innovation.
It is also worth noting that China was absent from the two afternoon meetings attended by what were described as "like-minded" countries, the same ones that had signed the AI model-testing agreement. But was China invited to those meetings? CGTN Europe asked the UK Foreign Office for clarification and, at the time of publishing, it had not responded. Either way, day two of this summit did not seem as inclusive as the first.
The next summit will be held online, hosted by South Korea in six months' time, with an in-person summit scheduled to take place in France next year. The UK has started the conversation and everyone agrees on where we want to get to, but the question remains: how are we going to get there together?