Identifying the risks of artificial intelligence development and working collaboratively to mitigate those potential harms is at the heart of a document signed Wednesday by 28 countries at an AI summit hosted by the U.K.

Signatories of the “Bletchley Declaration,” named for the summit venue, Bletchley Park, which was also the site of a historic World War II code-breaking effort, agreed to prioritize efforts to investigate the full impacts and anticipate possible “catastrophic” harms of fast-evolving artificial intelligence tools, and to support “an internationally inclusive network of scientific research on frontier AI safety.”

The declaration also called on entities engaged in developing AI software and applications to take full responsibility for their innovations.

“We encourage all relevant actors to provide context-appropriate transparency and accountability on their plans to measure, monitor and mitigate potentially harmful capabilities and the associated effects that may emerge, in particular to prevent misuse and issues of control, and the amplification of other risks,” the document reads.


The gathering is notable for its wide-ranging participants: dozens of government representatives, China among them, along with leaders of the top companies behind emerging AI innovations, academics and tech industry advocates.

U.S. Vice President Kamala Harris is due to join the summit on Thursday, just three days after her boss, President Joe Biden, announced a sweeping AI-focused executive order. The set of rules aims to create new U.S. regulatory oversight of emerging artificial intelligence technology and build bulwarks against consumer privacy invasions, discrimination and the dissemination of false or misleading information generated by AI-powered tools.

British Prime Minister Rishi Sunak called the declaration “a landmark achievement that sees the world’s greatest AI powers agree on the urgency behind understanding the risks of AI — helping ensure the long-term future of our children and grandchildren,” according to The Associated Press. Harris, however, delivered a speech at the U.S. Embassy in London in which she encouraged Britain and other participating countries to do more, and do it faster, as AI-driven tools are already leading to harmful outcomes.

“There are additional threats that also demand our action, threats that are currently causing harm and to many people also feel existential,” Harris said, per AP.

Even though the declaration document lacks any avenues of accountability or enforcement, British digital minister Michelle Donelan said it was an achievement just to get so many key players in one room, according to Reuters. Donelan announced two further AI safety summits, one to be held in South Korea in six months and another in France six months after that.

“For the first time, we now have countries agreeing that we need to look not just independently but collectively at the risk around frontier AI,” Donelan told reporters.

The concerns over future potential harms of artificial intelligence-driven systems are myriad and include widespread job losses due to automation advancements; AI-generated deepfake videos, photos and/or audio recordings that falsely portray statements or incidents; built-in biases from systems trained on information harvested from internet sources; digital privacy violations; and increased economic inequality as AI advances lead to more benefits for wealthy individuals and corporations.

There are also those who believe that AI tools could advance to the point where they represent an existential threat to the human species at the same level as a potential global nuclear war or unforeseen biological disaster.

Earlier this year, a single-sentence missive issued by the nonprofit Center for AI Safety earned the signatures of a wide-ranging group of distinguished scientists, academics and tech developers, including Turing Award winners Geoffrey Hinton and Yoshua Bengio and leaders of the major AI labs, including Sam Altman of OpenAI and Demis Hassabis of Google DeepMind.


“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement read.

The Center for AI Safety’s director said the time to act on anticipating and mitigating the potential harms of AI systems is now, and that the effort requires global participation.

“Pandemics were not on the public’s radar before COVID-19,” Dan Hendrycks, director of the Center for AI Safety, said in a June press release. “It’s not too early to put guardrails in place and set up institutions so that AI risks don’t catch us off guard.”

“As we grapple with immediate AI risks like malicious use, misinformation, and disempowerment, the AI industry and governments around the world need to also seriously confront the risk that future AIs could pose a threat to human existence. Mitigating the risk of extinction from AI will require global action. The world has successfully cooperated to mitigate risks related to nuclear war. The same level of effort is needed to address the dangers posed by future AI systems.”
