GZERO AI

Global researchers sign new pact to make AI a “global public good”

Annie Gugliotta
A coalition of 21 influential artificial intelligence researchers and technology policy professionals signed a new agreement – the Manhattan Declaration on Inclusive Global Scientific Understanding of Artificial Intelligence – at the United Nations General Assembly in New York on Thursday, Sept. 26.

The declaration comes one week after the UN Secretary-General's High-Level Advisory Body on Artificial Intelligence (HLAB-AI) released its final report detailing seven recommendations for the UN to promote responsible and safe AI governance.

The Manhattan Declaration, which shares some signatories with the HLAB-AI group – including Google’s James Manyika, former Spanish government official Carme Artigas, and the Institute for Advanced Study’s Alondra Nelson – is a 10-point statement seeking to shape the contours of future AI development. It asks researchers to promote scientific cooperation across diverse and inclusive perspectives, conduct transparent research and risk assessment on AI models, and commit to responsible development and use, among other priorities. Nelson co-sponsored the declaration alongside University of Montreal professor Yoshua Bengio, and other signatories include officials from Alibaba, IBM, the Carnegie Endowment for International Peace, and the Center for AI Safety.

This is meant to foster AI as a “global public good,” as the signatories put it.

“We reaffirm our commitment to developing AI systems that are beneficial to humanity and acknowledge their pivotal role in attaining the global Sustainable Development Goals, such as improved health and education,” they wrote. “We emphasize that AI systems’ whole life cycle, including design, development, and deployment, must be aligned with core principles, safeguarding human rights, privacy, fairness, and dignity for all.”

That’s the crux of the declaration: Artificial intelligence isn’t just something to be controlled, but a technology that can – if harnessed in a way that respects human rights and privacy – help society solve its biggest problems. During a recent panel conversation led by Eurasia Group and GZERO Media founder and president Ian Bremmer (also a member of the HLAB-AI group), Google’s Manyika cited International Telecommunication Union research that found most of the UN’s Sustainable Development Goals could be achieved with help from AI.

While other AI treaties, agreements, and declarations – such as the UK’s Bletchley Declaration signed last year – include a combination of governments, tech companies, and academics, the Manhattan Declaration focuses on those actually researching artificial intelligence. “As AI scientists and technology-policy researchers, we advocate for a truly inclusive, global approach to understanding AI’s capabilities, opportunities, and risks,” the letter concludes. “This is essential for shaping effective global governance of AI technologies. Together, we can ensure that the development of advanced AI systems benefits all of humanity.”

More For You

What we learned from a week of AI-generated cartoons
Courtesy of ChatGPT
Last week, OpenAI released its GPT-4o image-generation model, which is billed as more responsive to prompts, more capable of accurately rendering text, and better at producing higher-fidelity images than previous AI image generators. Within hours, ChatGPT users flooded social media with cartoons they made using the model in the style of the [...]
The flag of China is displayed on a smartphone with a NVIDIA chip in the background in this photo illustration.

Jonathan Raa/NurPhoto via Reuters
H3C, one of China’s biggest server makers, has warned about running out of Nvidia H20 chips, the most powerful AI chips Chinese companies can legally purchase under US export controls. [...]
North Korean leader Kim Jong Un supervises the test of suicide drones with artificial intelligence at an unknown location, in this photo released by North Korea's official Korean Central News Agency on March 27, 2025.

KCNA via REUTERS
Hermit Kingdom leader Kim Jong Un has reportedly supervised tests of AI-powered kamikaze drones. According to KCNA, the state news agency, he said that developing unmanned aircraft and artificial intelligence should be a top priority in modernizing North Korea’s armed forces. [...]
The logo for Isomorphic Labs is displayed on a tablet in this illustration.

Igor Golovniov/SOPA Images/Sipa USA via Reuters
In 2024, Demis Hassabis won a Nobel Prize in chemistry for his work predicting protein structures at Google DeepMind. Isomorphic Labs, the drug-discovery company he spun out of DeepMind in 2021, raised $600 million from investors in a new funding round led by Thrive Capital on Monday. The company did not disclose a valuation. [...]