Breaking: The UN unveils plan for AI
Overnight, and after months of deliberation, a United Nations advisory body studying artificial intelligence released its final report. Aptly called “Governing AI for Humanity,” it is a set of findings and policy recommendations for the international organization and an update to the group’s interim report from December 2023.
“As experts, we remain optimistic about the future of AI and its potential for good. That optimism depends, however, on realism about the risks and the inadequacy of structures and incentives currently in place,” the report’s authors wrote. “The technology is too important, and the stakes are too high, to rely only on market forces and a fragmented patchwork of national and multilateral action.”
Before we dive in, a quick humblebrag and editorial disclosure: Ian Bremmer, founder and president of both Eurasia Group and GZERO Media, served as a rapporteur for the UN High-Level Advisory Body on Artificial Intelligence, the group in charge of the report.
The HLAB-AI report asks the UN to begin working on a “globally inclusive” system for AI governance, calls on governments and stakeholders to develop AI in a way that protects human rights, and makes seven recommendations. Let’s dive into each:
- An international scientific panel on AI: A new group of volunteer experts would issue an annual report on AI risks and opportunities. They’d also contribute regular research on how AI could help achieve the UN’s Sustainable Development Goals, or SDGs.
- Policy dialogue on AI governance: A twice-yearly policy dialogue with governments and stakeholders on best practices for AI governance. It’d have an emphasis on “international interoperability” of AI governance.
- AI standards exchange: This effort would develop common definitions and standards for evaluating AI systems. It’d also create a process for identifying gaps in these definitions and standards and determining how to fill them.
- Capacity development network: A network of new development centers that would provide researchers and social entrepreneurs with expertise, training data, and computing. It’d also develop online educational resources for university students and a fellowship program for individuals to spend time in academic institutions and tech companies.
- Global fund for AI: A new fund that would collect donations from public and private groups and disburse money to “put a floor under the AI divide,” focused on countries with fewer resources to fund AI.
- Global AI data framework: An initiative to set common standards and best practices governing AI training data and its provenance. It’d hold a repository of data sets and models to help achieve the SDGs.
- AI office within the Secretariat: This new office would see the report’s proposals through to implementation and advise the Secretary-General on all matters relating to AI.
The report’s authors conclude by remarking that if the UN is able to chart the right path forward, “we can look back in five years at an AI governance landscape that is inclusive and empowering for individuals, communities, and States everywhere.”
To learn more, Ian will host a UN panel conversation on Saturday, Sept. 21, which you can watch live here. And if you miss it, we’ll have a recap in our GZERO AI newsletter on Tuesday. You can also check out the full report here.
How AI models are grabbing the world's data
In this episode of GZERO AI, Taylor Owen, host of the Machines Like Us podcast, examines the scale and implications of the historic data land grab happening in the AI sector. According to researcher Kate Crawford, AI is the largest superstructure ever built by humans, requiring immense human labor, natural resources, and staggering amounts of data. But how are tech giants like Meta and Google amassing this data?
So AI researcher Kate Crawford recently told me that she thinks AI is the largest superstructure our species has ever built. This is because of the enormous amount of human labor that goes into building AI, the physical infrastructure that's needed for the compute of these AI systems, and the natural resources, the energy and the water, that go into this entire infrastructure. And of course, because of the insane amounts of data that are needed to build our frontier models. It's increasingly clear that we're in the middle of a historic land grab for this data, essentially for all of the data that has ever been created by humanity. So where is all this data coming from, and how are these companies getting access to it? Well, first, they're clearly scraping the public internet. It's safe to say that if anything you've done has been posted to the internet in a public way, it's inside the training data of at least one of these models.
But it's also probably the case that this scraping includes a large amount of copyrighted data, or data that isn't necessarily publicly available. They're probably also getting behind paywalls, as we'll find out soon enough as the New York Times lawsuit against OpenAI works its way through the system, and they're scraping each other's data. According to the New York Times, Google found out that OpenAI was scraping YouTube but didn't reveal it to the public, because they too were scraping all of YouTube themselves and didn't want this getting out. Second, all these companies are purchasing or licensing data. This includes news licensing agreements with publishers, data purchased from data brokers, and purchasing, or getting access to the data of, companies that hold rich data sets. Meta, for example, was considering buying the publisher Simon & Schuster just for access to their copyrighted books in order to train their LLM.
The companies that already have access to rich data sets are obviously at an advantage here, and in particular that means Meta and Google. Meta uses all the public data that's ever been put into its systems, and it has said that even if you don't use its products, your data could still be in there, either from data purchased outside its products, or because, say, you appeared in an Instagram photo, in which case your face is now being used to train its AI. Google has said that it uses anything public that's on its platforms, so an unrestricted Google Doc, for example, will end up in its training dataset. These companies are also acquiring data in creative ways, to say the least. Meta has trained its large language model on a dataset called Books3, which contains over 170,000 pirated and copyrighted books. So where does this all leave us, citizens and users of the internet?
Well, one thing's clear: we can't opt out of this data collection and data use. The opt-out tool Meta provides is hidden and complicated to use, and it requires you to provide proof that your data has been used to train Meta's AI systems before the company will consider removing it from its data sets. This is not the kind of user tool we should expect in democratic societies. So it's pretty clear that we're going to need to do three things. One, we're going to need to scale up our journalism. This is exactly why we have investigative journalism: to hold powerful governments, actors, and corporations in our society to account. Journalism needs to dig deep into who's collecting what data, how these models are being trained, and how they're being built on data collected about our lives and our online experiences. Second, the lawsuits are going to need to work their way through the system, and the discovery that comes with them should be revealing. The New York Times' lawsuit, to take just one of the many against OpenAI, will surely reveal whether paywalled journalism sits within the training data of these AI systems. And finally, there is absolutely no doubt that we need regulation to provide transparency and accountability for the data collection that is driving AI.
Meta recently announced, for example, that it was going to use data it had collected on EU citizens to train its LLM. Immediately after the Irish Data Protection Commission pushed back, Meta announced it was going to pause this activity. This is why we need regulations. People who live in countries or jurisdictions that have strong data protection regulations and AI transparency regimes will ultimately be better protected. I'm Taylor Owen and thanks for watching.
AI plus existing technology: A recipe for tackling global crisis
When a country experiences a natural disaster, satellite technology and artificial intelligence can be used to rapidly gather data on the damage and initiate an effective response, according to Microsoft Vice Chair and President Brad Smith.
But to actually save lives “it's high-tech meets low-tech,” he said during a Global Stage livestream event at UN headquarters in New York on September 22, on the sidelines of the UN General Assembly.
He gave the example of SEEDS, an Indian NGO that dispatches local teens to distribute life-saving aid during heatwaves. He said the program exemplifies the effective combination of “artificial intelligence, technology, and people on the ground.”
The discussion was moderated by Nicholas Thompson of The Atlantic and was held by GZERO Media in collaboration with the United Nations, the Complex Risk Analytics Fund, and the Early Warnings for All initiative.
Watch the full Global Stage conversation: Can data and AI save lives and make the world safer?
How AI can be used in public policy: Anne Witkowsky
There are some pretty sharp people all around the world trying to craft policy, but their best efforts are often limited by poor data. Anne Witkowsky, Assistant Secretary of State at the Bureau of Conflict and Stabilization Operations, says that’s about to change.
“Data-driven, evidence-driven decision-making by policymakers is going to be more successful” with the help of artificial intelligence, she said during a Global Stage livestream event at UN headquarters in New York on September 22, on the sidelines of the UN General Assembly.
Witkowsky said the focus needs to be on inclusion and partnership with governments in developing countries to use new technology to “build resilience” against the unrelenting pressure such states face.
The discussion was moderated by Nicholas Thompson of The Atlantic and was held by GZERO Media in collaboration with the United Nations, the Complex Risk Analytics Fund, and the Early Warnings for All initiative.