How AI models are grabbing the world's data
In this episode of GZERO AI, Taylor Owen, host of the Machines Like Us podcast, examines the scale and implications of the historic data land grab happening in the AI sector. According to researcher Kate Crawford, AI is the largest superstructure ever built by humans, requiring immense human labor, natural resources, and staggering amounts of data. But how are tech giants like Meta and Google amassing this data?
So AI researcher Kate Crawford recently told me that she thinks AI is the largest superstructure that our species has ever built. This is because of the enormous amount of human labor that goes into building AI, the physical infrastructure that's needed for the compute of these AI systems, and the natural resources, the energy, and the water that go into this entire infrastructure. And of course, because of the insane amounts of data that are needed to build our frontier models. It's increasingly clear that we're in the middle of a historic land grab for this data, essentially for all of the data that has ever been created by humanity. So where is all this data coming from, and how are these companies getting access to it? Well, first, they're clearly scraping the public internet. It's safe to say that if anything you've done has been posted to the internet publicly, it's inside the training data of at least one of these models.
But it's also probably the case that this scraping includes a large amount of copyrighted data, or data that is not necessarily publicly available. They're probably also getting behind paywalls, as we'll find out soon enough as the New York Times lawsuit against OpenAI works its way through the system. And they're scraping each other's data. According to the New York Times, Google found out that OpenAI was scraping YouTube, but didn't reveal it to the public because they too were scraping all of YouTube themselves and didn't want this getting out. Second, all these companies are purchasing or licensing data. This includes news licensing, entering into agreements with publishers, data purchased from data brokers, and purchasing, or getting access to the data of, companies that have rich data sets. Meta, for example, was considering buying the publisher Simon & Schuster just for access to their copyrighted books in order to train their LLM.
The companies that themselves have access to rich data sets are obviously at an advantage here, and in particular, this means Meta and Google. Meta uses all the public data that's ever been inputted into their systems. And it has said that even if you don't use their products, your data could be in their systems, either from data they've purchased outside of their products, or because you've simply appeared, for example, in an Instagram photo, in which case your face is now being used to train their AI. Google has said that they use anything public that's on their platforms. An unrestricted Google Doc, for example, will end up in their training dataset. And they're also acquiring data in creative ways, to say the least. Meta has trained its large language model on a dataset called Books3, which contains over 170,000 pirated and copyrighted books. So where does this all leave us, citizens and users of the internet?
Well, one thing's clear: we can't opt out of this data collection and data use. The opt-out tool Meta provides is hidden and complicated to use, and it requires you to provide proof that your data has been used to train Meta's AI systems before they'll consider removing it from their data sets. These are not the kinds of user tools we should expect in democratic societies. So it's pretty clear that we're going to need to do three things. One, we're going to need to scale up our journalism. This is exactly why we have investigative journalism: to hold the powerful governments, actors, and corporations in our society to account. Journalism needs to dig deep into who's collecting what data, how these models are being trained, and how they're being built on data collected about our lives and our online experiences. Second, the lawsuits are going to need to work their way through the system, and the discovery that comes with them should be revealing. The New York Times' lawsuit, to take just one of the many against OpenAI, will surely reveal whether paywalled journalism sits within the training data of these AI systems. And finally, there is absolutely no doubt that we need regulation to provide transparency and accountability for the data collection that is driving AI.
Meta recently announced, for example, that they were going to use data they'd collected on EU citizens in training their LLM. Immediately after the Irish Data Protection Commission pushed back, they announced they were going to pause this activity. This is why we need regulations. People who live in countries or jurisdictions that have strong data protection regulations and AI transparency regimes will ultimately be better protected. I'm Taylor Owen and thanks for watching.
AI plus existing technology: A recipe for tackling global crisis
When a country experiences a natural disaster, satellite technology and artificial intelligence can be used to rapidly gather data on the damage and initiate an effective response, according to Microsoft Vice Chair and President Brad Smith.
But to actually save lives “it's high-tech meets low-tech,” he said during a Global Stage livestream event at UN headquarters in New York on September 22, on the sidelines of the UN General Assembly.
He gave the example of SEEDS, an Indian NGO that dispatches local teens to distribute life-saving aid during heatwaves. He said the program exemplifies the effective combination of "artificial intelligence, technology, and people on the ground."
The discussion was moderated by Nicholas Thompson of The Atlantic and was held by GZERO Media in collaboration with the United Nations, the Complex Risk Analytics Fund, and the Early Warnings for All initiative.
Watch the full Global Stage conversation: Can data and AI save lives and make the world safer?
How AI can be used in public policy: Anne Witkowsky
There are some pretty sharp people all around the world trying to craft policy, but their best efforts are often limited by poor data. Anne Witkowsky, Assistant Secretary of State for the Bureau of Conflict and Stabilization Operations, says that's about to change.
“Data-driven, evidence-driven decision-making by policymakers is going to be more successful” with the help of artificial intelligence, she said during a Global Stage livestream event at UN headquarters in New York on September 22, on the sidelines of the UN General Assembly.
Witkowsky said the focus needs to be on inclusion and partnership with governments in developing countries to use new technology to “build resilience” against the unrelenting pressure such states face.
The discussion was moderated by Nicholas Thompson of The Atlantic and was held by GZERO Media in collaboration with the United Nations, the Complex Risk Analytics Fund, and the Early Warnings for All initiative.