How AI models are grabbing the world's data

Inside AI's data frenzy: The controversial practices fueling artificial intelligence

In this episode of GZERO AI, Taylor Owen, host of the Machines Like Us podcast, examines the scale and implications of the historic data land grab happening in the AI sector. According to researcher Kate Crawford, AI is the largest superstructure ever built by humans, requiring immense human labor, natural resources, and staggering amounts of data. But how are tech giants like Meta and Google amassing this data?

So AI researcher Kate Crawford recently told me that she thinks AI is the largest superstructure that our species has ever built. This is because of the enormous amount of human labor that goes into building AI, the physical infrastructure that's needed for the compute of these AI systems, and the natural resources, the energy, and the water that go into this entire infrastructure. And of course, because of the insane amounts of data needed to build our frontier models. It's increasingly clear that we're in the middle of a historic land grab for this data, essentially for all of the data that humanity has ever created. So where is all this data coming from, and how are these companies getting access to it? Well, first, they're clearly scraping the public internet. It's safe to say that if anything you've done has been posted to the internet in a public way, it's inside the training data of at least one of these models.
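To make that scraping step concrete, here is a minimal sketch in Python of how public-web text collection for a training corpus generally works: fetch a page, check the site's robots.txt, and strip the HTML down to plain text. This is not any particular company's pipeline; the URL, user agent string, and function names are illustrative assumptions, and real crawlers add deduplication, filtering, and storage layers omitted here.

```python
# Illustrative sketch only: fetch one public page, honor robots.txt, extract text.
# The user agent and URL below are hypothetical, not any real crawler's identity.
from html.parser import HTMLParser
from urllib import robotparser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen


class TextExtractor(HTMLParser):
    """Collects visible text, skipping script and style blocks."""

    def __init__(self):
        super().__init__()
        self.chunks = []
        self._skip = 0

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.chunks.append(data.strip())


def fetch_public_text(url: str, user_agent: str = "example-crawler") -> str | None:
    """Return the page's visible text if robots.txt permits crawling, else None."""
    root = "{0.scheme}://{0.netloc}".format(urlparse(url))
    rp = robotparser.RobotFileParser(urljoin(root, "/robots.txt"))
    rp.read()  # download and parse the site's crawling rules
    if not rp.can_fetch(user_agent, url):
        return None  # publicly reachable, but the site asks crawlers to stay out
    with urlopen(url) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    parser = TextExtractor()
    parser.feed(html)
    return "\n".join(parser.chunks)


if __name__ == "__main__":
    print(fetch_public_text("https://example.com/"))
```

Even this toy version makes the segment's point visible: robots.txt is a voluntary convention, and nothing technical prevents a crawler that chooses to ignore it from collecting whatever is publicly reachable.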

But it's also probably the case that this scraping includes a large amount of copyrighted data, or data that isn't necessarily publicly available. They're probably also getting behind paywalls, as we'll find out soon enough as the New York Times lawsuit against OpenAI works its way through the system, and they're scraping each other's data. According to the New York Times, Google found out that OpenAI was scraping YouTube, but didn't reveal it to the public because they too were scraping all of YouTube themselves and didn't want that getting out. Second, all these companies are purchasing or licensing data. This includes news licensing, entering into agreements with publishers, buying data from data brokers, and purchasing, or getting access to, companies that hold rich data sets. Meta, for example, was considering buying the publisher Simon & Schuster just for access to their copyrighted books in order to train their LLM.

The companies that themselves have access to rich data sets are obviously at an advantage here, and in particular that means Meta and Google. Meta uses all the public data that's ever been input into their systems. And they've said that even if you don't use their products, your data could still be in their systems, either from data they've purchased outside of their products, or because you've simply appeared, for example, in an Instagram photo, in which case your face is now being used to train their AI. Google has said that they use anything public that's on their platform, so an unrestricted Google Doc, for example, will end up in their training dataset. And they're also acquiring data in creative ways, to say the least. Meta has trained its large language model on a dataset called Books3, which contains over 170,000 pirated, copyrighted books. So where does all this leave us, citizens and users of the internet?

Well, one thing that's clear is that we can't opt out of this data collection and data use. The opt-out tool Meta provides is hidden and complicated to use, and it requires you to provide proof that your data has been used to train Meta's AI systems before they'll consider removing it from their datasets. These are not the kinds of user tools we should expect in democratic societies. So it's pretty clear that we're going to need to do three things. One, we're going to need to scale up our journalism. This is exactly why we have investigative journalism: to hold powerful governments, actors, and corporations in our society to account. Journalism needs to dig deep into who's collecting what data, how these models are being trained, and how they're being built on data collected about our lives and our online experiences. Second, the lawsuits are going to need to work their way through the system, and the discovery that comes with them should be revealing. The New York Times' lawsuit, to take just one of the many against OpenAI, will surely reveal whether paywalled journalism sits within the training data of these AI systems. And finally, there is absolutely no doubt that we need regulation to provide transparency and accountability for the data collection that is driving AI.

Meta recently announced, for example, that they were going to use data they'd collected on EU citizens to train their LLM. After the Irish Data Protection Commission pushed back, they announced they were going to pause this activity. This is why we need regulations. People who live in countries or jurisdictions that have strong data protection regulations and AI transparency regimes will ultimately be better protected. I'm Taylor Owen, and thanks for watching.
