We’re having AI slop for dinner
Forget spam. We’re drowning in slop. That’s a new term for all that AI-generated garbage that you might have noticed on social media or elsewhere across the internet. Whenever you see crappy Google AI Overview results or Facebook photos of unnatural-looking seascapes posing as nature photography, you’re encountering the wild world of AI slop.
Not all generative AI is, by definition, slop. Just the worst uses of it. But like email spam, it's unwanted, inaccurate, deceptive, or altogether unnecessary. Some of it is explicitly profit-driven, designed to soak up ad dollars or scam people, but some of it is simply the result of popular AI models spitting out incorrect information or implausible images. It fills space, fuels confusion, and makes the internet a worse place to be. It's already making Google and Facebook less useful by filling search pages and timelines with junk.
It can be downright dangerous. For instance, mushroom enthusiasts were recently warned to avoid fungus-hunting guides from Amazon due to the proliferation of AI-generated books on its marketplace. One bad hallucination from a bot, and you could be having some pretty wild hallucinations (or worse) of your own.
But preventing slop falls pretty far down the priority list for policymakers, so it could be years before policy meets the problem – as we’ve seen with spam phone calls, email, and texts.