Companies are gradually changing their terms of service to meet the needs of the AI era. Google altered its terms last July to specify that it may use publicly available user data to train its Bard chatbot — now called Gemini — and its cloud services. Snap and X have made similar changes to their terms of service, while Meta notified European users that public posts on Facebook and Instagram will be used to train its AI.

More recently, Adobe faced public outrage when devoted users read into ambiguities in its new privacy policy. The company changed its terms of use earlier this month, noting that it “may access [user] content through both automated and manual methods,” including machine learning. Adobe wrote a blog post clarifying that it’s not peering into NDA-protected Photoshop projects, but rather describing the way it uses AI to monitor its ecosystem for illegal content such as child sexual abuse material.

There’s an old truism in tech: “If you’re not paying for it, you’re the product.” Well, Adobe’s products aren’t cheap, so let’s rework it: “If you’re using it, you’ve become AI training data.” Oh, and if you’re concerned about privacy, always read the fine print.
