Companies are gradually changing their terms of service to meet the needs of the AI era. Google altered its terms last July to specify that it may use publicly available user data to train its Bard chatbot — now called Gemini — and its cloud AI offerings. Snap and X have made similar changes to their terms of service, while Meta notified European users that public posts on Facebook and Instagram will be used to train its AI.
More recently, Adobe faced public outrage when devoted users read into ambiguities in its new privacy policy. The company changed its terms of use earlier this month, noting that it “may access [user] content through both automated and manual methods,” including machine learning. Adobe wrote a blog post clarifying that it’s not peering into NDA-protected Photoshop projects, but rather describing the way it uses AI to monitor its ecosystem for illegal content such as child sexual abuse material.

There’s an old truism in tech: “If you’re not paying for it, you’re the product.” Well, Adobe’s products aren’t cheap, so let’s rework this. How about: “If you’re using it, you’ve become AI training data.” Oh, and if you’re concerned about privacy, you should always read the fine print.

President Trump unveiled “Project Freedom,” an initiative to escort ships and restore traffic through the Strait of Hormuz, on Sunday. By Tuesday evening, he had unceremoniously suspended it by Truth Social post, shortly after Secretary of State Marco Rubio told reporters how committed the administration was to it.

Do you trust us? A recent Pew Research Center poll found that fewer than half of Americans trust journalists to act in the public’s best interests — a share that has been falling for years. At the same time, partisanship is surging, and generative AI is challenging the very notion of truth.