Troubling images plague Microsoft’s Copilot

In this photo illustration, Microsoft Copilot AI logo is seen on a smartphone screen.

(Photo by Pavlo Gonchar / SOPA Images/Sipa USA)

Mere weeks after Google suspended its Gemini text-to-image generator for producing offensive images, Microsoft is facing similar turmoil over one of its products.

According to CNBC, which replicated the results, an engineer at Microsoft was able to use its Copilot tool to generate “demons and monsters alongside terminology related to abortion rights, teenagers with assault rifles, sexualized images of women in violent tableaus, and underage drinking and drug use.”

It also generated disturbing images that doubled as potential copyright violations, such as Disney-branded handguns, beer cans, and vape pens. More troubling still, the tool created images of Elsa from “Frozen” amid the wreckage of the Gaza Strip and, in other renderings, wearing an Israel Defense Forces uniform.

The Microsoft employee, Shane Jones, recently notified the Federal Trade Commission of what he saw while working as a red-teamer tasked with testing the technology, which is powered by OpenAI’s models through Microsoft’s partnership with the ChatGPT and DALL-E maker.

In response, Microsoft has begun blocking some of the terms that generated offensive imagery, including “pro-choice,” “pro-life,” and “four twenty.” Microsoft told CNBC it is “continuously monitoring, making adjustments, and putting additional controls in place.”

This reflects an ongoing cycle: The worst abuses of generative AI will only come through people testing and finding out just what horrors it can produce, which will lead to stricter usage policies – and new limits to push. Of course, when copyright violations are involved, the cycle can very quickly get disrupted by lawsuits from IP holders desperate to protect their brands.
