
Israel's Lavender: What could go wrong when AI is used in military operations?

In this episode of GZERO AI, Taylor Owen, professor at the Max Bell School of Public Policy at McGill University and director of its Centre for Media, Technology & Democracy, examines the Israeli Defence Forces' use of an AI system called Lavender to target Hamas operatives. While the system reportedly suffers from hallucination issues familiar from AI tools like ChatGPT, the cost of errors on the battlefield is incomparably severe.

People take part in the annual Gay Pride parade in support of the LGBT community in Santiago, Chile, June 22, 2019.

REUTERS/Rodrigo Garrido

AI struggles with gender and race

Generative AI keeps messing up on important issues about diversity and representation — especially when it comes to love and sex.

According to one report from The Verge, Meta’s AI image generator repeatedly refused to generate images of an Asian man with a white woman as a couple. When it finally produced one of an Asian woman and a white man, the man was significantly older than the woman.

Meanwhile, Wired found that different AI image generators routinely represent LGBTQ individuals as having purple hair. And when you don’t specify what ethnicity they should be, these systems tend to default to showing white people.

OpenAI is risk-testing Voice Engine, but the risks are clear

Marietje Schaake, International Policy Fellow, Stanford Human-Centered Artificial Intelligence, and former European Parliamentarian, co-hosts GZERO AI, our new weekly video series intended to help you keep up with and make sense of the latest news on the AI revolution. In this episode, she says that while OpenAI is testing its new Voice Engine model to identify its risks, we have already seen the clear dangers of voice impersonation technology. What we need is more independent assessment of these new technologies, applied equally to companies that want to tread carefully and to those that want to race ahead in developing and deploying them.
Social media's AI wave: Are we in for a “deepfakification” of the entire internet?

In this episode of GZERO AI, Taylor Owen, professor at the Max Bell School of Public Policy at McGill University and director of its Centre for Media, Technology & Democracy, looks into the phenomenon he terms the “deepfakification” of social media. He traces the evolution of our social feeds, which began as platforms primarily for sharing updates with friends and are now inundated with content generated by artificial intelligence.

Should we regulate generative AI with open or closed models?

Marietje Schaake, International Policy Fellow, Stanford Human-Centered Artificial Intelligence, and former European Parliamentarian, co-hosts GZERO AI, our new weekly video series intended to help you keep up with and make sense of the latest news on the AI revolution. Fresh from a workshop hosted by the Institute for Advanced Study in Princeton, where the discussion centered on whether generative AI should be open to the public or restricted to a select few, she shares insights in this episode into the potential workings, effectiveness, and drawbacks of each approach.


This image of NGC 5468, a galaxy located about 130 million light-years from Earth, combines data from the Hubble and James Webb space telescopes.

NASA/ESA/CSA/STScI/Adam G. Riess via Reuters

Hard Numbers: Understanding the universe, Opening up OpenAI, Bioweapon warning, Independent review, AI media billions

100 million: AI is helping researchers better map outer space. One recent simulation, led by a University College London researcher, showed 100 million galaxies across just a quarter of the Earth’s southern hemisphere sky. The work is part of a wider effort to understand dark energy, the mysterious force driving the expansion of the universe.

30,000: The law firm WilmerHale, which completed its investigation of Sam Altman’s brief December ouster from OpenAI, examined 30,000 documents as part of its review. The contents of the report haven’t been made public, but new board chairman Bret Taylor said the review found the prior board acted in good faith but didn’t anticipate the reaction to removing Altman, who is now rejoining the board. The SEC, meanwhile, is still investigating whether OpenAI deceived investors, though it’s unclear whether WilmerHale will share its findings with the agency.

90: More than 90 scientists have pledged not to use AI to develop bioweapons, part of an agreement forged partly in response to congressional testimony given by Anthropic CEO Dario Amodei last year. Amodei said that while the current generation of AI technology couldn’t handle such a task, that capability is only two or three years away.

100: More than 100 AI researchers have signed an open letter asking leading AI companies to give independent investigators access to their models to ensure that risk assessments are thorough. “Generative AI companies should avoid repeating the mistakes of social media platforms, many of which have effectively banned types of research aimed at holding them accountable,” the letter said.

8 billion: The media company Thomson Reuters says it has an $8 billion “war chest” to spend on AI-related acquisitions. In addition to publishing the Reuters newswire, the company sells access to services like Westlaw, a popular legal research platform. It’s also committed to spending at least $100 million developing in-house AI technology to integrate into its news and data offerings.

AI and Canada's proposed Online Harms Act

In this episode of GZERO AI, Taylor Owen, professor at the Max Bell School of Public Policy at McGill University and director of its Centre for Media, Technology & Democracy, takes a look at the Canadian government’s Online Harms Act, which seeks to hold social media companies responsible for harmful content – often generated by artificial intelligence.

Voters beware: Elections and the looming threat of deepfakes

With AI tools already being used to manipulate voters across the globe via deepfakes, more needs to be done to help people comprehend what this technology is capable of, says Microsoft vice chair and president Brad Smith.

Smith highlighted a recent example of AI being used to deceive voters in New Hampshire.

“The voters in New Hampshire, before the New Hampshire primary, got phone calls. When they answered the phone, there was the voice of Joe Biden — AI-created — telling people not to vote. He did not authorize that; he did not believe in it. That was a deepfake designed to deceive people,” Smith said during a Global Stage panel on AI and elections on the sidelines of the Munich Security Conference last month.

“What we fundamentally need to start with is help people understand the state of what technology can do and then start to define what's appropriate, what is inappropriate, and how do we manage that difference?” Smith went on to say.

Watch the full conversation here: How to protect elections in the age of AI
