AI and Canada's proposed Online Harms Act

Canada wants to hold AI companies accountable with proposed legislation | GZERO AI

In this episode of GZERO AI, Taylor Owen, professor at the Max Bell School of Public Policy at McGill University and director of its Centre for Media, Technology & Democracy, takes a look at the Canadian government’s Online Harms Act, which seeks to hold social media companies responsible for harmful content – often generated by artificial intelligence.

So last week, the Canadian government tabled their long-awaited Online Harms legislation. Similar to the Digital Services Act in the EU, this is a big, sweeping piece of legislation, so I won't get into all the details. But essentially what it does is put the onus on social media companies to minimize the risk of their products. And in so doing, this bill actually provides a window into how we might start regulating AI.

It does this in two ways. First, the bill requires platforms to minimize the risk of exposure to seven types of harmful content, including self-harm content directed at kids or posts that incite hatred or violence. The key here is that the obligation is on social media platforms, like Facebook or Instagram or TikTok, to minimize the risk of their products, not to take down every piece of bad content. The concern is not with each individual piece of content, but with the way that social media products, and particularly their algorithms, might amplify or help target its distribution. And these products are very often driven by AI.

Second, one area where the proposed law does mandate a takedown of content is intimate image abuse, and that includes deepfakes or content created by AI. If an intimate image is flagged as non-consensual, even if it's created by AI, it needs to be taken down by the platform within 24 hours. Even in a vacuum, AI-generated deepfake pornography or revenge porn is deeply problematic. But what's really worrying is when these things are shared and amplified online. And to get at that element of the problem, we don't actually need to regulate the creation of these deepfakes; we need to regulate the social media platforms that distribute them.

So countries around the world are struggling with how to regulate something as opaque and unknown as the existential risk of AI, but maybe that's the wrong approach. Instead of trying to govern this largely undefined risk, maybe we should be watching countries like Canada that are starting with the harms we already know about.

Instead of broad, sweeping legislation for AI, we might want to start by regulating the older technologies, like the social media platforms that facilitate many of the harms AI creates.

I'm Taylor Owen and thanks for watching.
