AI and Canada's proposed Online Harms Act
In this episode of GZERO AI, Taylor Owen, professor at the Max Bell School of Public Policy at McGill University and director of its Centre for Media, Technology & Democracy, takes a look at the Canadian government's Online Harms Act, which seeks to hold social media companies responsible for harmful content – often generated by artificial intelligence.
So last week, the Canadian government tabled their long-awaited Online Harms legislation. Similar to the Digital Services Act in the EU, this is a big, sweeping piece of legislation, so I won't get into all the details. But essentially what it does is put the onus on social media companies to minimize the risk of their products. And in so doing, this bill actually provides a window into how we might start regulating AI.
It does this in two ways. First, the bill requires platforms to minimize the risk of exposure to seven types of harmful content, including self-harm content directed at kids or posts that incite hatred or violence. The key here is that the obligation is on social media platforms, like Facebook or Instagram or TikTok, to minimize the risk of their products, not to take down every piece of bad content. The concern is not with each individual piece of content, but with the way that social media products, and particularly their algorithms, might amplify or help target its distribution. And these products are very often driven by AI.
Second, one area where the proposed law does mandate a takedown of content is when it comes to intimate image abuse, and that includes deepfakes or content that's created by AI. If an intimate image is flagged as non-consensual, even if it's created by AI, it needs to be taken down within 24 hours by the platform. Even in a vacuum, AI generated deepfake pornography or revenge porn is deeply problematic. But what's really worrying is when these things are shared and amplified online. And to get at that element of this problem, we don't actually need to regulate the creation of these deepfakes, we need to regulate the social media that distributes them.
So countries around the world are struggling with how to regulate something as opaque and unknown as the existential risk of AI, but maybe that's the wrong approach. Instead of trying to govern this largely undefined risk, maybe we should be watching countries like Canada that are starting with the harms we already know about.
Instead of broad sweeping legislation for AI, we might want to start with regulating the older technologies, like social media platforms that facilitate many of the harms that AI creates.
I'm Taylor Owen and thanks for watching.
What is a technopolar world?
Who runs the world? In a series of videos about artificial intelligence, Ian Bremmer, founder and president of GZERO Media and Eurasia Group, introduces the concept of a technopolar world––one where technology companies wield unprecedented influence on the global stage, and where sovereignty and influence are determined not by physical territory or military might, but by control over data, servers, and, crucially, algorithms.
We aren’t yet in a fully technopolar world, but we do exist in a digital order where major tech companies hold sway over standards, operations, interactions, security and economics in the virtual realm. And Bremmer says this is just the beginning. He highlights two key advantages that technology companies have: their dominance over the digital space, which profoundly impacts the lives of billions of people every day, as well as their role in providing critical digital infrastructure required to run a modern economy and society.
As artificial intelligence and other transformative technologies advance, and more and more of our daily life shifts online, Bremmer predicts a shift in power dynamics, where tech companies extend their reach beyond the digital sphere into economics, politics, and even national security. This will almost certainly challenge traditional ideas about global power, which may be determined as much by competition between nation states and tech companies as it is, say, between the US and China. Incorporating tech firms into governance models may be necessary to effectively navigate the complexity of a technopolar world, Bremmer argues. Ultimately, how these companies choose to wield power and their interactions with governments will shape the trajectory of our economic, social, and political futures.
See more of GZERO Media's coverage of artificial intelligence and geopolitics.
Why social media is broken & how to fix it
Social media companies play an outsize role in global politics — from the US to Myanmar. And when they fail, their actions can cost lives.
That's why Frances Haugen blew the whistle on her then-employer, Facebook, when she felt the company hadn't done enough to stop an outrage-driven algorithm from spreading misinformation, hate, and even offline violence.
On GZERO World, Haugen tells Ian Bremmer why governments need to rethink how they regulate social media. A good example is the EU, whose new law mandating data transparency could have global ripple effects.
Haugen also explains why those annoying messages about sharing your cookies are actually a good thing, and why she still believes social media companies can change for the better.
Finally, don't miss her take on Elon Musk having second thoughts about Twitter.
What happens in Europe, doesn’t stay in Europe — why EU social media regulation matters to you
The EU just approved the Digital Services Act, which for the first time will mandate that social media companies come clean about what they do with our data.
Okay, but perhaps you don't live there. Why should you care?
First, transparency matters, says Facebook whistleblower Frances Haugen.
Second, she tells Ian Bremmer on GZERO World, the EU is not telling social media firms exactly how to change their ways — but rather saying: "We want a different relationship. We want you to disclose risks. We want you to just actually give access to data."
And third, Haugen believes that if it works in Europe, the DSA will help shape law in other parts of the world too.
Watch the GZERO World episode: Why social media is broken & how to fix it
GOP battle with Big Tech reaches the Supreme Court
Jon Lieber, head of Eurasia Group's coverage of political and policy developments in Washington, discusses Republican states picking fights with social media companies.
Why are all these Republican states picking fights with social media companies?
The Supreme Court this week ruled that a Texas law banning content moderation by social media companies should not go into effect while the lower courts debate its merits, blocking the latest effort by Republican-led states to push back on the power of Big Tech. Florida and Texas are two of the large states that have recently passed laws preventing large social media companies from censoring or de-platforming accounts they deem controversial, a practice the companies say is essential for keeping their users safe from abuse and misinformation. The courts have not agreed on the constitutionality of this question. One circuit court found that the Florida law probably infringes on the free speech rights of the tech companies.
Yes, companies do have free speech rights under the US Constitution. A different circuit court, meanwhile, said that the state of Texas did have the ability to dictate how these firms moderate their platforms. These questions will likely eventually be settled by the Supreme Court, which will be asked to weigh in on the constitutionality of these laws and whether they conflict with the provision of federal law, known as Section 230, that protects the platforms from liability for content moderation. But the issue is also likely to escalate once Republicans take control of the House of Representatives next year. These anti-Big Tech laws are part of a broader conservative pushback against American companies that Republicans think have become too left-leaning and way too involved in the political culture wars, most frequently on the side of liberal causes.
And states are taking the lead because of congressional inertia. Democrats are looking at ways to break up the concentrated power of these companies, but lack a path towards a majority for any of the proposals that they've put forward so far this year. Social media, in particular, is in the spotlight because Twitter and Facebook continue to ban the account of former president Donald Trump. And because right-leaning celebrities keep getting de-platformed for what the platforms consider COVID disinformation and lies about the 2020 election.
But recent trends strongly suggest that when Republicans are in charge, they're likely to push federal legislation that will directly challenge the platform's ability to control what Americans see in their social media feeds, a sign that the tech wars have just begun.
The Graphic Truth: Twitter doesn't rule the social world
Elon Musk aside, does anybody else love Twitter? The platform’s 280-character tweets are an essential tool for governments, institutions, politicians, and journalists — as well as eccentric billionaires, of course — but in the grander scheme, not a lot of regular folks are hooked. We look at the brave — and scary — user numbers of social media, where not many care whether you RT’d or simply liked their thread.
Meta's moves to malign TikTok reveal common dirty lobbying practices
Marietje Schaake, International Policy Director at Stanford's Cyber Policy Center, Eurasia Group senior advisor and former MEP, discusses dirty lobbying practices by the biggest tech companies.
Meta reportedly hired a GOP firm to malign TikTok. How dangerous is this move to the public?
Well, I think it is important that we know about these kinds of dirty lobbying practices that apparently looked attractive and acceptable to Meta, or Facebook. It seems like a desperate effort to polish the tarnished image of the company, and they must have thought that offense is the best defense. But generally, the public, the audience, readers of the news have no way of knowing which stories have been planted, or that they were planted in the media at all. And I think the fact that this is a common practice is revealing and cynical. But the problem is that for many of the biggest tech companies, all kinds of lobbying, sponsoring, and influencing have become accessible in ways that very few can compete with; they just have a lot of money to spend. I was surprised to hear, for example, that WhatsApp's lead, Will Cathcart, claimed this week that his company was not heard by European legislators when it came to the Digital Markets Act, even though a public consultation was held. And Meta, which owns WhatsApp, spent 5.5 million euros on lobbying in Brussels last year. So I'm pretty sure they did have an opportunity to engage.
Now on a different note: after this week, you won't be hearing from me with Cyber in 60 for a while. I'm taking leave for personal reasons as well as to focus on writing my book, about which I'm sure you'll hear more later. But there are many other 60 Second videos on other themes that you might appreciate on GZERO Media. And I look forward to reconnecting soon.
Tech companies' role in the spread of COVID-19 misinformation
Marietje Schaake, International Policy Director at Stanford's Cyber Policy Center, Eurasia Group senior advisor and former MEP, discusses trends in big tech, privacy protection and cyberspace:
Why is misinformation about the COVID-19 test spreading so fast across social media platforms?
One underlying reason is that the US has been so reluctant to hold tech companies to account at all. There are understandable sensitivities about online speech, and the First Amendment gives tech companies a lot of room to say that they simply don't want to censor anyone. Or that they're just platforms, connecting messenger and audience, buyer and seller, without responsibility. But what is missing in these reflections is how other rights or principles can get crushed, public health being an obvious one in the case of the COVID-19 pandemic. Companies so far have taken a hands-off approach. They've not been reined in by lawmakers. And some very cynical actors are happy to profit off the pandemic or to spread conspiracy theories. Sadly, they are having a field day.
How does the pandemic itself impact these dynamics?
It's a mix of people spending more time online combined with more uncertainty about the evolving facts and knowledge around the virus, new variants, and the latest recommendations by officials on how to best mitigate risk. Together these create the optimal conditions for dis- and misinformation about COVID tests, but also other COVID-related matters. And I think it's tragic how some people will cynically use a crisis for their own agendas and benefit.