GZERO AI Video
In this episode of GZERO AI, Taylor Owen, professor at the Max Bell School of Public Policy at McGill University and director of its Centre for Media, Technology & Democracy, explores the questions of responsibility and trust raised by the widespread deployment of AI. Who bears responsibility when AI makes errors? And can we, should we, trust AI?
So last week, a Canadian airline made headlines when a customer sued its chatbot. Not only is this story totally weird, but I think it might give us a hint at who will ultimately be responsible when AI messes up. So, this all started when Jake Moffatt's grandmother passed away and he went to the Air Canada website to see if they had a bereavement policy. He asked the chatbot this question, which told him to book the flight and that he had 90 days to request a refund. It turns out though, that you can't request bereavement refunds retroactively, a policy stated elsewhere on the Air Canada website. But here's where it gets interesting. Moffatt took Air Canada and their AI chatbot to British Columbia's Civil Resolution Tribunal, a sort of small claims court. Air Canada argued that the chatbot is a separate legal entity that is responsible for its own actions.
In other words, they claimed, the AI itself was responsible. They lost, though, and were forced to honor a policy that a chatbot made up. They've since taken their chatbot down. This case is so interesting because I think it strikes at two questions at the very core of our AI conversation: responsibility and trust.
First, who's responsible when AI gets things wrong? Is Tesla responsible when its Full Self-Driving car kills somebody? Is a newspaper liable when its AI makes things up and defames somebody? Is a government responsible for false arrests made using facial recognition AI? I think the answer is likely to be yes for all of these, and that has huge implications.
Second, and maybe more profound, is the question of whether we can and should trust AI. Anyone who watched the Super Bowl ads this year will know that AI companies are worried about this. AI has officially kicked off its PR campaign, and at the core of that campaign is the question of trust.
According to a recent Pew study, 52% of Americans are more concerned than excited about the growth of AI. So, for the people selling AI tools, this could be a real problem. Many of these ads therefore seek to build public trust in the tools themselves. The ad for Microsoft Copilot, for example, shows people using the AI assistant to help them write a business plan and to draft storyboards for a film, to make their jobs better, not take them away. The message is clear: "We're going to help you do your job better. Trust us." Stepping back, though, the risk of being negligent, of moving fast and breaking things, is that trust is really hard to earn back once you've lost it. Just ask Facebook.
In Jake Moffatt's Air Canada case, all that was at stake was a $650 refund, but with AI starting to permeate every facet of our lives, it's only a matter of time before the stakes are much, much higher.
I'm Taylor Owen, and thanks for watching.
Marietje Schaake, International Policy Fellow, Stanford Human-Centered Artificial Intelligence, and former European Parliamentarian, reflects on the missing connection between human rights and AI as she prepares for her keynote at the Human Rights in AI conference at the Mila Quebec Institute for Artificial Intelligence. GZERO AI is our weekly video series intended to help you keep up and make sense of the latest news on the AI revolution.
I'm in the hallway of the Mila Quebec Institute for Artificial Intelligence, at a conference on human rights and artificial intelligence. And I'm really happy that we're focusing on this exclusively today and tomorrow, because too often the thinking about, the analysis of, and the agenda for human rights in the context of AI governance are an afterthought.
And so it's great to hear the various ways in which human rights are at stake, from facial recognition systems to making sure that there is representation in governance from marginalized communities, for example. But what I still think is missing is a deeper connection between the people who speak AI, if you will, and the people who speak human rights. The worlds of policy and politics and the worlds of artificial intelligence, and within those, the people who care about human rights, still tend to speak in parallel universes. So what I'll try to do in my closing keynote today is bring people's minds to a concrete, positive political agenda for change: thinking about how we can frame human rights for a broader audience, making sure that we use the tools that are there, the laws that apply both internationally and nationally, and doubling down on enforcement. Because so often the seeds of meaningful change are already in the laws, but they are not being enforced forcefully enough.
And so we have a lot of work ahead of us. But I think the conference was a good start. And I'll be curious to see the different tone and the focus on geopolitics as I go to the Munich Security Conference with lots of the GZERO team as well.
In this episode of GZERO AI, Taylor Owen, professor at the Max Bell School of Public Policy at McGill University and director of its Centre for Media, Technology & Democracy, examines how Taylor Swift's ordeal with AI deepfake porn sheds light on the complexities of the information ecosystem in the biggest election year ever, one that includes the US elections.
Okay, so full disclosure, I don't love the NFL and my ten-year-old son is more into Ed Sheeran than Taylor Swift, so she hasn't yet flooded our household. However, when one of the most famous people in the world is caught in a deepfake porn attack driven by a right-wing conspiracy theory, forcing one of the largest platforms in the world to shut down all Taylor Swift-related content, well, now you have my attention. But what are we to make of all this?
First thing I think is it shows how crazy this US election cycle is going to be. The combination of new AI capabilities, unregulated platforms, a flood of opaque super PAC money, and a candidate who's perfectly willing to fuel conspiracy theories means the information ecosystem this year is going to be a mess.
Second, however, I think we're starting to see some of the policy levers that could be pulled to address this problem. The DEFIANCE Act, tabled in the Senate last week, gives victims of deepfakes the right to sue the people who created them. The Preventing Deepfakes of Intimate Images Act, currently stuck in the House, goes a step further and puts criminal liability on the people who create deepfakes.
Third, though, I think this shows that we need to regulate the platforms, not just the AI that creates the deepfakes, because the main problem with this content is not the ability to create it; we've had that for a long time. It's the ability to disseminate it broadly to a large number of people. That's where the real harm lies. For example, one of these Taylor Swift videos was viewed 45 million times and stayed up for 17 hours before Twitter removed it. And #TaylorSwiftAI was boosted as a trending topic by Twitter, meaning it was algorithmically amplified, not just posted and shared by users. So what I think we might start seeing here is a slightly more nuanced conversation about the liability protection we give to platforms. That might mean they become liable for content that is algorithmically amplified, or potentially for content that is created by AI.
All that said, I would not hold my breath for the US to do anything here. For the content regulations we may need, we're probably going to have to look to Europe, the UK, Australia, and, this year, Canada.
So what should we actually be watching for? Well, one thing I would look for is how the platforms themselves respond to what is now an unavoidable problem, and one that has certainly gotten the attention of advertisers. When Elon Musk took over Twitter, he decimated its content moderation team. But Twitter has now announced that it's going to start rebuilding one. And you better believe it's doing this not because of the threat of the US Senate but because of the threat from its biggest advertisers. Advertisers do not want their content placed beside politically motivated deepfake pornography of incredibly popular people. So that's what I'd be watching for here: how the platforms themselves respond to what is a very clear problem, in part a function of how they've designed their platforms and their companies.
I'm Taylor Owen, and thanks for watching.
Marietje Schaake, International Policy Fellow, Stanford Human-Centered Artificial Intelligence, and former European Parliamentarian, co-hosts GZERO AI, our weekly video series intended to help you keep up and make sense of the latest news on the AI revolution. In this episode, she talks about how Taylor Swift's traumatic experience with AI deepfake porn could be the turning point in passing laws that protect individuals from harmful Generative AI practices, thanks to the pop star's popularity.
Today I want to talk about Taylor Swift, and that may suggest that we are going to have a lighthearted episode, but that's not the case. On the contrary, because the pop icon has been the subject of one of the most traumatizing experiences that anyone can live through online in relation to AI and new technology.
Taylor Swift was the victim of the creation of non-consensual sexually explicit content, a pornographic deepfake. Now, the term deepfake may ring a bell because we've talked about the ever more convincing messages that generative AI can create in the context of election manipulation and disinformation. And that is indeed a grave concern of mine. But when you look at the numbers, the vast majority of deepfakes online are of a pornographic nature. And when those are non-consensual, the harm is enormous. Imagine, for example, that it's not a pop icon whom everybody knows and can come to the rescue of, but a young teenager faced with a deepfake porn image of themselves, with classmates sharing it. You can well imagine the deep trauma and stress this causes, and we know that this kind of practice has unfortunately led to self-harm among young people as well.
So, it is high time that tech companies do more and take more responsibility for preventing this kind of terrible non-consensual use of their products and the ensuing sharing and virality online. If there's one silver lining to Taylor Swift's otherwise very depressing experience, it is that she and her followers may be able to do what few have managed: move Congress to pass legislation. There seems to be bipartisan movement, and all I can hope is that it will lead to better protection of people from the worst practices of generative AI.
In this episode of GZERO AI, Taylor Owen, professor at the Max Bell School of Public Policy at McGill University and director of its Centre for Media, Technology & Democracy, discusses how the emergence of ChatGPT and other generative AI tools has thrown a new dynamic into his teaching practice, and shares his insights into how universities have attempted to handle the new phenomenon.
What does education look like in a world with generative AI?
The bottom line here is that we, students, universities, and faculty alike, are simply in uncharted waters. I'm starting to teach my digital policy class for the first time since the emergence of generative AI, and I'm really unsure how I should handle it. But here are a few observations.
First, universities are all over the place on what to do. Policies range from outright bans, to updated citation requirements, to broad and largely unhelpful directives, to simply no policies at all. It's fair to say that a consensus has yet to emerge.
The second challenge is that AI detection software, like the plagiarism software that came before it, is massively problematic. While there are some tools out there, they all suffer from several, in my view, disqualifying flaws. These tools have a tendency to generate false positives, which really matters when we're talking about academic integrity and, ultimately, plagiarism. What's more, research shows that the use of these tools leads to an arms race between faculty trying to catch students and students trying to deceive them. The other problem, ironically, is that these tools may be infringing on students' copyright. When student essays are uploaded into this detection software, the writing is stored and used for future detection. We've seen the same story with earlier-generation plagiarism tools, and I personally want nothing to do with it.
Third, I think banning is not only impossible, but pedagogically irresponsible. The reality is that students, like all of us, have access to these tools and are going to use them. So, we need to move away from this idea that students are the problem and start focusing on how educators can improve their teaching instead.
However, I do worry that a key cognitive skill set we develop at universities, that of reading and processing information and new ideas and developing our own on top of them, is being lost. We need to ensure that our teaching preserves it.
Ultimately, this is going to be about developing new norms in old institutions, and we know that that is hard. We need new norms around trust in academic work, new methods of evaluating our own work and that of our students, teaching new skill sets and abandoning some old ones, and we need new norms for referencing and for acknowledging work. And yes, this means new norms around plagiarism. Plagiarism has been in the news a lot lately, but the status quo in an age of generative AI is simply untenable.
Perhaps I'm a Luddite on this, but I cannot let go of the idea, entrenched in me, that regardless of how a tool was used for research and developing ideas, the final scholarly product should ultimately be written by people. So, this term, I'm going to try a bunch of things and see what works. I'll let you know what I learn. I'm Taylor Owen, and thanks for watching.
Marietje Schaake, International Policy Fellow, Stanford Human-Centered Artificial Intelligence, and former European Parliamentarian, co-hosts GZERO AI, our new weekly video series intended to help you keep up and make sense of the latest news on the AI revolution. In this episode, Schaake is live from the World Economic Forum meeting in Davos, where AI is one of the dominant themes. Interestingly, she says, the various conversations about AI have been nuanced: it has been acknowledged as much as a top risk for the year as for its immense potential.
Hi, my name is Marietje Schaake. We are in Davos at the World Economic Forum, where AI really is one of the key topics people are talking about. And I think what stands out, and what I've heard referenced in various meetings, is that the WEF's risk report this year has signaled that disinformation, especially as a result of the uptake of emerging technologies, is considered one of the key risks of the year.
Of course, this is a year in which many elections around the world will take place, but disinformation about health and about geopolitics factors in there as well. So there is more emphasis on risk, as a result of that report, than I would normally expect here, where companies are the dominant voices, companies that normally sell, you know, all the great visions they have for what AI can achieve. And what's interesting is that while there are a lot of panels and other sessions on artificial intelligence, focusing on global governance with the role of the United Nations, for example, on trust and elections, on healthcare and AI, on geopolitics and AI, on AI on the front lines, these discussions seem to be happening in parallel universes. There are those who focus very much on their concerns for civil liberties and the risk of state surveillance.
There are others who are saying, well, scientific breakthroughs are going to save the world. So what I hope will happen, either here or in the coming year, is that the analysis of what we must expect from AI will start leading to much more concrete policies and enforceable action, because otherwise this rapidly changing technology will continue to have deep and wide impact on people all around the world without consequences. And I think we need to make sure that there are guardrails, that they are firm, and that, yes, opportunities can be reaped, but risks can also be prevented. Hopefully, the report and the discussions here in Davos, with people coming into these mountains from around the world, can actually be meaningful and have an impact in the coming year.
Taylor Owen, professor at the Max Bell School of Public Policy at McGill University and director of its Centre for Media, Technology & Democracy, co-hosts GZERO AI, our new weekly video series intended to help you keep up and make sense of the latest news on the AI revolution. In this episode of the series, Taylor Owen looks at Taiwan's upcoming election, the first of the year, and the implications it could have for the future of technology, including AI.
Hi, I'm Taylor Owen. This is GZERO AI. So welcome to 2024, the year in which over 50 democratic countries head to the polls. And we're only a few days away from the first election.
On January 13, Taiwanese voters will head to the ballot box to elect a new president, in an election that could have a profound effect on the global economy and on the future of AI. Let me explain. The front-runner in this election is Lai Ching-te, a member of the incumbent Democratic Progressive Party. Lai is generally viewed as being in favor of Taiwanese independence, and the Chinese Communist Party has called him a separatist with a confrontational mentality.
But what does this have to do with the future of AI? It all revolves around a single company: the Taiwan Semiconductor Manufacturing Company, or TSMC. TSMC makes more than 90% of the world's most advanced chips, the kinds of chips that power much of artificial intelligence. And it makes those chips on the western coast of Taiwan, only 110 miles from mainland China.
So let's assume that the Democratic Progressive Party wins, as many expect it will, and that the conflict with Beijing escalates. What happens then? It seems to me there are at least two possibilities. One is that because China is so dependent on TSMC for its chips, as we all are, it wouldn't risk an actual attack. This is often referred to as Taiwan's "silicon shield," a kind of new-era mutually assured destruction.
The other possibility, though, is that China does attack Taiwan. If that happens, it's not inconceivable that Taiwan would preemptively destroy TSMC's manufacturing facilities. And even if China did take control before that happened, it's unlikely it could continue production. Chip manufacturing is just too contingent on global cooperation.
If TSMC ultimately goes down, the global technology industry could be thrown into turmoil. Virtually no country in the world would be able to build cell phones or cell phone towers. PC production would fall by at least a third, maybe half, and everything from the appliance industry to the automotive industry would take a hit. It would be a global economic crisis, and the progress on AI would be set back years.
While it remains to be seen how this story will play out, one thing is clear: the global computing industry has a number of incredibly vulnerable choke points, companies like TSMC on which an entire industry depends. Diversifying something as complex as chip manufacturing will be difficult and will require a ton of capital and real democratic leadership, but it may be essential if we want to stabilize the industry. Otherwise, the future of technology may remain vulnerable to the whims of volatile players like the CCP.
I'm Taylor Owen and thanks for watching.