
When AI makes mistakes, who can be held responsible?


In this episode of GZERO AI, Taylor Owen, professor at the Max Bell School of Public Policy at McGill University and director of its Centre for Media, Technology & Democracy, explores the questions of responsibility and trust raised by the widespread deployment of AI. Who bears responsibility when AI makes errors? And can we, and should we, trust AI?


So last week, a Canadian airline made headlines when a customer sued it over advice given by its chatbot. Not only is this story totally weird, but I think it might give us a hint at who will ultimately be responsible when AI messes up. It all started when Jake Moffatt's grandmother passed away and he went to the Air Canada website to see if the airline had a bereavement policy. He asked the chatbot this question, and it told him to book the flight and that he had 90 days to request a refund. It turns out, though, that you can't request bereavement refunds retroactively, a policy stated elsewhere on the Air Canada website. But here's where it gets interesting. Moffatt took Air Canada to British Columbia's Civil Resolution Tribunal, a sort of small claims court. Air Canada argued that the chatbot is a separate legal entity responsible for its own actions.

In other words, the AI is responsible here. Air Canada lost, though, and was forced to honor a policy that a chatbot made up. They've since deleted their chatbot. This case is so interesting because I think it strikes at two questions at the very core of our AI conversation: responsibility and trust.

First, who's responsible when AI gets things wrong? Is Tesla responsible when its Full Self-Driving car kills somebody? Is a newspaper liable when its AI makes things up and defames somebody? Is a government responsible for false arrests made using facial recognition AI? I think the answer is likely to be yes for all of these, and that has huge implications.

Second, and maybe more profound, is the question of whether we can and should trust AI. Anyone who watched the Super Bowl ads this year will know that AI companies are worried about this. AI has officially kicked off its PR campaign, and at the core of that campaign is the question of trust.

According to a recent Pew study, 52% of Americans are more concerned than excited about the growth of AI. For the people selling AI tools, that's a real problem, so a lot of these ads seek to build public trust in the tools themselves. The ad for Microsoft Copilot, for example, shows people using the AI assistant to write a business plan and to draw storyboards for a film, to do their jobs better, not to have those jobs taken away. The message is clear: "We're going to help you do your job better. Trust us." Stepping back, though, the risk of being negligent, of moving fast and breaking things, is that trust is really hard to earn back once you've lost it. Just ask Facebook.

In Jake Moffatt's Air Canada case, all that was at stake was a $650 refund, but with AI starting to permeate every facet of our lives, it's only a matter of time before the stakes are much, much higher.

I'm Taylor Owen, and thanks for watching.
