
Keeping your promises

An Air Canada plane is seen in the air after departing from Pearson International Airport in Toronto, Ontario, Canada May 16, 2022.

REUTERS/Carlos Osorio

In 2022, a grieving passenger went on Air Canada’s website and asked its AI-powered chatbot about the airline’s bereavement policy. The chatbot said yes, reduced fares are available if you’re traveling after the death of a loved one, and you have up to 90 days after the flight to file a claim. The problem: That’s not Air Canada’s policy. The airline specifically requires passengers to apply for and receive the discount before the flight, not afterward.

Now, a Canadian tribunal says that Air Canada has to honor the promises made by its AI chatbot, even though they were incorrect and inconsistent with the airline’s policies.

“While a chatbot has an interactive component, it is still just a part of Air Canada’s website,” the tribunal member who decided the case wrote. “It should be obvious to Air Canada that it is responsible for all the information on its website. It makes no difference whether the information comes from a static page or a chatbot.”

It’s a big ruling that could set new precedent, at least in Canada, that AI companies — or their clients — are legally liable for the accuracy of their chatbots’ claims. And that’s no simple thing to fix: Generative AI models are notorious for hallucinating — or making stuff up. If using AI becomes a major liability, it could drastically change how AI companies act, train their models, and lawyer up.

And it would immediately make AI a tough product to sell.
