AI Lawsuit against Air Canada
Air Canada paid $812.02 in damages and court fees to a passenger after losing in small claims court. The passenger bought a plane ticket after the airline's chatbot confirmed that Air Canada offers reduced bereavement fares for travel prompted by an imminent death or a death within a person's immediate family. When the passenger later pursued a refund, he was denied and offered only a $200 flight voucher because of a discrepancy between the chatbot's answer and the airline's actual policy. Air Canada explained that the chatbot's answer included a link to the airline's bereavement fares policy, which states that the policy does not apply retroactively.
Air Canada argued that the chatbot was a separate legal entity that should be held responsible for its own actions. The Tribunal did not find this argument persuasive and held that the passenger's claim constituted negligent misrepresentation by Air Canada. The Tribunal explained that Air Canada had not demonstrated why it should not be held liable for information provided by its agents, representatives, or chatbots. The Tribunal further stated that the chatbot was part of Air Canada's website, and it should be obvious that the airline is responsible for all information available through its website. Air Canada should have taken "reasonable care" to ensure that the information the chatbot provided was accurate.
Overall, Air Canada failed to demonstrate why its bereavement travel webpage was more trustworthy than the chatbot's answer, why customers should be expected to double-check information found on one part of its website against another, and why the airline should not be held liable for information its AI released. This ruling could set a precedent for airlines' liability regarding the performance of their AI systems.
This case has drawn substantial attention to whether consumer rights will be protected against AI's vulnerability to hallucinations. Because AI models can perceive patterns or objects that are nonexistent or imperceptible to human observers, they sometimes generate answers and outputs that are nonsensical or altogether inaccurate.
The controversy also highlighted airlines' exposure to liability when AI misinforms customers: although airlines are not financial institutions, they handle significant financial transactions. The U.S. Consumer Financial Protection Bureau (CFPB), which monitors AI and its impact on consumer rights, found that chatbots are good at answering simple questions, but their effectiveness wanes as situations or questions become more complex. Financial institutions risk violating legal obligations, eroding customer trust, and causing consumer harm when deploying chatbot technology. Given the same security risks and privacy concerns, airlines that handle customer transactions through chatbots must likewise comply with federal consumer financial laws.