Radio Host Sues OpenAI: Defamation, AI Hallucinations, and the Future of Legal Responsibility


In a potentially precedent-setting case, Georgia radio host Mark Walters has filed a lawsuit against OpenAI after its widely used ChatGPT accused him of embezzlement. Walters’ attorney, John Monroe, claims that the AI developer defamed his client by fabricating false statements about him. The issue arose when ChatGPT, in response to a journalist’s inquiry, falsely identified Walters as the defendant in the Second Amendment Foundation (SAF) v. Robert Ferguson case.

SAF v. Ferguson had nothing to do with Walters, but ChatGPT reportedly generated text accusing him of defrauding and embezzling funds from SAF for personal expenses. Walters’ legal team is now demanding a jury trial, unspecified general and punitive damages, and attorney’s fees.

While suing AI developers is a relatively new legal field, Monroe is confident in his client’s chances of success. However, some experts are less certain. Cal Evans, in-house counsel for Stonehouse Technology Group, commented on the difficulty of proving damages in defamation claims. He added that ChatGPT is not an individual communicating facts, but merely software that correlates and relays information available online.

The issue of AI hallucinations, or instances in which an AI produces false output unsupported by real-world data, has led to similar problems before. After being falsely accused of sexual assault by ChatGPT, U.S. criminal defense attorney and law professor Jonathan Turley said the AI had invented a Washington Post article to back up its claim.

In another incident, Steven A. Schwartz, a lawyer in the Mata v. Avianca Airlines case, admitted to using ChatGPT in his legal research, only to find that the case citations it provided were entirely fabricated. Schwartz later acknowledged that he should have verified the sources supplied by ChatGPT before relying on them.

OpenAI has acknowledged the issue of AI hallucinations and announced new training methods in May to address the problem. According to the company, “Mitigating hallucinations is a critical step towards building aligned AGI.”

Some argue that OpenAI’s disclaimer, which warns that ChatGPT may produce inaccurate information about people, places, or facts, could shield the company from liability. Even so, the outcome of this case will likely have significant implications for AI developers’ responsibility in similar future disputes. As of now, OpenAI has not responded to requests for comment on the matter.

Source: Decrypt
