ChatGPT Just Got Sued for Defamation
ChatGPT just got sued for defamation. Mark Walters, a radio host from Georgia, is suing OpenAI after its AI chatbot told a journalist named Fred Riehl that Walters had defrauded and embezzled funds from an organization the chatbot claimed he worked for.
The responses ChatGPT gave Riehl about the case and Walters were completely fabricated. In truth, Walters had nothing to do with any defrauding or embezzling of funds from the Second Amendment Foundation.
The case itself, The Second Amendment Foundation v. Ferguson, does not concern financial fraud at all. It is a lawsuit the gun rights group filed against Attorney General Bob Ferguson over Washington state's gun laws, and Walters's name appears nowhere in the 30-page complaint.
The incident has not only angered Walters; he is now taking OpenAI to court. The case may be unprecedented, because it could be challenging to establish in court that an AI chatbot can genuinely damage someone's reputation. Nevertheless, the lawsuit matters: it could set a precedent for future cases.
According to the complaint, OpenAI's chatbot propagated false details about Walters when a reporter requested a summary of a legal case involving an attorney general and The Second Amendment Foundation.
The AI chatbot incorrectly claimed that Walters was involved in the case and held an executive position in the foundation, which was not the case. In reality, Walters had no affiliation with the foundation or the case whatsoever.
Although the journalist did not publish the false information, he did check it with the lawyers involved in the case. The lawsuit argues that companies such as OpenAI should bear responsibility for errors committed by their AI chatbots, particularly when those errors have the potential to harm individuals.
The crucial question now is whether a court will deem fabricated information from AI chatbots like ChatGPT to be defamation. A law professor believes this is possible, because OpenAI acknowledges that its AI can make mistakes yet does not present its output as a joke or a work of fiction.
Whatever the outcome, the lawsuit could hold significant implications for the future use and development of AI, particularly regarding how the law treats information generated by AI.
While Walters might be the first litigant to challenge "AI hallucinations" in a legal setting, the prevalence of AI generating false information will probably lead to a surge of similar cases in the foreseeable future.
In April 2023, an Australian mayor went so far as to threaten legal action against OpenAI after ChatGPT falsely asserted that he had been convicted in a corruption scandal, when in fact he was the whistleblower who exposed the misconduct.
WHAT COULD THIS MEAN GOING FORWARD
This legal action could entail several notable consequences:
- AI Liability and Regulation
- Understanding of AI Limitations
- Enhancement of AI Systems
- Ethical Considerations in AI
- Legal Standing of AI