The Johannesburg Regional Court has condemned lawyers for using fake legal references generated by ChatGPT.
The case involved a defamation suit between a woman and her company. The plaintiff’s lawyers argued that previous judgments had already settled the question of whether a company can be sued for defamation.
You can see where this is going.
The lawyers had used ChatGPT to find and cite those cases. Over the two months following the first hearing, the attorneys tried to track down the cited judgments, but they couldn’t: ChatGPT had fabricated the citations. The references that could be traced at all led to cases that had nothing to do with the issue at hand.
Judge Arvin Chaitram criticized the lawyers’ reliance on AI-generated misinformation and ordered their client to pay punitive costs.
Do some “old-fashioned reading”
The lawyers were deemed “overzealous and careless,” but were found not to have intentionally misled the court.
While technology can assist legal research, Chaitram advised that it should be supplemented with “good old-fashioned independent reading” to avoid similar embarrassments in the future. He suggested that the shame of the episode would be punishment enough for the lawyers responsible.
Peter LoDuca and Steven A. Schwartz know the feeling all too well. In a widely reported New York case, the attorneys used ChatGPT for legal research and submitted a court filing citing non-existent cases generated by the AI tool.
Judge Kevin Castel ruled that lawyers have a gatekeeping role to ensure the accuracy of their filings, underscoring the importance of verifying the output of AI tools in the legal profession. The attorneys were fined $5,000 for failing to meet those responsibilities.