Lawyer cited 6 fake cases made up by ChatGPT; judge calls it “unprecedented”

source link: https://arstechnica.com/tech-policy/2023/05/lawyer-cited-6-fake-cases-made-up-by-chatgpt-judge-calls-it-unprecedented/

Artificial intelligence + human laziness —


Judge weighs punishment for lawyer who didn't bother to verify ChatGPT output.

Jon Brodkin - 5/30/2023, 6:52 PM

[Image: A robotic hand points to a line on a document while a human signs it with a pen; a judge's gavel sits in the background. Credit: Getty Images | Andrey Popov]

A lawyer is in trouble after admitting he used ChatGPT to help write court filings that cited six nonexistent cases invented by the artificial intelligence tool.

Lawyer Steven Schwartz of the firm Levidow, Levidow & Oberman "greatly regrets having utilized generative artificial intelligence to supplement the legal research performed herein and will never do so in the future without absolute verification of its authenticity," Schwartz wrote in an affidavit on May 24 regarding the bogus citations previously submitted in US District Court for the Southern District of New York.

Schwartz wrote that "the use of generative artificial intelligence has evolved within law firms" and that he "consulted the artificial intelligence website ChatGPT in order to supplement the legal research performed."

The "citations and opinions in question were provided by ChatGPT which also provided its legal source and assured the reliability of its content," he wrote. Schwartz admitted that he "relied on the legal opinions provided to him by a source that has revealed itself to be unreliable," and stated that it is his fault for not confirming the sources provided by ChatGPT.


Schwartz didn't previously consider the possibility that an artificial intelligence tool like ChatGPT could provide false information, even though AI chatbot mistakes have been extensively reported by non-artificial intelligence such as the human journalists employed by reputable news organizations. The lawyer's affidavit said he had "never utilized ChatGPT as a source for conducting legal research prior to this occurrence and therefore was unaware of the possibility that its content could be false."

Judge weighs “unprecedented circumstance”

Federal Judge Kevin Castel is considering punishments for Schwartz and his associates. In an order on Friday, Castel scheduled a June 8 hearing at which Schwartz, fellow attorney Peter LoDuca, and the law firm must show cause for why they should not be sanctioned.

"The Court is presented with an unprecedented circumstance," Castel wrote in a previous order on May 4. "A submission filed by plaintiff's counsel in opposition to a motion to dismiss is replete with citations to non-existent cases... Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations."

The filings included not only names of made-up cases but also a series of exhibits with "excerpts" from the bogus decisions. For example, the fake Varghese v. China Southern Airlines opinion cited several precedents that don't exist.

"The bogus 'Varghese' decision contains internal citations and quotes, which, in turn, are nonexistent," Castel wrote. Five other "decisions submitted by plaintiff's counsel contain similar deficiencies and appear to be fake as well," Castel wrote.

The other five bogus cases were called Shaboon v. Egyptair, Petersen v. Iran Air, Martinez v. Delta Airlines, Estate of Durden v. KLM Royal Dutch Airlines, and Miller v. United Airlines.
