A New York attorney named Steven Schwartz of the law firm Levidow, Levidow & Oberman evidently figured he could save time and effort by having ChatGPT write a legal brief for him. It went terribly wrong, with the artificial intelligence construct “hallucinating” and citing fake cases all over the place.
When the judge in the case found out, he wasn’t amused in the least, and Schwartz is now potentially facing sanctions, according to The New York Times. The lawyer claimed in an affidavit, “I was unaware of the possibility that [ChatGPT’s] content could be false.”
“Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations,” Judge P. Kevin Castel wrote in a request for clarification from the law firm.
The attorney used ChatGPT while prepping a lawsuit against the Colombian airline Avianca on behalf of Roberto Mata, who claims he was injured on a flight to John F. Kennedy International Airport in New York City. The disastrous results highlight the potential drawbacks of relying on ChatGPT to write briefs or do legal research. The brief it produced was filled with references to court rulings that simply did not exist, which, of course, the judge caught upon review.
Schwartz submitted a 10-page brief arguing for the continuation of the lawsuit in response to Avianca’s request for the case to be dismissed. The brief cited over a dozen court rulings, including “Miller v. United Airlines,” “Martinez v. Delta Airlines,” and “Varghese v. China Southern Airlines.” All of them were made up or, in tech speak, “hallucinated.”
In his affidavit, Schwartz asserted that he used ChatGPT to “supplement” his case-related research.
The attorney’s screenshots indicated that he questioned ChatGPT about the veracity of the cases it cited. The AI construct responded affirmatively, contending that the rulings could be found in “reputable legal databases” such as Westlaw and LexisNexis. Not so much, it would seem.
“I greatly regret using ChatGPT and will never do so in the future without absolute verification of its authenticity,” Schwartz said in the affidavit.
A lawyer used ChatGPT to do “legal research” and cited a number of nonexistent cases in a filing, and is now in a lot of trouble with the judge pic.twitter.com/AJSE7Ts7W7
— Daniel Feldman (@d_feldman) May 27, 2023
From the defense attorneys:
“The authenticity of many of these cases is questionable” pic.twitter.com/cJeSXyl7Uc
— Daniel Feldman (@d_feldman) May 27, 2023
Another thread https://t.co/MCrXHSSNJI
— Daniel Feldman (@d_feldman) May 27, 2023
The legal community is abuzz over the case, and many are taking it as a cautionary tale about using ChatGPT. In its current form, the construct cannot replace actual legal research or, for that matter, any kind of serious work that requires not only research but in-depth knowledge of a subject. It simply cannot yet be trusted not to make things up to fit the task at hand.
So far, ChatGPT has made false sexual misconduct allegations against attorney and legal scholar Jonathan Turley and has shown leftist political bias. It is quickly destroying any faith that has been placed in it, and the results for those who do trust it could be dangerous as well as embarrassing.
The judge overseeing the case called the situation an “unprecedented circumstance” and has set a hearing for June 8 to discuss possible penalties for Schwartz’s actions.