An attorney in New York was caught using OpenAI’s ChatGPT to write a legal brief that was submitted to the court with several egregious errors: the artificial intelligence (AI) cited court rulings that do not exist.
Steven Schwartz, an attorney at the law firm Levidow, Levidow and Oberman, is facing potential sanctions after using ChatGPT for the legal brief. Schwartz later claimed in an affidavit that he was “unaware of the possibility” that ChatGPT’s “content could be false.”
The AI-generated legal brief was filed as part of a lawsuit against the Colombian airline Avianca, with Schwartz representing Roberto Mata, who claims to have been injured on a flight to John F. Kennedy International Airport in New York City.
The ten-page brief argued that the lawsuit should continue, opposing Avianca’s request to dismiss the case. The AI-generated document cited more than a dozen court rulings, including “Miller v. United Airlines,” “Martinez v. Delta Airlines,” and “Varghese v. China Southern Airlines.”
None of these cases exists; ChatGPT fabricated them. According to Breitbart News, “When AI chatbots like ChatGPT make up information, it is referred to in the tech industry as ‘hallucinating.’ ChatGPT and other similar tools suffering from such ‘hallucinations’ is an extremely common occurrence.”
In his affidavit, Schwartz asserted that he had used ChatGPT to “supplement” his research on the case.
Screenshots released by the attorney show that he had even questioned the AI chatbot about the accuracy of the cases it had cited in the brief. Rather than researching them himself, Schwartz apparently accepted ChatGPT’s assertions that the cases could be found in “reputable legal databases” such as Westlaw and LexisNexis.
Schwartz expressed regret for his actions, stating: “I greatly regret using ChatGPT and will never do so in the future without absolute verification of its authenticity.”
His actions have drawn increased attention from the legal community, as this is reportedly the first time that artificial intelligence has been used in this manner.
The judge overseeing Schwartz’s case has scheduled a hearing for June 8 to discuss potential penalties.