
Lawyers Pay Sanctions for Defending AI-Generated Cases: A Hard Lesson on AI Use in Law

Take a moment to mark June 22, 2023, on your calendar, legal professionals. It was a day of reckoning when U.S. District Judge P. Kevin Castel slapped a $5,000 fine on a pair of lawyers for submitting, and then defending, fake case citations churned out by ChatGPT, an AI language model from OpenAI. The culprits? Peter LoDuca and Steven A. Schwartz of the firm Levidow, Levidow & Oberman, as reported by the American Bar Association (ABA).

Let's rewind and get the lowdown on what led to this debacle. Attorneys LoDuca and Schwartz got a bit too comfortable with AI, specifically ChatGPT. They used the model for legal research in court pleadings, leaning heavily on its ability to generate human-like text. Unfortunately, that trust was misplaced.

ChatGPT, despite its impressive abilities, fabricated its answers. The model churned out bogus cases that the duo ended up defending in court without ever verifying their authenticity. Judge Castel dismissed one such fictitious case, "Varghese," as total "gibberish," even though it contained internal citations and quotes from other nonexistent cases. As Judge Castel put it, "other cases cited in 'Varghese' are real cases, but they don't stand for the propositions cited."

The plot thickened when Schwartz admitted in a May 25 affidavit that he had used ChatGPT to "supplement" his research after finding no help from Fastcase. What started as a supplement quickly became the main course, leading the duo down a slippery slope: neither lawyer checked the case law ChatGPT provided.

Judge Castel weighed the "significant publicity" surrounding the case against the lawyers' sincerity. He took into account their stated "embarrassment and remorse," their clean disciplinary records, and the low likelihood that they would repeat such actions.

Even so, penalties followed. LoDuca, Schwartz, and their firm were hit with a $5,000 fine. Responding to the order, Levidow, Levidow & Oberman told the New York Times and Law.com that while they intended to comply fully with Judge Castel's order, they respectfully disagreed with the finding that their lawyers had acted in bad faith.

So, there you have it! A pointed lesson for anyone who's gotten a bit too comfortable with AI in the legal field. Let June 22, 2023, serve as a reminder that while AI is a powerful tool, it can't replace the human touch and expertise required in law.

As the ABA puts it, "Lawyers must stay competent with changes in the law and its practice, including the benefits and risks associated with relevant technology." Don't let the allure of AI lead you astray. Use it, but don't let it use you!

Source: https://www.abajournal.com/web/article/lawyers-who-doubled-down-and-defended-chatgpts-fake-cases-must-pay-5k-judge-says?utm_source=sfmc&utm_medium=email&utm_campaign=weekly_email&utm_term=&utm_id=690238&sfmc_id=50767392

Lawyer's Reliance on AI Sparks Ethical Concerns in the Legal Profession

In a highly scrutinized court hearing, lawyer Steven A. Schwartz faced potential sanctions after it was revealed that he had relied on an AI chatbot, ChatGPT, to produce a legal brief filled with made-up case law and legal citations. The incident has sparked a heated debate about the implications of AI in the legal profession and the ethical responsibilities of legal professionals.

During the hearing, Schwartz appeared visibly distressed as he faced questioning from Judge P. Kevin Castel. He expressed deep remorse and embarrassment, admitting that he had not conducted further research into the cases the chatbot provided. Schwartz's failure to grasp that the AI could fabricate cases became a focal point of the proceedings.

This case serves as a cautionary tale, highlighting the potential pitfalls of relying solely on AI-generated content without proper verification. While AI tools can assist in legal research and analysis, they lack the critical thinking skills and contextual understanding that human lawyers possess. Blindly accepting AI-generated content without human oversight can compromise the integrity of legal proceedings and undermine the trust placed in the legal system.

The incident has ignited discussions about the ethical considerations surrounding the integration of AI in the legal profession. It emphasizes the importance of legal professionals understanding the limitations and risks associated with AI tools. Lawyers must recognize that AI algorithms generate responses based on statistical models trained on vast amounts of internet data. Consequently, skepticism and critical evaluation are necessary when relying on AI-generated content.

A key ethical consideration is that lawyers remain responsible for verifying the authenticity and accuracy of AI-generated information. While AI can enhance efficiency and streamline legal processes, it cannot replace the judgment and expertise of human lawyers. Legal professionals must exercise due diligence, ensuring that information obtained through AI systems is thoroughly reviewed and verified.

The case involving Schwartz also underscores the significance of human judgment and accountability in the legal profession. AI can serve as a valuable tool, aiding lawyers in their work. However, it is crucial that lawyers do not abdicate their professional responsibility to AI systems. Instead, they should use AI as a supportive tool, subjecting its outputs to critical analysis and applying their legal expertise to ensure the accuracy and validity of the information.

Moving forward, the legal profession must address the ethical implications of AI adoption. There is a pressing need for guidelines and regulations that emphasize responsible and ethical use of AI tools. Lawyers should receive adequate training and education to navigate the evolving landscape of AI technology. By striking a balance between leveraging AI's potential and upholding ethical standards, the legal profession can harness the benefits of AI while ensuring the integrity of the legal system.

In conclusion, the court hearing involving lawyer Steven A. Schwartz serves as a stark reminder of the ethical challenges posed by AI in the legal profession. The incident highlights the importance of understanding the limitations and risks associated with AI tools. Legal professionals must exercise caution, maintain accountability, and ensure that AI is used responsibly, upholding the integrity of the legal system and the trust of the public.