
Key Highlights:

  • An AI sea change. Unlike prior “extractive” AI legal tools, “generative” AI applications can create original content.
  • Watch out for “fake” cases. A New York federal judge recently sanctioned attorneys for submitting “decisions” that had been fabricated by ChatGPT.
  • Practical takeaways. Attorneys can maximize AI’s efficiencies and avoid potential pitfalls by following three simple steps.
  • Reason for optimism. Generative AI has an extraordinary upside that should allow attorneys to practice “at the top of their license.”

Many attorneys already use some form of artificial intelligence. Document review programs like Relativity rely on “technology-assisted review” and “active learning” to “amplify the manual review process.” Legal research products like Westlaw Edge employ “AI-enhanced capabilities” to automate searches. And legal writing tools like BriefCatch marshal machine learning to catch typos, clean up citations, and streamline sentences. 
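To make that “active learning” idea concrete, here is a minimal sketch of the loop such review tools build on: a classifier trained on a handful of human-coded documents repeatedly surfaces the document it is least certain about for human review. The corpus, labels, and library choices below are purely illustrative, not any vendor’s actual implementation.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical mini-corpus; a real review set would hold thousands of documents.
documents = [
    "invoice for serving cart repair",
    "meeting notes on flight schedules",
    "email about passenger injury claim",
    "cafeteria menu for March",
] * 25
labels = np.full(len(documents), -1)   # -1 means not yet coded by a human
labels[0], labels[1] = 1, 0            # small seed set coded by the reviewer

X = TfidfVectorizer().fit_transform(documents)

for _ in range(5):
    coded = np.flatnonzero(labels != -1)
    uncoded = np.flatnonzero(labels == -1)
    model = LogisticRegression().fit(X[coded], labels[coded])

    # Surface the unreviewed document the model is least certain about:
    # its human label is the most informative one to collect next.
    probability = model.predict_proba(X[uncoded])[:, 1]
    query = int(uncoded[np.argmin(np.abs(probability - 0.5))])

    labels[query] = 1  # stand-in for the reviewer's responsiveness call
```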

But unlike earlier “extractive” technology, “generative” AI — such as the software application ChatGPT launched in November 2022 — can create original content. This emerging technology comes with an entirely new set of opportunities and pitfalls for attorneys. 

A March 2023 survey of more than 1,000 U.S. lawyers found that 80% had not yet used generative AI in their work, 73% had mixed or negative feelings about the technology, and 87% had ethical concerns. 

The recent story of two New York attorneys “duped” by ChatGPT into citing “fake” cases in a court submission — and the sanctions imposed for their failure to promptly acknowledge and correct their mistakes — illustrate some of the risks.

But the attorney missteps in the ChatGPT case are entirely avoidable. And the emergence of generative AI carries extraordinary potential if attorneys can learn to use the technology wisely.

The recent sanctions ruling in Mata v. Avianca, Inc.

In February 2022, Roberto Mata filed an action alleging he was injured when a metal serving cart struck his knee during an international flight. See Mata v. Avianca, Inc., No. 22-cv-1461 (PKC), 2023 WL 4114965, at *2 (S.D.N.Y. June 22, 2023).

Avianca moved to dismiss Mr. Mata’s claims as time-barred under the Convention for the Unification of Certain Rules for International Carriage by Air (the “Montreal Convention”).

The opposition filings submitted by Mr. Mata’s lawyers — Peter LoDuca and Steven A. Schwartz of Levidow, Levidow & Oberman P.C. — set off an unprecedented sequence of events illustrating the risks of generative AI.

In its reply brief, Avianca’s attorneys explained they had been “unable to locate” many of the authorities cited in the “Affirmation in Opposition” prepared by Mr. Schwartz and signed by Mr. LoDuca.

Messrs. LoDuca and Schwartz did not immediately withdraw the Affirmation in Opposition or otherwise address the apparent non-existence of these cases, the first in a series of compounding errors that ultimately led Judge P. Kevin Castel of the Southern District of New York to sanction both attorneys.

After the Court also could not locate the cases, it ordered Mr. LoDuca to file an affidavit attaching copies of the cited cases. After requesting an extension based on a false claim of a “vacation,” Mr. LoDuca submitted an affidavit purportedly containing all but one of the “decisions” (or at least the portions allegedly available on an unnamed “online database”).

In fact, Mr. Schwartz had prepared the affidavit and attached copies of “decisions” fabricated by ChatGPT when he asked the chatbot to identify favorable rulings addressing the tolling effect of a bankruptcy stay under the Montreal Convention. He even asked ChatGPT if the cited cases were “real,” only to be reassured that the cases “indeed exist” and “can be found in reputable legal databases such as LexisNexis and Westlaw.” 

In a fascinating discussion, Judge Castel analyzed three fake cases and identified numerous attributes of those decisions that should have led a reasonable reviewing attorney to question their legitimacy. 

The first “decision” included “gibberish” legal analysis and a “nonsensical” and internally inconsistent procedural history before it “abruptly” ended without a conclusion. The second decision ran a scant two paragraphs containing multiple obvious factual errors before abruptly ending in a sentence fragment. The third decision confused the “District of Columbia with the state of Washington” before citing “itself as precedent.” Other submitted “decisions” exhibited similar deficiencies. 

These deficiencies prompted Judge Castel to issue an order to show cause why Messrs. LoDuca and Schwartz should not be sanctioned. In response, Mr. Schwartz filed an affidavit containing misstatements about the submission of the fake cases. Making matters worse, he then submitted another affidavit that Judge Castel found offered “shifting and contradictory explanations.”

Against this background, the Court found both attorneys acted with “subjective bad faith” sufficient for sanctions under Federal Rule of Civil Procedure 11. 

The Court held Mr. LoDuca violated Rule 11 by (1) failing to read the cited cases or otherwise take any action to ensure the legal assertions in the Affirmation in Opposition “were warranted by existing law,” (2) “swearing to the truth” of his first affidavit “with no basis for doing so,” and (3) telling the Court he was on vacation when it was in fact Mr. Schwartz who was away.

The Court also held Mr. Schwartz violated Rule 11 by failing to acknowledge in the first affidavit that he was “aware of facts that alerted him to the high probability” that at least two of the fake cases “did not exist” and by making other false statements about his use of ChatGPT in preparing the Affirmation in Opposition. 

Judge Castel ordered the attorneys to send Mr. Mata a copy of the sanctions order, a transcript of the sanctions hearing, and a copy of the affidavit submitting the fake cases.

Consistent with his discussion of the serious risks to the “integrity” of “federal judicial proceedings” posed by fake federal judicial opinions, Judge Castel also ordered the attorneys to send these materials to the judges improperly identified as having issued the bogus cases. Finally, the attorneys and their law firm had to pay a $5,000 penalty. 

Practical lessons from Mata 

The Mata sanctions illustrate a worst-case scenario for attorneys experimenting with generative AI. But for all the drama in that case, Judge Castel recognized that “[t]echnological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance.”

Attorneys can avoid the missteps (and sanctions) in Mata by following these three steps:

    1.    Become familiar with the strengths and limitations of AI.

Comment 8 to Model Rule of Professional Conduct 1.1 provides that “a lawyer should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology.”

Had the Mata lawyers understood the “risks associated with” ChatGPT, they would not have filed a brief containing fake cases. Indeed, Mr. Schwartz testified at the sanctions hearing that he was “operating under the false perception that [ChatGPT] could not possibly be fabricating cases on its own. . . . And if I knew that, I obviously never would have submitted these cases.” 

Apart from the possibility of generative AI models “making things up,” attorneys should also keep in mind that some models pose a risk of “reveal[ing] information related to the representation of a client” without the client’s “informed consent” in violation of Model Rule 1.6.

Experts have explained that attorneys using “open source” AI platforms like ChatGPT can violate this duty by including sensitive case information in queries that are then shared with “AI trainers.” [Atlanta Lawyer, ChatGPT – What’s the Buzz?] New proprietary products like CoCounsel and Harvey, on the other hand, have confidentiality controls in place to minimize disclosure risks. [Law360, AI Practices Law at the Speed of Machines.]
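As a deliberately simplified illustration of that kind of confidentiality control, the sketch below scrubs obvious client identifiers from a prompt before it leaves the firm. The patterns and names are hypothetical; a real workflow would rely on a vetted redaction tool rather than an ad hoc regex list.

```python
import re

# Hypothetical redaction list; a real workflow needs a vetted tool, not regexes.
REDACTIONS = {
    r"\bRoberto\s+Mata\b": "[CLIENT]",           # client name (example only)
    r"\b\d{3}-\d{2}-\d{4}\b": "[SSN]",           # US Social Security numbers
    r"\b[\w.+-]+@[\w-]+\.[\w.]+\b": "[EMAIL]",   # email addresses
}

def scrub(prompt: str) -> str:
    """Replace sensitive identifiers before a prompt leaves the firm."""
    for pattern, placeholder in REDACTIONS.items():
        prompt = re.sub(pattern, placeholder, prompt, flags=re.IGNORECASE)
    return prompt

print(scrub("Does Roberto Mata (r.mata@example.com) have a tolling argument?"))
# Prints: Does [CLIENT] ([EMAIL]) have a tolling argument?
```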

   2.    Verify AI work product and identify tasks only attorneys can perform.

Just as lawyers “having direct supervisory authority over another lawyer” must make “reasonable efforts” to ensure that submissions are accurate and work product is up to par, cf. Model R. 5.1(b), attorneys using generative AI must carefully review any work they did not produce.

This obligation featured prominently in the Mata case, with Judge Castel emphasizing the “gatekeeping role” attorneys must play “to ensure the accuracy of their filings.” A close review of the bogus decisions Mr. Schwartz received from ChatGPT would have raised multiple red flags and thus provided another offramp to avoid an inaccurate submission and sanctions. 
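Part of that gatekeeping can even be automated. The sketch below checks whether a cited case appears in an independent database before it goes into a brief. It assumes CourtListener’s public search API; the endpoint and field names should be verified against the current documentation, and a miss is a red flag to investigate rather than a final answer.

```python
import requests

COURTLISTENER_SEARCH = "https://www.courtlistener.com/api/rest/v4/search/"

def case_appears_in_database(case_name: str) -> bool:
    """Return True if an independent database returns any match for the case name."""
    response = requests.get(
        COURTLISTENER_SEARCH,
        params={"q": f'"{case_name}"', "type": "o"},  # "o" = court opinions
        timeout=30,
    )
    response.raise_for_status()
    return bool(response.json().get("results"))

# "Varghese v. China Southern Airlines" was one of the citations ChatGPT
# fabricated in Mata; no genuine opinion by that name should turn up.
print(case_appears_in_database("Varghese v. China Southern Airlines"))
```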

AI legal assistant tools, moreover, arguably qualify as “nonlawyer[s] employed or retained by or associated with a lawyer” for purposes of Model Rule 5.3 [Atlanta Lawyer, What’s the Buzz?].

Law firms and lawyers relying on these tools should therefore “establish guidelines and protocols for [their] use—and, when possible, contracts establishing direction and oversight by the attorney for uses involving separate non-attorney entities associated with [an AI] model.” [Id.]  Lawyers should also note that overreliance on AI “could spark claims of unauthorized practice of law” under state ethics rules similar to Rule 5.5. [Id.]

Attorneys should also evaluate which tasks can be delegated to AI and which tasks only they can reliably perform. In researching cases or reviewing documents, for example, an associate might ask a generative AI tool to produce an initial summary that the associate would then double-check for completeness and accuracy.
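A minimal sketch of that delegate-then-verify pattern follows. It assumes the OpenAI Python client with an API key in the environment; the model name and helper functions are illustrative. The key design choice is that the model’s output lands in a review queue for the attorney, never directly in a filing.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_summary(opinion_text: str) -> str:
    """The AI produces a first draft; it is never the final word."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {
                "role": "system",
                "content": "Summarize the holding and key facts of this opinion. "
                           "If anything is unclear, say so rather than guessing.",
            },
            {"role": "user", "content": opinion_text},
        ],
    )
    return response.choices[0].message.content

def queue_for_review(summary: str, source: str) -> dict:
    """Pair the draft with its source so the attorney can check every claim."""
    return {"draft": summary, "source": source, "status": "PENDING ATTORNEY REVIEW"}
```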

While potentially promising great efficiency, using generative AI to perform these typical “young associate” tasks could materially impact the attorney training track, requiring future attorneys to learn — and future law school classes to teach — a different set of “quality control” skills focused on the potential blind spots of AI. 

Similar flexibility will be required for legal writing. An AI assistant may be able to draft a brief within minutes, but attorneys must craft the writing prompt and carefully check the draft. Attorneys must also take the time to “humanize” an AI-generated brief to reflect personal knowledge about more nuanced aspects of the case and to maximize the persuasive effect the brief will have on the intended human reader.

    3.    Correct mistakes immediately.

Many of the adverse findings in the Mata sanctions order stem from the attorneys’ failure to promptly acknowledge and rectify their mistakes.

For example, “no Respondent sought to withdraw the March 1 Affirmation” containing the fake cases. Mr. LoDuca instead tried to stall by “ma[king] a knowingly false statement to the Court that he was ‘out of the office on vacation.’” And Mr. Schwartz “consciously avoided confirming” facts “that alerted him to the high probability” that some of the “decisions” he had submitted did not exist.

While implementing new technologies will inevitably involve trial and error, attorneys must immediately identify and correct any AI-related mistakes, just as they would any other error.

Looking back and looking forward

The same steps that will help lawyers navigate the emergence of AI have also helped lawyers successfully incorporate other new technologies.

Innovations like online databases, electronic discovery, and computer-assisted review each required lawyers to (1) understand the promises and pitfalls of the technology, (2) provide appropriate human oversight, and (3) adapt and improve over time.

Generative AI is no different. Harnessed correctly, it can positively transform the practice of law and empower attorneys to achieve what medical professionals have described as “working at the top of your license” [Law360, Speed of Machines].

AI can “allow lawyers to more quickly focus on the judgment and the advice and the strategic components of being a lawyer” that are most valuable. [Id.] The Mata case notwithstanding, there are plenty of reasons for optimism.

Authors: William A. Ryan, Senior Vice President & Chief Compliance Officer (AT&T Services Inc.), Allen Garrett, Partner (Kilpatrick Townsend & Stockton LLP), and Brad Sears, Associate (Kilpatrick Townsend & Stockton LLP)

 
