The number of lawyers being punished for citing false cases or citations created by generative artificial intelligence tools continues to grow.
Earlier this summer, U.S. District Judge Thomas Cullen ordered an attorney to show cause why she should not be disciplined under Fed. R. Civ. P. 11 and also referred her to the state bar for disciplinary proceedings because she cited several false cases and used false citations in a brief. See Iovino v. Michael Stapleton Associates, Ltd., 2024 U.S. Dist. LEXIS 130819 (W.D. Va. July 24, 2024).
In his scathing opinion, Cullen joined judges in New York, Massachusetts and North Carolina, among others, in concluding that improper use of AI-generated authority can result in sanctions and disciplinary charges.
In Iovino, Cullen issued the order after being unable to verify several cases and citations submitted by the plaintiff's attorney. He said attorneys who fail to ensure the accuracy of filings, or who submit documents containing fabricated case law or citations, should be subject to scrutiny.
Cullen was particularly troubled by the attorney's conduct after the false references came to light. He directed her to provide corrected references and to explain why the earlier filing contained false quotes. The attorney provided additional references, but she did not explain "where her apparently fabricated quotes and references came from and who was primarily responsible for this gross error."
Cullen said, “The silence is deafening.” (Aside: If you’re a regular reader of my columns, you’ll know that I would have advised the lawyer to answer the judge’s questions directly.)
It is obvious that a lawyer should not cite false cases or use false citations in a brief. It is also obvious that GAI in the legal profession is here to stay. But what is not obvious is the impact GAI will have on the legal profession. Changes are coming fast.
Accordingly, on July 29, 2024, the American Bar Association Standing Committee on Ethics and Professional Responsibility issued Formal Opinion 512 on Generative Artificial Intelligence Tools. The ABA Standing Committee issued this opinion primarily because generative artificial intelligence tools are a "rapidly evolving target" that may create significant ethical issues. The committee felt it was necessary to offer "general guidance to lawyers trying to navigate this emerging landscape."
The committee's general guidance is helpful, but the general nature of Opinion 512 underscores my primary concern: GAI has a profound impact on how lawyers practice, an impact that will grow over time. Not surprisingly, at present, GAI implicates at least eight ethical rules, ranging from competence (Maryland Rule 19-301.1), to disclosure (Maryland Rule 19-301.4), to fees (Maryland Rule 19-301.5), to confidentiality (Maryland Rule 19-301.6), to supervision obligations (Maryland Rules 19-305.1 and 19-305.3), to a lawyer's duties before a court to be candid and to pursue meritorious claims and defenses (Maryland Rules 19-303.1 and 19-303.3).
As a technological feature of the practice, lawyers simply cannot ignore GAI. The requirement of competence under Rule 19-301.1 includes technological competence, and GAI is just another step forward. It is here to stay. We must embrace it, but use it intelligently.
Let it become an addition to your practice rather than having ChatGPT write your brief. Make sure your staff understands that GAI can be useful, but that the work product needs to be checked for accuracy.
After considering the ethical implications and putting the right processes in place, implement GAI and use it to your clients’ advantage.
This article was originally published in The Daily Record September 5, 2024.