A new legal ethics opinion on the use of generative AI in the practice of law makes a very clear point: lawyers are required to maintain their skills in all technological means relevant to their practice, and this includes use of generative AI.
The opinion, issued jointly by the Pennsylvania Bar Association and the Philadelphia Bar Association, was published to educate lawyers about the pros and cons of using generative AI and to provide ethical guidelines for its use.
Although the opinion focuses on AI, it repeatedly emphasizes that a lawyer’s ethical obligations regarding this emerging form of technology are no different than those for any other form of technology.
“Lawyers must be as proficient in the use of technological tools as they are in the employment of traditional methods,” the opinion states. “Whether it is understanding how to navigate legal research databases, use e-discovery software, use smartphones and email, or otherwise protect client information in digital formats, lawyers must maintain their skills on all technological means relevant to their practice.”
That said, the opinion recognizes that generative AI raises issues unlike any encountered thus far in legal technology, including its ability to generate text and, in the course of generating that text, to hallucinate. The opinion states that this text-generating capability “opens a new frontier in our ethical focus.”
“Rather than focusing on the merits of an attorney’s choice of specific legal arguments, some attorneys have used generative AI platforms without verifying citations and legal arguments,” the opinion explains. “Essentially, the AI tool is giving lawyers exactly what they were looking for, and lawyers, having achieved positive results, are failing to do due diligence on those results.”
The opinion also raises AI’s potential for bias, noting that it “is not a blank slate, free from biases and preconceptions.”
“These biases can lead to discrimination, favoring certain groups or perspectives over others, and can manifest in areas such as facial recognition and hiring decisions,” the advisory states.
In light of such questions, the opinion states that lawyers have an obligation to communicate with their clients about the use of AI technologies in their practice. In some cases, the opinion advises, lawyers should obtain client consent before using certain AI tools.
12 points of responsibility
The 16-page opinion provides a concise introduction to the use of generative AI in the practice of law, including a brief history of the technology and a summary of other states’ ethics opinions.
But above all, it concludes with 12 points of responsibility for lawyers using generative AI:
- Be truthful and accurate: The opinion warns that lawyers must ensure that AI-generated content, such as legal documents or advice, is truthful, accurate, and based on sound legal reasoning, upholding the principles of honesty and integrity in their professional conduct.
- Verify all citations and the accuracy of cited materials: Lawyers must ensure that the citations they use in legal documents or arguments are accurate and relevant, which includes confirming that the citations accurately reflect the content they refer to.
- Ensure competence: Lawyers must be proficient in the use of AI technologies.
- Maintain confidentiality: Lawyers must protect information relating to a client’s representation and ensure that any AI system handling confidential data adheres to strict confidentiality measures and does not share that data with third parties who are not bound by the lawyer’s duty of confidentiality.
- Identify conflicts of interest: Lawyers should be vigilant, according to the opinion, when identifying and addressing potential conflicts of interest arising from the use of AI systems.
- Communicate with clients: Lawyers should communicate with their clients about the use of AI in their practice, providing clear and transparent explanations of how these tools are used and their potential impact on case outcomes. Where necessary, lawyers should obtain client consent before using certain AI tools.
- Ensure information is unbiased and accurate: Lawyers must ensure that the data used to train AI models is accurate, unbiased, and ethically sourced in order to avoid perpetuating biases or inaccuracies in AI-generated content.
- Ensure AI is used properly: Lawyers must be vigilant against misuse of AI-generated content, ensuring that it is not used to deceive or manipulate legal processes, evidence, or outcomes.
- Adhere to ethical standards: Lawyers must remain informed of relevant regulations and guidelines governing the use of AI in legal practice to ensure compliance with legal and ethical standards.
- Exercise professional judgment: Lawyers should exercise their own professional judgment when working with AI-generated content, recognizing that AI is a tool that assists but does not replace legal expertise and analysis.
- Use appropriate billing practices: AI offers enormous time-saving capabilities. Lawyers must therefore ensure that AI-related expenses are reasonable and appropriately disclosed to clients.
- Maintain transparency: Lawyers should be transparent with clients, colleagues, and courts regarding the use of AI tools in legal practice, including disclosing any limitations or uncertainties associated with AI-generated content.
My advice: don’t be stupid
Over the years that I have been writing about legal technology and legal ethics, I have developed my own shorthand rule for staying out of trouble: Don’t be stupid.
As in: If you ask ChatGPT to find cases supporting your argument and then cite them in a court filing without even bothering to read or Shepardize them, that’s stupid.
Or: If you ask a generative AI tool to draft a court filing or a client email and then send it out without reviewing and editing it, that’s stupid.
In their joint opinion, the Pennsylvania and Philadelphia ethics committees put these “don’t be stupid” guidelines in more polite terms, warning that generative AI tools should be used by lawyers who understand their risks and benefits.
“They should be used with caution and in conjunction with careful attorney review of the ‘work product’ created by these types of tools,” the opinion says. “These tools are not a substitute for the attorney’s own review of cases, statutes, and other legal materials.”
You can read the full opinion here: Joint Formal Opinion 2024-200.