We’re entering the second year of generative artificial intelligence (GAI) availability, and if you haven’t been paying attention to this cutting-edge tool, there’s no time like the present. GAI is growing at an exponential rate and is quickly being integrated into the software tools you use every day. From legal research and contract drafting to law firm management and document editing, GAI is everywhere and avoiding it is no longer an option.
This is especially true now that legal ethics committees across the country are taking up the challenge and issuing GAI guidance. The first to do so was the Committee on Professional Responsibility and Conduct of the State Bar of California, which released detailed guidance in November 2023 providing a roadmap for the ethical adoption of generative AI by lawyers. As I explained in an earlier article, that guidance was thorough and covered many different ethical issues, including technological competence, confidentiality, and the duty of candor, both to clients and to the courts.
More recently, Florida and New Jersey have weighed in: Florida issued Ethics Opinion 24-1 on January 19, and New Jersey released preliminary guidelines on January 24. Below I discuss Florida’s opinion, and in my next article I will address New Jersey’s guidelines.
In Florida Bar Ethics Opinion 24-1 (online: https://www.floridabar.org/etopinions/opinion-24-1/), the Board Review Committee on Professional Ethics concluded that lawyers may use GAI but, of course, must do so ethically. The Committee addressed a wide range of ethics issues in its opinion, from confidentiality and fees to lawyer oversight and advertising.
At the outset, the Committee explained that GAI tools can “create original images, analyze documents, and write briefs based on written prompts,” but in doing so they can hallucinate, meaning they can provide “inaccurate answers that seem convincing.” Accordingly, the Committee cautioned that all output should be carefully reviewed for accuracy.
Next, the Committee examined GAI in the context of confidentiality, explaining that attorneys who use GAI must have a thorough understanding of how the technology handles the information they input and whether that information is used to train the GAI system.
According to the Committee, a key way to do so is to vet GAI providers much as lawyers vet cloud computing providers, by taking steps to: 1) ensure the provider has an enforceable obligation to preserve the confidentiality and security of the data; 2) confirm that the provider will notify the lawyer in the event of a breach or of service of process requiring the production of client information; 3) investigate the provider’s reputation, security measures, and policies, including any limitations on the provider’s liability; and 4) determine whether the provider retains submitted information before and after the discontinuation of services or otherwise asserts proprietary rights to the information.
Regarding client consent, the Committee noted that one way to mitigate confidentiality concerns is to use in-house GAI tools, like the legal-specific tools referenced above, “when the use of a generative AI program does not involve the disclosure” of confidential information to a third party, in which case “a lawyer is not required to obtain informed consent from a client pursuant to Rule 4-1.6.”
Next, the Committee reiterated the obligation to carefully review the output of any GAI tool and noted that attorneys must ensure that everyone in the firm who uses these tools does the same. The Committee also warned that lawyers should not delegate to a GAI tool any work that “constitutes the practice of law, such as claims negotiation,” nor should GAI be used to create a client intake chatbot on a website that might inadvertently “provide legal advice, fail to immediately identify itself as a chatbot, or fail to include clear and reasonably understandable disclaimers limiting the lawyer’s obligations.”
Another key topic discussed was legal fees associated with the use of GAI. The Committee explained that clients should be informed, preferably in writing, of the lawyer’s intent to charge them for “the actual cost of using generative AI,” and that lawyers may bill for the “reasonable time dedicated to researching and writing a specific case when using generative AI.”
Importantly, the Committee recognized that learning about GAI is part of a lawyer’s duty of technological competence, explaining that lawyers are required to develop competence in their use of new technologies like GAI, including an understanding of their risks and benefits. However, clients cannot be billed for “time spent developing minimal skills in using generative AI.” In other words, you cannot pass along to clients the cost of maintaining your ethical duty of technological competence.
The opinion also covers other ethical issues, so be sure to read it in full. I don’t think some of its requirements will stand the test of time, such as the need to inform clients that you plan to use GAI. In the past, when ethics committees required that clients be informed about the use of a technology, the requirement faded over time as the technology became commonplace. GAI will follow the same path, but it will happen much more quickly.
This is why these ethics opinions are so important: they provide lawyers with a roadmap for the ethical use of these tools. That said, I don’t think GAI-specific opinions are strictly necessary; earlier opinions on the adoption of technology provide sufficient guidance that can easily be applied to GAI. From a practical perspective, however, these opinions serve a purpose: they offer a framework for moving forward and encourage lawyers to embrace GAI and the future it brings. This means you have no excuse: there is no better time than now to familiarize yourself with GAI and its significant potential.
Nicole Black is a Rochester, New York attorney, author, journalist, and head of SME and external education at MyCase and LawPay, AffiniPay companies. She is the nationally recognized author of “Cloud Computing for Lawyers” (2012) and co-author of “Social Media for Lawyers: The Next Frontier” (2010), both published by the American Bar Association. She is also co-author of “Criminal Law in New York,” a Thomson Reuters treatise. She writes regular columns for Above the Law, ABA Journal, and The Daily Record, has authored hundreds of articles for other publications, and regularly speaks at conferences on the intersection of law and emerging technologies. She is an ABA Legal Rebel and is featured on the Fastcase 50 and ABA LTRC Women in Legal Tech lists. She can be contacted at (email protected).