5 Strategies for Realizing the Potential of AI in the Legal Profession — While Mitigating Its Risks
As the hype surrounding generative artificial intelligence such as OpenAI’s ChatGPT continues to build, law firms have become increasingly interested in understanding its myriad risks and benefits in the unique context of legal work. To aid in this process, Everlaw recently dedicated an educational webinar to leveraging AI while mitigating its risks, hosted by lawyer-turned-technologist and Everlaw Senior Product Lead Mondee Lu alongside our Strategic Discovery Advisor, Chuck Kellner.
Here is a brief overview of the insights shared, including some legal and ethical obligations to keep in mind when deploying generative AI, a description of the technology and how it works, and five steps firms can take to realize AI's potential while mitigating its various risks.
Ethical Obligations of Technical Competence & Duty to Maintain Client Confidentiality
Before deploying generative AI, attorneys must first ensure that their GenAI tools will not compromise any relevant ethical or legal obligations. And while the regulatory landscape surrounding AI is still in its early stages of development, and will no doubt vary by jurisdiction, many attorneys will still need to abide by a number of well-established standards and expectations.
For example, roughly 40 states have now adopted variations of the American Bar Association’s duty of technical competence (ABA Model Rule 1.1, Comment 8), which broadly states that lawyers must remain current on their knowledge of the benefits and risks associated with legal technology, including actively seeking education on how certain tools work and how they may be utilized in the context of legal work.
Additionally, lawyers have a strict obligation to maintain client confidentiality in accordance with ABA Model Rule 1.6, and must make “reasonable efforts to prevent the inadvertent or unauthorized disclosure” of protected information.
Now, while most attorneys will be more than familiar with a duty as basic as maintaining client confidentiality, it’s important to note that meeting this and other obligations becomes considerably more complex when working with generative AI. However, to understand why this is the case, one must first understand the basics of how large language models (LLMs) like ChatGPT actually work.
A Quick Introduction to Generative AI
When we use the term “generative AI” today, it’s generally assumed that we are referring to popular LLM-based systems such as ChatGPT or Google’s Bard. But the truth is that these tools represent only the latest and most advanced evolution of more traditional machine-learning technologies.
In as simple terms as possible, machine learning uses a variety of statistical and mathematical techniques to find basic patterns and relationships in data. These are the kinds of techniques behind the “auto-complete” feature in text applications, which predicts which word will come next based on co-occurrence and semantic association, essentially calculating the statistical likelihood that the use of one word will lead to the use of another.
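To make the co-occurrence idea concrete, here is a deliberately simple, illustrative Python sketch (a toy bigram counter over a made-up snippet of text, not how any production auto-complete feature or LLM is actually built) that suggests the next word based purely on how often words have followed one another:

```python
# Toy illustration of co-occurrence-based prediction: count which words
# follow which in a small sample of text, then suggest the most frequent
# follower. Real auto-complete systems and LLMs are far more sophisticated.
from collections import Counter, defaultdict

sample_text = (
    "the court granted the motion to dismiss "
    "the court denied the motion to compel"
).split()

# Tally bigram counts: how often each word follows each preceding word.
followers = defaultdict(Counter)
for prev_word, next_word in zip(sample_text, sample_text[1:]):
    followers[prev_word][next_word] += 1

def suggest_next(prev_word):
    """Return the most statistically likely next word, if any was seen."""
    counts = followers.get(prev_word)
    return counts.most_common(1)[0][0] if counts else None

print(suggest_next("motion"))  # 'to'
print(suggest_next("the"))     # 'court' or 'motion', whichever is counted first
```

In this toy model, a word like “the” simply points to whatever has most often followed it before; there is no understanding of meaning or context.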
But LLMs are more than just fancy auto-complete tools.
The central systems at play here are known as “neural networks,” which are initially trained on large sets of data before being fine-tuned through corrective feedback loops to improve accuracy. However, it wasn’t until the introduction of the transformer in 2017 that such tools could go beyond co-occurrence and semantics to represent the actual meaning of words, and most importantly, the specific context in which they are being used.
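To see what “context” means in practice, here is a minimal sketch, assuming the open-source Hugging Face transformers and PyTorch libraries and the publicly available bert-base-uncased model (our illustrative choices, not tools discussed in the webinar). It shows a transformer-based model assigning the same word, “bank,” different numeric representations depending on the surrounding sentence:

```python
# Illustrative only: compare the contextual vectors a small transformer
# (BERT) produces for the word "bank" in different sentences.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def vector_for(sentence, word):
    """Return the model's contextual vector for `word` within `sentence`."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]  # one vector per token
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
    return hidden[tokens.index(word)]

river_bank = vector_for("she walked along the bank of the river", "bank")
money_bank = vector_for("she deposited the check at the bank", "bank")
stream_bank = vector_for("he fished from the bank of the stream", "bank")

cosine = torch.nn.functional.cosine_similarity
# The two river/stream uses typically score as more similar to each other
# than either does to the financial use, reflecting context-aware meaning.
print(cosine(river_bank, stream_bank, dim=0))
print(cosine(river_bank, money_bank, dim=0))
```

The word-counting sketch above treats “bank” identically everywhere; a transformer does not, which is what allows modern LLMs to capture meaning and context rather than mere co-occurrence.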
Needless to say, the ability of advanced LLMs to generate rich, meaningful responses to nearly any mildly coherent prompt is nothing short of astonishing—so astonishing, in fact, that it can be easy to overlook their very real risks and limitations.
For example, LLMs can be prone to so-called “hallucinations,” in which the AI delivers outright false information to the user. And this is exactly what happened in the case of Mata v. Avianca; relying solely on ChatGPT for research, an attorney submitted an official brief to a judge citing court decisions that turned out to be entirely fictional.
Beyond being wary of hallucinations, attorneys will also need to ensure client confidentiality isn’t breached when prompting generative AI, particularly when the tools being leveraged are created and controlled by a third party. Some GenAI tools may also use inputs to continue to train the AI model, with the potential of client data being reflected in later generative outputs.
Speaking more broadly to the risks associated with generative AI, Mondee emphasized that “these systems are fundamentally probability language machines, not systems with any inherent notion of truth or falsity or accuracy or ethics or pragmatics. And so, in order to use these tools well and responsibly, lawyers also have to understand their shortcomings and things that they have to look out for.”
5 Steps to Mitigate AI Risk in Practice
The potential impact of GenAI on the practice of law is staggering. For a profession whose core skill is deep knowledge and whose primary expression of that skill is the written word, GenAI could radically transform core legal tasks. Indeed, many in the profession view the emergence of generative AI as a “get on board or get left behind” moment. In a recent survey by Everlaw, the International Legal Technology Association, and the Association of Certified E-Discovery Specialists, 72 percent of respondents said that the legal profession was not prepared for the impacts of GenAI. Forty percent were either using it or planning to use it anyway.
Lawyers should not let these risks dissuade them from realizing GenAI’s rewards. Instead, careful implementation can address potential risks while preserving access to the technology’s benefits.
Strategy is everything when it comes to successfully implementing any new technology, and having some basic guidelines to follow can make a world of difference in achieving the desired result.
Here are just five steps that any firm can take when getting started with GenAI.
1. Learn Through Well-Defined Procedures
In our view, the legal professionals who will have the most success with generative AI will be those who begin with a “growth mindset,” respecting the fact that they won’t, and can’t, know everything about the subject matter immediately. Above all, regard AI as a tool to help further develop your talent and abilities, and establish well-defined procedures to track your progress.
2. Make It Part of the Job
Leveraging AI effectively will only happen if it’s made an organizational priority. In other words, adopting such a powerful technology will require considerable resources and should not be treated as a short-term transformation. Moreover, adding structure to your Legal Ops team’s AI journey will make it significantly easier to satisfy your firm’s duty of technical competence.
3. Develop an AI Governance Framework
Attorneys are no strangers to abiding by a concrete framework of principles and controls, and your firm’s relationship with AI should be no different. Fortunately, a number of publicly available resources can help guide your development of an AI Governance Framework, such as NIST’s Artificial Intelligence Risk Management Framework.
4. Evaluate Opportunities for AI and Start Small
It’s critical to identify opportunities to work with AI that are unique to your practice and only then develop a staged rollout of AI tools alongside a roadmap for adoption. When choosing a specific solution, look for a legal software platform you can trust rather than an off-the-shelf or open-source application, and be sure to have the right data protections in place when relying on a third-party provider.
5. Check the AI's Work
Lastly, the importance of maintaining insight and visibility into the source documents being used, as well as exercising a healthy suspicion of generative AI’s output, cannot be overstated. And while checking the AI’s work may require some manual effort, it will be worth knowing that your attorneys aren’t relying on misleading information or “hallucinated” facts.
Want to learn more about how to realize generative AI's full potential in the context of legal work? Click here to watch the webinar in its entirety.