A Conversation About Generative AI and Access to Justice with Judge Paul Grimm
Since the introduction of ChatGPT and the large language models that followed, generative AI has firmly established itself as one of the most significant technological advancements of the past quarter century.
Though still in its early stages, generative AI promises to fundamentally change how technology is built and used. And as it clears hurdles, regulatory and otherwise, its potential to transform the legal landscape has come increasingly into focus.
Perhaps no voice has been more prominent in examining its impact thus far and looking toward its future than Judge Paul Grimm. As a former U.S. District Court Judge with over 25 years of experience on the bench, Judge Grimm has been present for nearly the entire evolution of the ediscovery process and the innovation of new legal technologies. He has authored several papers on the potential impact of generative AI on the legal system, and he is frequently called upon to speak as an expert in the field.
Coming off a panel at Everlaw Summit with Professor Maura Grossman in which they spoke extensively about generative AI and the need for equal access to it, Judge Grimm continued the conversation with Everlaw.
Let’s start off by touching on your panel with Professor Grossman at Everlaw Summit. Much of the discussion centered around the need to create clear standards for how AI technology can be used and presented in courtrooms, and the consequences of falling behind on that front. How can AI evidence either advance or hinder the administration of justice?
We know that in the world today, algorithmically powered software applications are everywhere. They're being used in medicine, finance, education, and employment decisions. They're being used for case evaluation, litigation financing, claims evaluations by insurance companies, and benefits evaluations by government agencies like Social Security.
There's an arms race going on right now that's only going to continue. So, that means that all of the interactions that take place among human beings, whether they're governments or individuals, whether they're personal or public, are being influenced by these artificial intelligence software applications.
The concern is, now we have a lawsuit or a criminal charge, and we're in court. Where is the evidence going to come from? Well, what are people doing in their lives? They're engaged in all of these activities using artificial intelligence-powered algorithmic applications.
We can't stop it. Now we have a dispute, and the dispute comes in, and what evidence is going to prove or disprove the dispute? It's going to involve images, and audio, and visuals, and decisions made relying on algorithmic software.
Then the issue is what gets introduced into court. We worry about the difference in resources. Are you always going to need to have a technology expert in all these cases? Or is there another way that you could deal with it?
But those are ordinary kinds of in-court decisions that can be made. The existing rules of evidence have tools that are flexible enough to be applied, with proper discovery and management by a court, to get fair information out there and reach just results. So, that's not necessarily what we're worried about.
"Now, fake evidence has always been out there. But the problem is, fake evidence can now be so good, and so realistic, and so many other people can have access to it."
Deep fakes are another matter.
Let me give you a hypothetical example. You and I have known each other, we've been in programs, we've done work together. I've had Zooms with you. You've had Zooms with me. I've left you voicemail messages in the past. We've met at some conferences.
All of a sudden, one day, I get a voicemail message on my cell phone that sounds exactly like you. It's making this demand that if I don't pay money, they're going to disclose some horrible dark secret in my past, which is not true. They're going to put it on social media and my reputation will be ruined. And so, I go to the police, or I file a civil suit, and I say, "Listen to this. I know this guy. I know what he sounds like. That's him."
You come in and say, "Oh, that's a deep fake. I didn't do that. I would never do that. And as a matter of fact, from the information that we have, this supposedly was left on your phone at 9:35 in the morning on a certain date. I wasn't available then. Maybe I was in a doctor's appointment, or maybe I have witnesses saying that I was out of town."
Now what happens is one side says this is real evidence. One side says it's fake. What do we do with that situation?
When you have that dispute of fact that has to be resolved to decide if it's relevant, the judge doesn't make that call. The jury does. And we know from psychological studies that once you start putting evidence into audio-visual format, and juries hear it and see it, it alters the way they look at the whole case, even if they have some suspicion that it might not be legitimate.
So, when one person says it's fake, and the other person says it's not fake, how are we going to have the tools to deal with that new kind of evidence when it's everywhere? What happens when it comes up in a domestic relations case, where one-third of the people don't even have lawyers, and they're standing there in court showing the judge something on their cell phone that they got as a text message? That's what we're worried about.
Now, fake evidence has always been out there. But the problem is, fake evidence can now be so good, and so realistic, and so many other people can have access to it. Being a master forger of paintings takes a lot of time and skill, right? Not everybody on the street could do that. But everybody on the street can get an application where, for five bucks and one minute's worth of your voice, someone can make you appear to say anything they want just by typing in a script.
In your article for the Duke Law & Technology Review titled “The GPTJudge: Justice in a Generative AI World,” you and your co-authors cite a statistic that “jurors who hear oral testimony along with video testimony are 650% more likely to retain the information.” How can juries be better protected from seeing this sort of evidence in the first place, and what (if any) tools can they receive to help them spot deep fake evidence? And, going one step further, do you think there’s a place for AI evidence in the courtroom, or would you rather see a courtroom where AI evidence on its own is inadmissible?
The genie's out of the bottle on that, and we'll never go back.
Remember, we have two situations. There's the situation where I say it's AI and you say it's AI; you say it works, and I say it doesn't. Everybody is acknowledging it's AI. It's just a matter of looking at the software, seeing how it was trained, seeing the population it was applied to, finding out what the error rate is, and determining all those kinds of things, which can be done.
Now, you may have access to justice problems because you need an expert to be able to do that, but those issues are not particularly challenging. How you would do it in an individual case could be challenging, but the steps you have to follow are pretty well established. It's the deep fake situation that's harder, because everybody knows deep fakes are out there.
So, you say, "It's a deep fake." And the jury goes, "Well, yeah, they're out there, so maybe it is a deep fake, what do I know?" And there's nobody who can explain it. They assume that legitimate evidence is fake and they disregard it. Or on the flip side, they see fake evidence and they conclude that it's legitimate because it's so compelling. And remember, the threshold of whether or not a jury gets to hear it is 51% more likely than not. That's just a slight bit better than a coin toss. So, you have a double whammy to the truth-making process.
We're always going to be one step behind technology. And then you have to constantly ask whether the lawyers understand this technology. Do they know about it? Do the judges? They're very busy; they're overwhelmed with cases. Do you have to have an expert in every case? What are the ramifications? Are we going to make it even more expensive for people to have access to the justice system? And if more people don't have access, how do you have faith in a system that you can't access because it's too expensive?
These are some of the challenges we're facing, and they're big challenges.
I don't know any judges in any courts, whether they're local courts or federal courts, that don't have a workload which has far more demands than they have time. By the time I was getting ready to retire, I was working seven days a week, and I still was never as prepared as I wanted to be. I was always as prepared as I could be, but I was never as fully prepared as I wanted to be.
What do you make of courts issuing standing orders that require attorneys to submit declarations about whether they’ve used AI at any point during the trial preparation process? Should there be codified standards for using AI as a practitioner vs. using AI for evidence?
Professor Grossman and I wrote an article on this that was just published by Judicature, and our position is that they're well-intentioned. We know why the judges did that, and I don't fault them for wanting to make sure people were not filing things in court that they had not checked to make sure the facts were accurate and the cases were real. That obligation has existed forever. It's part of the rules of civil procedure, and part of the ethical obligation of candor lawyers owe to the court.
The challenge I have is that some of those orders end up being all over the place. First of all, you've got, what, a thousand federal judges out there? So, is each judge going to have their own policy? Maybe some of them will follow the same one, someone's going to tinker with it, someone else is going to change the language, and some of them are just overbroad. I think this increasing number of one-off orders creates confusion. Some of them are not drafted as clearly as they need to be as to what is and is not acceptable.
In fact, some of those orders would preclude the use of artificial intelligence applications that could help reduce costs or increase access to justice. I think the motivation is well-intentioned, but trying to follow through on these things, monitor them, and keep track of them, if you're a lawyer who practices in many jurisdictions, can be nearly impossible.
For example, in my court, the U.S. District Court for the District of Maryland, we had 10 active district judges and eight magistrate judges. At a given time we’d also have three or four senior judges, and five or six bankruptcy judges. Any one of those could have their own order, and sometimes they would not be consistent. What Professor Grossman and I said in our article was that if you're going to go that route, have a local rule applicable to the whole court.
That's what the Fifth Circuit has recently done. They're saying, "If you're going to file something in a court in the Fifth Circuit, here's the rule we're going to require." And they put it out for comment. That way, a lawyer can look at it and say, "Whoa, timeout, you're saying this, but what do you mean by it? Because I know that this program, which is just Westlaw, uses that. Are you saying that I have to disclose that I used Westlaw, which everybody's been using for 20 years and which now has algorithms powering it, when I'm doing research for a brief?" And the court can clarify: no, that's not what we mean. That kind of notice, publication, public comment, and redrafting, with the resulting order applying to the entire court and all the judges in that court, whether it's an entire circuit or a single district, is the better way to do it.
There's also nothing wrong with going on your website and posting something that says: whatever you file with this court, whether it's a product of algorithmic research or not, you are responsible for complying with the rules that require you to investigate, to make sure that the facts are what you say they are, that the law is real law, and that it applies to the facts of this case. And if you fail to do that, you're subject to sanctions.
I don't have any problem with those kinds of notice requirements. It's just that now you've got a lot of judges who probably don't have much technical sophistication grabbing some sort of language, which they either came up with themselves or somebody else drafted for them, that says you can't do this without doing these other things.
There are now third-party vendors, including Everlaw, releasing their own generative AI systems to assist attorneys with everything from document analysis to building case narratives and more. What would your guidance be for attorneys who are incorporating this technology into their everyday practice?
The ethical issues that affect lawyers are very interesting, and they're not necessarily easy. Forty states have adopted a requirement that the competence to take on a case includes technical competence. In addition, you can't overbill clients, and you have duties to your clients. You also have to maintain client confidences and confidentiality about the subject matter of the litigation. So, there's a whole host of really tricky ethical issues that come up.
"There's a lot about generative AI that may very well make sure that more people have access to filing decent, legitimate complaints in a way that allows them to be heard, which would be a great thing."
There's a research project that was recently done in which four assignments were given to a group of law students; some of them used generative AI, and some didn't. It was a blind study. And what they found in the blind grading is that the quality didn't seem to change much one way or the other, but the speed with which you could do the work changed a lot.
If it turns out that by using generative AI, and then checking it and making sure it works right, you can do something in an hour and a half that used to take a day, then it is going to be a potential ethical violation not to use it if you're charging your clients seven hours more than they should have been charged had you done it in a more efficient, technically acceptable way.
As Professor Grossman says, AI is a tool. It's like a hammer. If you have to drive a nail into a piece of wood, there's not much better than a hammer. But a hammer could also be used to knock someone over the head and injure them. And it's not the hammer that's bad. It's the way in which it's used. These AI systems are a tool. How they’re used, how they’re designed, is what makes them either good or bad.
There's a lot about generative AI that may very well make sure that more people have access to filing decent, legitimate complaints in a way that allows them to be heard, which would be a great thing.
There are pluses to it and there are minuses to it. Right now, it's like the gold rush. You've got all these covered wagons from all over the United States, all heading out to California, trying to get to someplace where they can put their grubstake down and make a billion dollars, all heading toward that promised land.
And some of them will get there and do it, and do it successfully. There's new stuff coming in all the time, and it has promise. But again, there will have to be guardrails establishing how these tools are going to be used.
In “The GPTJudge: Justice in a Generative AI World”, you and your co-authors write about the need for experts to help root out deep fake and GenAI evidence. With the rapid advancement of AI systems, how do you manage the proliferation of AI technology so that both parties enter the courtroom on a level playing field, and have access to technology experts and technology itself regardless of cost?
I think that's a tremendously important question. Let's say you're one of the top 100 law firms. The per-partner profit share is $4 million. You're paying your associates $300,000 right out of law school, which is more than a federal judge is making. You have all these resources.
Then you have a civil rights case where someone comes in and says their civil rights have been violated, and they don't have resources. So how do we level the playing field? We're going to say that if one side is going to use AI, either there has to be some way of letting the other side have access to that technology or its equivalent, or we're going to try to appoint lawyers who can work pro bono to provide access to it.
We already know that we have an access to justice crisis in this country right now. There's an old joke that we have the most amazing civil justice system that no one can afford. According to the World Justice Project, the United States is ranked 115th out of 142 countries in terms of the accessibility and affordability of civil justice. And if members of the public can't get access to the justice system, they're not going to respect what it does. And that's just going to lead to even further deterioration.
Now, there is some innovative stuff being done. Utah has what it calls a regulatory sandbox, operated by its Supreme Court, that allows non-lawyers and lawyers to work together, under the court's supervision, to give people low-cost, effective access to technology and methods to represent themselves in a variety of matters like domestic relations, banking, education, employment, and bankruptcy.
The sandbox has allowed these legal professionals and non-legal professionals to work together to come up with products that can increase access to justice. And of course, the technology is always developing.
I think it’ll be a continued challenge for judicial systems to provide as much access as they can, so the public can have confidence in the judiciary.
What is one thing you want the next generation of judges and attorneys to know about generative AI’s role in the future of the legal system, and how can they best educate themselves on its impact?
I am technophobic. I am terrified that every time I push a button, I'm going to destroy what I spent the last four hours working on. My kids are not. When I have a problem, I go to them, and they solve it so fast, I can't even follow them.
"Discovery is the big enchilada. That's everything. You're going to win or lose in the real world based upon something that's barely taught in a lot of law schools."
These digital natives, these people who have grown up with this technology and have a natural acceptance of it and a tolerance for it, also have a great curiosity and lack of fear. So, that's good. We already know that they're ahead of the game in that way.
They get the importance of it because they interact on all these platforms all the time. They're on social media, they text, they tweet, they do all these things. What I think the law schools need to do is make sure that they have courses that address this. For example, if you take your standard civil procedure class, you're not going to spend much time on discovery. Maybe you get a day. But if you're in litigation, I mean, Everlaw makes money because 98% of all civil cases don't go to trial. They get resolved, and discovery is the name of the game. After discovery, you either settle it, you win it on a motion, or you go to trial and win based on the discovery.
Discovery is the big enchilada. That's everything. You're going to win or lose in the real world based upon something that's barely taught in a lot of law schools. There need to be courses that do teach that, and there need to be courses that are multi-disciplinary.
I think that law schools would do their students a better service if they made sure there was a full range of courses available to help train students in career-enhancing technological skills for when they come out. So far, that's been pretty hit-or-miss, depending upon whether you had a professor who was interested in technology and wanted to offer those courses, or an adjunct who wanted to teach it.
A lot of law schools, particularly some of the more prestigious ones, look at themselves as training people to think. Well, that's good, but the legal system doesn't exist in the abstract; it exists to function, and we need people who are technically proficient and equal to the task.
And that's a problem, because when I was a judge, I had hundreds of civil cases, and I could not possibly devote equal amounts of time to every case. So I had to pick and choose which ones I wanted to devote time to. And I had experts that dealt with psychology, engineering, economics, accounting, mental health, medical issues. How was I supposed to be equal to the task of sorting out those things in each of those areas all by myself?
You could have a court-appointed expert. There's a rule of evidence that says you can appoint one, but it doesn't say who's going to pay for them or whether the resources are there to do it. So, we're going to find that with the promise of new technology comes challenges, and we're going to need a really great partnership between the tech people and the legal people who really want to make the system work the way it's supposed to.
Responsible Practices for the Next Generation
Generative AI is an undeniable contribution to the legal system, and it has already established itself as an important tool for attorneys on both sides of a case. From reducing costs to streamlining the ediscovery process to automating time-consuming work, it offers numerous advantages that can play a role in transforming the way the law is practiced. Not only that, it can also increase access for underrepresented communities and help create a system of justice that is truly equitable and fair.
That said, we must also maintain a commitment to using this technology responsibly, and for the benefit of everyone, not just ourselves. As someone who’s seen the role of legal technology evolve multiple times over his more than 25 years on the bench, Judge Grimm is keenly aware of the successes and pitfalls that can come with the introduction of new technologies. In order to help generative AI fulfill its promise as the next great transformative technology, we first need to ensure it’s truly and wholly available to all.