Judge Scott Schlegel Talks ChatGPT, Transforming the Courtroom, and More with Everlaw
Creating a different type of courtroom isn’t for the faint of heart.
History, tradition, rules, and practices are layered together into a system that often takes decades to move.
But with the rush of new technology into the market, and the onset of generative AI in particular, what once moved in decades is now faster than ever.
And while many courts struggle to gain a foothold in the present, there's a judge in Louisiana who is already finding his footing in the future.
Judge Scott Schlegel serves on the Fifth Circuit Court of Appeals and as Chair of the Louisiana Supreme Court Technology Commission, and was recently appointed to the Advisory Council of the ABA Task Force on Law and Artificial Intelligence.
He is one of the most technologically forward-thinking judges in the country. He’s embraced tools like online calendaring, Zoom, and other ways of “digitizing” the courtroom to make it more accessible and aligned with modern technology.
Judge Schlegel spoke with Everlaw about his 21st-century courtroom, ChatGPT, how courts can start implementing change now, and more.
I want to start out with the topic of technology generally. As a tech leader in a historically tech-averse profession, what initially drew you to technology and sparked your interest in generative AI specifically?
I see generative AI as the shiny new object. It’s great, and it’s interesting, but something like ChatGPT isn’t designed for the law. That said, generative AI has a lot of potential to help with simply modernizing our courtrooms.
For example, probably three-fourths of judges don't have their own personalized website. The general government site might have a page with a couple of judges' information on it, but that’s really it. If a court has a true, regularly updated website, it probably has all the handbooks for whatever you might need. They contain answers to things like: How do you change your name? Where do you go to court? If you’re a juror, what should you expect?
All these different questions, you can actually start feeding into generative AI to clean those handbooks up, make them more efficient, modernize them, and then start building things out. You can start building audio explanations of these processes. You can start building deepfake-style videos that are actually useful in that context. You can start layering it all with online calendars.
When you're sending the links to everybody, whether it's a domestic case, a criminal case, or a civil case, you can have these “what to expect” type videos, not necessarily regarding the practice of law, but just regarding what to expect when you come to court.
Why aren’t we using generative AI to build these sorts of things out? Why aren’t we sending links to these videos, or audio, or improved PDF handbooks?
There are plenty of positive use cases for generative AI that are not affecting confidentiality or privacy. This is how I see generative AI having the greatest impact.
"When a court says 'Tell me whether you use genAI or not, or you cannot use it,' well, we don't ask if you used a law clerk. We don't ask if you used a paralegal. If you cut and paste from a brief that you filed three years ago that's completely useless now, we don't ask if you did that."
And I understand my role as a judge, and appreciate how much authority I have. But I need to try to reduce as many barriers as I can to make a more efficient, effective, and accessible justice system, and put a little effort in by using these generative AI tools, because I don't have time to do everything. But I do have time to say, "Hey, generative AI, read this handbook," or, "Hey, generative AI, I'm going to build a chatbot on this knowledge base only."
You’re involved in a lot of different mediums. You have a Substack, a podcast, and your own personal website. Do you wish more judges utilized these platforms to spread the educational word about technology and took a more active role with this transformative technology?
Not necessarily. I mean, judges are judges, and technologists are technologists. There's no reason to go learn how to build a Squarespace website if you're a judge.
The reason I have these different mediums is because I can't scale myself. I want people to see what the 4-minute mile looks like and show them that it's possible so that they can go, "Hey, I am willing to go and get people to do this for me." Judges are busy. They have 100 cases on their criminal docket. They have a murder trial they have to handle, and in between, they have to sign a search warrant or an arrest warrant. They then have to go flip over their hat and handle child custody or divorce proceedings. There's a lot going on.
And maybe it sounds simple for a judge to go build a website, but in reality it’s very difficult because the justice system is not made up of the judge. It's the judge, the clerk of court, the district attorneys, the public defenders, the private practitioners, and on and on. There's no budget for any of this. There aren’t any true technologists that understand legal design. And so to say, "Hey, judge, stop and learn how to do this," is not very realistic.
I'm doing this so that somebody else might go, "We should start embedding legal technologists and actually start to budget for it.” And then you have a roadmap you can use to actually go change your justice system overnight.
And it's not that every judge should learn how to become a legal technologist. That is certainly not the case. It's to get the judges interested to see that it's possible because we are the leaders of the justice system.
That leads into another question I have about the recent standing orders that have been issued by courts, like the proposed rule change by the Fifth Circuit requiring the disclosure of generative AI use in documents prepared for court. You wrote a letter coming out against those orders, and used ChatGPT to do so. I was curious if you could go a little more into that and perhaps speak to why you used ChatGPT to write the letter.
Well, I don't have time to sit here and write letters, and that's a great use of ChatGPT. I'm not using any confidential information. I simply input my thoughts and ideas that I need to spit out quickly, and I can take what I want, delete what I don’t, and get it out. That's why I use it.
I'm not asking ChatGPT, “What do you think about these orders?” I'm saying, "Hey, ChatGPT, this is what I think of these orders. I don't think they're necessary. I think it's overregulation. I think it's stifling innovation. I think that we have plenty of rules and regulations, like Rule 11, that allow us to sanction the attorneys that don't do what they're supposed to do.”
I can tell ChatGPT all that in two paragraphs and have it spit out something cleaner for me that I can then edit. I use it as an assistant as opposed to letting it be my whole thought process. I'm the one providing the direction in the prompting. You need to use it properly.
I think we're actually seeing it play out. We have enough tools to ensure that the practice of law is done properly in the sense of Rule 11, at least in federal court. If you're throwing out nonsense to the court and you use a generative AI tool that’s hallucinating and making up cases, you're going to get sanctioned. And that's what's happening right now. There's no need for a rule on this stuff in the sense of a new rule for generative AI. We don't need it because Rule 11 exists.
A judge is going to sanction you if you aren’t doing your job as an attorney and you're providing fake cases. You have a duty of candor to the tribunal under Rule 3.3 of the Rules of Professional Conduct that says you better understand what you're telling the court. And every time you sign that order or you sign that memo, you're telling the judge everything in it is true and correct to the best of your ability and understanding. Period.
We can apply this to any technology, because there's going to be something after generative AI, there's going to be something after these large language models, and I think our rules and regulations can stand the test of time. And sure, we’ll probably have to modify a few of them, but that doesn't mean we have to change everything.
"You have to understand the nuance of law and the practice of law before you just accept these outputs provided by generative AI. You're not going to really understand and build that scarring that you need to understand the nuance because the law is nuance. The practice of law is nuanced."
When a court says “Tell me whether you use genAI or not, or you cannot use it,” well, we don't ask if you used a law clerk. We don't ask if you used a paralegal. If you cut and paste from a brief that you filed three years ago that's completely useless now, we don't ask if you did that. When you sign that document, you are telling me this is your work product and it's the best that you could do and it's true and correct to the best of your ability to understand. Period, the end.
We're talking on the heels of OpenAI's announcement of Sora. On your website, you have a whole section dedicated to deepfakes. The introductory video on there is a deepfake. With technology like this coming out and new technology emerging every day, how do you see it playing out in the courtroom? Do you see technology that can verify the authenticity of media as a necessary tool, or is there a different route?
If you've read all of my stuff, you know deepfakes scare the heck out of me. Now, Sora is so good, but you can probably tell that it's not somebody's surveillance video. That's movie-production-level good. Today's consumer-grade deepfake video, the stuff you can just buy off the shelf for a few hundred bucks a year, is still off. You can see there's something off about it. So that doesn't scare me as much today necessarily, but at the same time, I started playing with this technology a year ago, and it's even better today than it was then. And so, again, in 18 months, I'm sure it's going to be even better.
But it's the audio that scares the heck out of me. You can clone audio pretty easily. That's why I use cloned audio in my podcast, to show how easy it is. You can tell it’s cloned, but I'm not putting forth much effort there. I've used my phone to record a quick one-minute voice memo and submitted it, and it doesn’t sound that bad.
If somebody comes in and says, "Hey, judge, I have a voicemail of my husband threatening me," and that person is able to use ChatGPT and feed it real information, with the name of the dog, the names of the kids, where you live, where you work, and then they have your voice from all the messages you've left them in the past, they can feed that into this voice cloning technology and use it to build this script out. And then they have a convincing voicemail from a cloned phone number that they show the judge, who then says, "Protective order put in place, please serve the defendant. Remove firearms. The children are now to be in the custody of the mother. They are not to go to the house anymore." That's scary. And that's temporary.
Then two weeks later, depending on your jurisdiction, there’s a hearing, and the husband comes in and says, "Judge, that's fake." The judge is going to say, “Yeah, right.”
Who are the experts? Nobody subpoenaed the phone records because that takes a while, and then it’ll take even longer to find that it's a cloned phone number. And then you hire an expert to say, "That's not his voice." Well, who's an expert these days? I mean, what is a judge going to do from a Daubert perspective? Are they going to have a hearing to determine whether or not that's a real expert in the field?
And to your point with the images and videos, are we watermarking them yet? Do we have the necessary audio tags that are tagging the data? This is really scary stuff.
Most domestic cases, most civil cases, have at least one side without a lawyer. There's no money to go hire these experts, to find them and fly them down. So, this is what scares me the most.
Switching gears a little bit to the attorney side of things, I was reading your write-up of the ABA TECHSHOW, and you talked about how before getting into AI, attorneys need to understand the current processes they have in place. How do you recommend they do that? How can an attorney assess whether an AI-powered tool is right for them?
Again, I go back to how lawyers and judges in the justice system have been doing this forever. This isn't new. Like, should I use the fax machine? Should I use email? There are still people that say, "Don't use email, pick up the phone and call the person, go get a cup of coffee and visit with opposing counsel, knock off this email stuff."
I think it’s both.
You have to learn how to use email, and you have to learn how to pick up the phone. We like to make this very linear decision, and it's either yes or no, good or bad. Do you use PowerPoint instead of the foam board? Well, do you know how to use PowerPoint and present it to the jury? If you don't, don't use PowerPoint. Use the foam board, and then go learn how to use PowerPoint so you can become a better trial lawyer who can present in court.
"Don't go buy in wholeheartedly on the tech and think it's going to solve every problem. The technology is a tool to be used in certain situations, and it's a tool to be avoided in certain situations."
Should you use the iPad so you can flick it up to the screen? Well, if you don't know how to use it, don’t. Use the ELMO in court, but then go learn how to use the iPad. And don't use it all the time. Sometimes it's better to just hand the physical object to the jury or hold that actual photograph in front of them.
You really need to understand how to use these tools. Let's say you’re an attorney who's been doing product liability for 20 years. You can say "Hey, ChatGPT, draft 20 interrogatories regarding seat belt malfunctions." You don’t give any private information.
You can get it to spit out a Word document, cut and paste, and go, "Yes, yes. No, no. Like it, don't like it," because you've been doing this forever, and you just saved 30 minutes, as opposed to going into an old file and cutting and pasting. It's just a much more sophisticated cut and paste for a lawyer that's been doing it for 20 years, in certain situations. I wouldn't write client letters with it, because if you don’t turn off the feature that retains your chat history, you're training a large language model on your client's information.
But if you're an associate that's been practicing for one, two, or three years, maybe you don’t do something like the example I just gave. Still, learn how to use it so you can be of benefit to the other lawyers in your firm.
ChatGPT-3.5 was my 15-year-old, saying, "Dad, I'm telling you, this is the answer." And you can go, "Yeah. Get out of here, kid. That's not right." And it goes, "No, Dad, I'm right," because it's obstinate and convinced it's right.
But now my son's 18, and he's ChatGPT-4. And so he comes in, and if I'm not paying attention or I'm tired and don't feel like dealing with it, it’s going to be very convincing. And I'll be like, "You know, that's good enough." And then I’ll file it.
You have to understand the nuance of law and the practice of law before you just accept these outputs provided by generative AI. You're not going to really understand and build that scarring that you need to understand the nuance because the law is nuance. The practice of law is nuanced.
With this next generation of attorneys that are just now coming out of law school, generative AI might very well be part of their daily legal practice. What's something you think they can do to best educate themselves on the impacts of generative AI? Do you see the need for them to still be able to do the manual work you referred to? Or do you think that they can just go right into the technology?
I think they need to do the manual work.
Back in the day, to do legal research, you had to actually go to the books, look at the cases, and shepardize them, meaning you'd have to physically find out: is this case still a valid case? Because that book is 12 years old, and shepardizing it means I've now looked to see whether it's still good law.
Today, you just use Westlaw. Westlaw will tell you no good, still good, red flag, yellow flag. Do I think you need to go learn how to shepardize with the books? No. Do I think you need to understand the concept of how to do the research? Yes. Going through the books taught me how to use headnotes. It taught me what search terms really look like. But I don't have to spend a year on the hardbacks and shepardizing physically anymore; I just use Westlaw.
So, my point would be the same for the new lawyers. Do I need to spend as much time on everything that I did as a young associate when this technology didn't exist? No. But I better go learn the basics first.
Don't go buy in wholeheartedly on the tech and think it's going to solve every problem. The technology is a tool to be used in certain situations, and it's a tool to be avoided in certain situations.
Just to wrap things up with the last question, what's something you wish other judges would know about generative AI? How do you wish you would see it incorporated in the court moving forward?
Generative AI is the shiny new object that's starting a conversation, and I love that. But can we go and get rid of the wire baskets in the courtrooms first? Can we get everybody e-signature capability? Can we get everybody an actual Office 365 account? And please don't think I'm suggesting that we need a Microsoft product, or that I'm a spokesman for them.
My point is, we still have judges that use free Gmail accounts as opposed to products with FedRAMP certification. There are so many issues that we should be going back to instead of sitting here saying generative AI is going to solve the world.
Go back to the basics. Optimize what we have, implement what we don't have. Generative AI is going to do nothing for us if we don't have the right security levels and workflows in place.
I don't care if anybody knows about generative AI today. There are going to be some people that need to learn about it, but let's get technologists in the courtrooms first so they can design a more efficient system.
We haven't even talked about the record, how courts take records, and how they’re still being saved on CD-ROMs in certain locations. And they don't even know the difference between on-prem and off-prem, or how to push record and get a transcript in a timely fashion that people can afford so they can take a writ or an appeal. All of that could be solved with the right software, the right microphone placement, and the right on-prem versus off-prem storage that we can then layer AI on top of. But in three to five years, I don't know where we're going to be with court reporters. So, let's go back to the basics first.