Responsibly Diving into Generative AI with Judge Xavier Rodriguez

by Justin Smith

As generative AI promises to become more ingrained in the legal system and the daily practice of attorneys, judges are playing a prominent role in ensuring their courtrooms remain a level playing field.

And while we’ve already seen some judges try to guarantee this through standing orders restricting generative AI’s use, there have also been judges championing its potential.

Judge Xavier Rodriguez is a U.S. District Court Judge for the Western District of Texas, and one of those advocating for a legal system where generative AI is a tool that attorneys, pro se litigants, and even judges can responsibly take advantage of.

It's a topic he cares about enough that he sat down to speak with Everlaw during a brief recess in an ongoing trial.

Judge Xavier Rodriguez

Where does your interest in technology come from?

Some of the geekiness runs in the DNA. I have an identical twin who just retired as a chief technology officer for a hospital chain in Ohio, and so I think some of my interest in technology is that shared DNA geekiness rubbing off.

The other part of my interest is the real-world experience I've gained, where I've seen how important technology can be.

For example, back around 2001 when I was still practicing as a lawyer, we had a case where there was an email that our client claimed contained proof that the other side had consented to a contractual arrangement.

Long story short, at the end of the day, we could never find that email.

And I thought, there's just got to be some better way to find this sort of stuff, since this is going to be the wave of the future. Even back then, people were creating and agreeing to huge deals over email. This case in particular was worth several million dollars, and people were communicating about it through email.

That experience helped me realize that technology was going to be something we were all going to have to learn more about.

And with generative AI specifically, was it that same initial interest in technology that sparked your intrigue?

It was. I keep my own little notes about different types of cases and developments. And once generative AI started taking off, I went back through my old notes and thought, “How did I miss this?”

I found something in my notes from about four years ago that mentioned generative AI, but of course, I didn't pay any attention to it because at the time it was sort of a pie-in-the-sky idea. No one thought it would go anywhere that fast, so I disregarded it. And then, as it did for everybody else, it just sort of took off and caught me completely by surprise.

Then the president of the State Bar of Texas asked me to be on the AI Task Force, and I went all in.

I also wanted to touch on your paper for The Sedona Conference Journal, which I'm sure you've talked a lot about. I was curious what compelled you to write the paper, and what you made of the recognition it received?

[Editor’s note: In this paper, Judge Rodriguez presents a comprehensive examination of how the rapid growth of artificial intelligence is poised to reshape the legal profession. Read it here.]

Of course, I didn't write it for recognition, but I’ve been glad to see people engage with it. The genesis of the paper was the notes I started taking when I got appointed to this task force. Initially, it was all just research.

Our AI task force has been a little on the slow side. So frankly, when I got a little frustrated by how slow we were going, I decided to go off on my own. I took all my notes that were already heavily annotated, and that was the birth of the Sedona paper.

In the paper, you discussed generative AI and how it will help with access to justice and with pro bono attorneys who are under-resourced. Could you talk a little more in depth about that, and about the advantages and disadvantages technology can have for access to justice?

I want to put out some qualifiers: generative AI won't just magically solve access-to-justice issues. A whole bunch of other things are going to have to happen along the way, in tandem with the technology, to really effect change.

For example, the ethics committees of state bars all across the country are going to have to figure out and give guidance to pro bono providers about how to use chatbots as an auxiliary to their provision of legal services.

"I'm all in. I think the use of AI as a tool, as a first draft that needs to be verified, can potentially offer a lot of cost savings."

I do see a lot of value here. For example, legal aid providers are given a whole bunch of work with limited resources, so they need to augment their work with technology.

Now, it's going to become an issue whether or not this technology can be used in the way someone like me is envisioning it. I'm envisioning that we're going to have legal aid providers use generative AI for things like answering basic preliminary questions through chatbots. I think there’ll be a lot of value in using the technology just to keep clients informed.

Going down the spectrum of legal services, using chatbots might not always be the answer for providing initial quasi-legal advice to people for different situations. That’s where it can get a little more problematic. Is the bot engaged in the unauthorized practice of law? Are any statements made by the bot attributable to the legal aid provider? How is that going to be monitored for accuracy? Are attorney-client relationships being formed through those interactions?

We have a lot of legal ethics issues to work through in tandem with the technology issues.

When it comes to pro se litigants or pro bono attorneys actually using the technology in the courtroom, courts might not have all the resources needed to deal with that. What do you see as the immediate need, from the court's perspective, in handling something like a pro se litigant using generative AI?

Two things there. One, clerks' offices, whether in federal or state courts, ought to consider whether there's any way to provide guidance on procedural steps for pro se litigants.

Courts and clerks' offices try to be user-friendly by putting information on their websites, but pro se litigants may or may not be able to read and understand a lot of that material. That's an instance where having a conversant chatbot deliver that same kind of information would be really helpful.

Clerks' offices need to start exploring that kind of availability.

Now, the second part of the question is, what are we going to expect pro se litigants to do now in terms of filings? I fully expect that pro se litigants are going to go use ChatGPT or some other AI tool in drafting their complaints, petitions, motions, and briefs.

And that could be good. It might be better than the scribble that we get sometimes. Maybe it'll make what they're trying to allege more understandable.

There could be a lot of value in pro se litigants using this technology, but clerks' offices and courts are going to need to advise them on how to use it. Litigants need to learn how to check the accuracy of the output. You can't just type in a prompt, get a response, paste it into a motion, and file it, because it may not be accurate.

But we're in a circular world here, right? So, how does a pro se litigant check the accuracy? Maybe we need to be providing more resources in clerks' offices about where to check the accuracy of the cases the AI tool has cited. I think we're going to have to do some extra hand-holding.

The other concern I have about pro se litigants using AI tools is it may give them an overinflated idea of the strength of their case. And that might be problematic.

Let's talk about attorneys. You’d think by now that we would all know we can't just take whatever our associate or our intern or our law clerk drafts and send it to a court for filing, as is, without checking it. And somehow, when it comes to these AI tools, it's taken at least a dozen mistakes on the part of attorneys before the message seems to get across. We really need to use these things as tools. They're not final products.

I'm all in. I think the use of AI as a tool, as a first draft that needs to be verified, can potentially offer a lot of cost savings.

It can get us to be a little more creative and think outside the box.

"Lawyers are only going to be able to be cautious for so long, because I think at some point they're going to find themselves at a competitive disadvantage if they don’t adopt it."

We could use these AI tools and prompt them to act in a different character, like the character of the judge or opposing counsel, and get some different viewpoints that we may not have considered. And then, when we start editing the draft the AI tool provided, we might end up with more well-rounded material.
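[Editor's note: As a hypothetical illustration of the role prompting Judge Rodriguez describes, a lawyer might wrap a draft in a prompt like the sketch below. The draft text and prompt wording are illustrative placeholders only, not a recommended or vendor-specific workflow.]

```python
# Hypothetical illustration of "role" prompting: asking a generative AI
# tool to critique a draft from the perspective of opposing counsel.
# The draft text and prompt wording here are placeholders.
draft = "Plaintiff's motion to compel argues that ..."  # your draft goes here

prompt = (
    "Act as opposing counsel. Identify the three weakest points in the "
    "following draft and explain how you would attack each one:\n\n"
    + draft
)
print(prompt)  # this prompt would then be submitted to the AI tool of choice
```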

So I see a lot of value. It just comes down to using it responsibly.

In the ediscovery world, I see a lot of value coming down the pike with ediscovery vendors now embedding AI tools.

In the world I would like to see, and I'm not sure we're there yet, we would have two phases of discovery. There would be an initial mandatory disclosure of relevant, non-privileged hot documents, which we would share with the opposing side. And if the case can't be resolved at that point, we'd move to more traditional discovery, subject to Rule 26(g), proportionality, and all the other factors.

I see AI and ediscovery as a way that we can find responsive documents faster and get them produced faster and do early case assessments and potential settlement discussions earlier on. That's my dream world.

For organizations that have banned the use of generative AI among their attorneys, do you think those attorneys are now at a disadvantage compared to the firms that are encouraging its use and embracing it?

I'm not going to criticize them. I understand the caution; we as lawyers are just cautious. The lawyers, law firms, and organizations that are banning it want to more fully understand where their data is being kept, how it's being safeguarded, and what's happening to their prompts. I think AI tool providers need to offer a lot clearer answers to bring down the anxiety some might be feeling about adopting these tools.

Going back to state bars and their ethics officials, we need a lot more ethics guidance about what's permissible and under what parameters. I think once we start getting there, we're going to see a lot more adoption.

Lawyers are only going to be able to be cautious for so long, because I think at some point they're going to find themselves at a competitive disadvantage if they don’t adopt it.

Transitioning to the court side of things, a number of courts have either proposed rule changes or have issued standing orders about the use of AI in filings and documents prepared for the court. What do you think about these, and how do you see those orders affecting the use of AI among attorneys?

Those orders aren’t very helpful. I think a lot of the orders are kind of inarticulate. They talk about artificial intelligence. They mention AI.

AI is embedded in a lot of things already. It's embedded in Westlaw's new tool, Lexis's new tool, Grammarly, and I can go down the list. Do we really want to be notified every time somebody uses Westlaw Precision, for example? That's probably unnecessary and unhelpful to have a mandatory disclosure about that.

"High school and college educators especially are becoming very concerned about AI tools and whether our kids are going to learn anything or are just going to default to these tools. I take the attitude that this is going to be part of their future practice as lawyers. They're going to need exposure to AI."

Prohibiting the use of AI is even worse, and some judges are doing that. And then one judge has gone even further than everybody else and is requiring prompts to be saved. That's really putting the cart before the horse there.

When I talk to my fellow colleagues, I generally tell them this is inadvisable.

You're unnecessarily chilling practice and experimentation with these tools. What I've been recommending is that if courts are going to do anything, it ought to be something generic, like declaring that all litigants, both lawyers and pro se litigants, have a responsibility to ensure the accuracy of their filings, and leave it tech-agnostic. That's been my recommendation.

Do you see judges themselves using these tools?

Apparently, judges are already doing it.

I spoke to about 200 state judges in Texas three weeks ago, and I asked the question, "How many of you have been using or experimenting with an AI tool?" I was astonished to see the number of hands that went up, probably three-quarters of the group. Now, I didn't ask them how they're using it. But judges were out there already experimenting.

There's some value there, especially for the state judges. Many state judges across the country are not even given a law clerk as a resource. So this is a tool, not a final product, but a tool in assisting with the drafting process that could be very valuable.

I hope we start seeing legal-specific RAG [Retrieval Augmented Generation] AI tools being rolled out, so we can have a lot more confidence in the verifiability and accuracy of the responses. Westlaw and Lexis, for example, are coming out with legal-specific RAG tools. I think that's going to be a big help.
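[Editor's note: For readers unfamiliar with the retrieval-augmented generation pattern Judge Rodriguez mentions, the sketch below shows the core idea: retrieve passages from a vetted corpus, then constrain the model to answer only from those passages and to cite them by name, which is what makes the output checkable. The corpus, keyword-overlap retrieval, and prompt wording are simplified, hypothetical stand-ins; commercial legal RAG tools use curated case databases and far more sophisticated search.]

```python
# Minimal, illustrative sketch of retrieval-augmented generation (RAG).
# Everything here is a hypothetical stand-in for a real legal RAG system.

# A tiny stand-in for a vetted legal corpus (all entries are invented).
CORPUS = {
    "Case A (hypothetical)": "Discusses proportionality limits on discovery requests.",
    "Case B (hypothetical)": "Addresses sanctions for filings citing non-existent cases.",
    "Case C (hypothetical)": "Covers certification obligations under Rule 26(g).",
}

def retrieve(question: str, k: int = 2) -> list[tuple[str, str]]:
    """Rank corpus entries by naive keyword overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        CORPUS.items(),
        key=lambda item: len(q_words & set(item[1].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_grounded_prompt(question: str) -> str:
    """Assemble a prompt that instructs the model to answer only from the
    retrieved sources and to cite them by name, so every claim in the
    response can be traced back to a real document."""
    context = "\n".join(f"[{name}] {text}" for name, text in retrieve(question))
    return (
        "Answer using only the sources below, citing each by name. "
        "If the sources do not answer the question, say so.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )

print(build_grounded_prompt("What are the sanctions risks of citing non-existent cases?"))
```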

There's a tool out there that will help us identify non-existent cases, but money is an issue. Our budgets are all tight, and we're not given a lot of money to pay for subscriptions to these tools. That's going to be an issue. I think ultimately we have to be adopters too, and we're going to need funding to get these tools.

You also teach. How are you approaching AI with your students?

High school and college educators especially are becoming very concerned about AI tools and whether our kids are going to learn anything or are just going to default to these tools. I take the attitude that this is going to be part of their future practice as lawyers. They're going to need exposure to AI.

I tell my students they’re allowed to use an AI tool in my course in response to any of the written assignments I give.

But I have the expectation that it's going to be used as a tool for a first draft, and then strengthened with references to cases we've talked about in lectures or in the materials. That's how I make sure they have an understanding of the material, as opposed to just wholesale lifting an AI response.

This is going to be the world they live in.

Clients are going to expect lawyers to use these tools to lower costs. We're going to have to expand the discussion of AI in law school classrooms. Corporate attorneys now have to be aware that AI tools are being used for due diligence work, like identifying inconsistencies or aberrations across various clauses and contracts.

This is going to be the world for litigators and corporate attorneys. And we need to be teaching our kids this.

It's just a matter of diving in. I keep telling everybody, “Come on in, the water's fine.”

You just have to get in there and play with the tools. I'm lucky enough that I teach an ediscovery class, and vendors such as Everlaw and others have donated their tools for my class to use. I give my students a real-world request-for-production exercise to complete in my ediscovery class. You just have to dive in.