Recent cases raise questions about the ethics of using AI in the legal system

STEVE INSKEEP, HOST:

Earlier this year, a Colorado judge suspended a lawyer for using ChatGPT to draft a legal document. This was not good because when you request information from large language models, as they're called, they sometimes give you facts and sometimes fakery, what are called hallucinations. And the chatbot made up legal citations for that document. They were offered in a real-life court case. The episode points to a bigger question - how, if at all, should artificial intelligence influence the courts? Andrew Miller says we should begin by noticing that AI is already there.

ANDREW MILLER: I think it's fair to say that artificial intelligence of some kind or another has been a part of how lawyers do their job for quite a while, even if they weren't really aware of it. We rely on electronic archives. We have for decades. Famous ones are Westlaw and LexisNexis.

INSKEEP: Miller is a lecturer and director of the Yale Law School Center for Private Law. He says AI may not be new in the law, but its role is expanding.

MILLER: I think there's a lot that's different. When you use an AI tool, be it ChatGPT or something else, to gather information, essentially, you're at the front end of the legal research process as a lawyer. You're gathering facts and canvassing the law. But those are raw materials, and it's your job as a lawyer to use your lawyerly skills and knowledge and training - in fact, you have an ethical duty - to provide competent legal representation. When you have a legal brief that's written by AI, you've really delegated that lawyerly duty to someone else. Now, interestingly, delegation is also part of the legal profession. We delegate to humans all the time. And it's actually OK if a summer intern who isn't a lawyer drafts something that later becomes a brief. But the key is that the buck stops with the lawyer who files the brief.

INSKEEP: To what extent does someone have to think about what a large language model produces? I'm thinking about the way that we as consumers are continually given these terms of service that we're supposedly going to read and click I accept, and of course we glance at it and click I accept. You have to do something more than that as a lawyer, don't you?

MILLER: You're exactly right. A professor colleague said to me, you know, when a doctor uses an MRI machine, the doctor doesn't necessarily know every technical detail of the MRI machine, right? And my response was, well, that's true, but the doctor knows enough about how the MRI works to have a sense of the sorts of things that would be picked up on an MRI, the sorts of things that wouldn't be picked up. With ChatGPT, we don't have - at least not yet - a particularly well-developed understanding of how our inputs relate to the outputs.

INSKEEP: Does that imply that maybe this technology should not be used in the law at all?

MILLER: I wouldn't necessarily go that far, but I would say that at this juncture a lot of caution is warranted. There's actually been a lot of action in this area already. So in federal courts throughout the United States, the courts issue local rules, which basically say, here are our unique twists on the general rules that govern your behavior in court. My understanding is some of them require disclosure when you use these technologies. Others simply reiterate the existing standard, making it clear that the existing ethical rules still apply.

INSKEEP: Do you consider it a very real possibility, if this is not minded properly, that a large language model could put somebody in jail who doesn't belong there, could cause someone to lose a case that they should have won, some true injustice?

MILLER: It's the same flavor of risk as the risk to a client of a lawyer not supervising his or her underlings properly. The difference, I think, is that the ingredients of good supervision, the things you have to do and not do when delegating certain parts of your lawyering to someone else, have been clearer, I think, in earlier eras. Generally speaking, wherever there's a risk of lawyers doing their job badly, there's a risk to the clients.

INSKEEP: I feel that I hear you trying very hard to be thoughtful and nuanced and careful about this technology that scares a lot of people. And I do wonder if there's any aspect of this that does give you nightmares.

MILLER: I am worried that this technology is going to let loose things I haven't thought of, or we haven't thought of, that we will need to take account of. But I do have some cautious optimism that we are entering not just a stage of technological development at a very fast pace, but also a stage of serious vigilance. And I've been impressed at the speed with which court systems and state bars have at least started to ask these questions. I think the potential is great, but I also think the potential for abuse, unintentional harm, even intentional harm, is also great.

INSKEEP: Andrew Miller is director of the Center for Private Law at Yale Law School. Thanks so much.

MILLER: Thank you, Steve.

(SOUNDBITE OF MUSIC)

Transcript provided by NPR, Copyright NPR.
