AI in the Legal Space

June 19, 2024

Cary Coglianese, Edward B. Shils Professor of Law and Professor of Political Science

“The challenge ahead will be to find the best ways for humans and computers to collaborate,” Prof. Cary Coglianese told Judicature.

Judicature recently spoke with Cary Coglianese, Edward B. Shils Professor of Law and Professor of Political Science, about the pros and cons of AI in the legal space.

In “AI in the Courts: How Worried Should We Be?,” Coglianese joins Maura R. Grossman, a professor in the School of Computer Science at the University of Waterloo, and Paul W. Grimm, a retired federal judge and the David F. Levi Professor of the Practice of Law and Director of the Bolch Judicial Institute at Duke Law School, to discuss AI’s potential impact on the legal profession, advisable and inadvisable uses of AI, the potential for AI to increase access to justice, the use of ChatGPT in legal proceedings, and more.

Judicature is a publication of the Bolch Judicial Institute at Duke Law School.

Coglianese is a globally renowned expert on regulatory law, analysis, and management who has produced extensive action-oriented research and scholarship. He is the Director of the Penn Program on Regulation (PPR). He has consulted with regulatory organizations around the world and is a founding editor of the peer-reviewed journal Regulation & Governance. He also created and continues to serve as the faculty advisor to the PPR’s flagship publication, The Regulatory Review.

Q: Views about AI tend toward extremes: Either it will save the world from many of its current challenges, or it will destroy humanity as we know it. Where do you stand? Are you generally more positive or more negative about AI’s potential impact, especially on the legal system?

COGLIANESE: AI won’t be perfect, but the aim should be to have it do more good than bad, and to make the world better, on balance, than it is today. Moreover, any question about how “good” or “bad” AI will be cannot be answered across the board. AI is not a singular technology; it is a proliferation of many varied technologies put to many varied uses. The types of AI algorithms vary, as do the datasets on which they train. Most importantly, the ways that AI algorithms are used vary widely. Some of these uses can be very good, such as detecting cancers or curing diseases through precision medicine. Other uses are good even if seemingly banal, such as helping the U.S. Postal Service sort mail by reading addresses on letters and packages.

AI can also be put to bad uses, such as fomenting political strife through misinformation campaigns or creating fraudulent images or documents. Even then, good AI tools may help spot the frauds and filter out the misinformation.

The highly varied uses for AI tools make it impossible to paint with a broad brush and declare that “AI is good” or “AI is bad.” Furthermore, the reality is that AI is here to stay. The challenge facing society is to ensure that the design, development, and deployment of AI will do more good than bad, and that it improves the status quo. This is where regulation comes in. Society needs ways to govern AI that can equitably reap its benefits while reducing its harms. If we can do that, then we can use AI to make the world better. Along the way, it’s worth remembering that a world dependent solely on humans is imperfect, too. The key is to do better.

Read the full interview at Judicature.