In “AI in Adjudication and Administration,” forthcoming in the Brooklyn Law Review, University of Pennsylvania Carey Law School’s Edward B. Shils Professor of Law and Professor of Political Science Cary Coglianese and co-author Lavi M. Ben Dor L’20 survey and assess the use of artificial intelligence in courts and administrative agencies throughout the United States.
Artificial intelligence (AI) systems rely on digital technologies known as machine-learning algorithms to process and sort enormous volumes of data and then generate and refine predictions based on those data. The increasing sophistication of such tools for decision-making is leading to their adoption in many areas of the economy.
Because of the structure of the U.S. government, the thousands of agencies and courts at the national, state, and local levels are largely free to develop their own policies and protocols on whether to keep records in electronic form, how to make those records available for storage and retrieval, and how to use those records to support their own decision-making.
Coglianese and Ben Dor’s article is the first attempt to take comprehensive stock of such protocols for algorithmic governance throughout the United States. Their project involved extensive research to survey and identify how machine-learning algorithms are currently used by federal and state courts and agencies to support their decision-making.
Controversy around AI centers on the possibility of using it to replace, rather than merely to inform, human decision-making in courts and agencies. As Coglianese and Ben Dor show in their article, government has generally not reached anything close to that concerning scenario — at least not yet.
At present, they find that “no judicial or administrative body in the United States has instituted a system that provides for total decision-making by algorithm, such that a digital system makes a fully independent determination” that takes human judgment “out of the loop.” Moreover, their survey found that no courts are relying on machine-learning algorithms to make fully automated determinations on questions of law or fact.
Courts are, however, increasingly using tools like digitization and algorithmic systems that serve as “building blocks” towards more sophisticated machine-learning technology. Artificial intelligence systems require large quantities of data to analyze in order to discern the patterns and mathematical relationships that inform their predictions. The shift to electronic filing and docketing by federal and state courts has created massive repositories of information about the behavior of litigants and judges.
Courts have also increasingly turned to algorithmically driven risk-assessment tools — albeit ones not yet driven by machine learning algorithms — to assist with decisions about granting bail or parole. Defendants have raised due process challenges to these tools on a variety of grounds; thus far, courts that have grappled with these challenges have found the use of current algorithmic risk-assessments acceptable if they serve as only one of many factors available to inform judicial judgment.
Following the lead of the private sector and some foreign jurisdictions, some state courts have also begun experimenting with online dispute resolution (ODR) systems as a means to resolve simple matters, such as traffic violations or low-conflict family court matters, without judicial decision-making.
“As some researchers have already begun to note,” Coglianese and Ben Dor report, “court systems could take these algorithms to the next ‘level’ of autonomy by integrating artificial intelligence into ODR processes, allowing for increasingly automated forms of decision-making.”
Coglianese and Ben Dor also explain how, despite a slow start in the 1990s, the United States has moved assertively in recent years to take administrative processes online, digitizing and publishing enormous amounts of information.
“In this respect,” they write, “administrative agencies are well along a path that will support greater use of machine learning.” Unlike the courts, administrative agencies are actually starting to deploy machine-learning algorithms.
Some administrative agencies already rely on machine learning for certain uses, such as enforcement targeting. Federal agencies with large-scale inspection and auditing responsibilities, such as the Securities and Exchange Commission and the Internal Revenue Service, have great need for tools that will help them make the most efficient use of their oversight and enforcement capabilities.
Other federal, state, and local agencies have used the predictive power of AI tools to enhance their work in running public programs and services. The Food and Drug Administration is starting to use machine learning to analyze adverse event reports about pharmaceuticals. A variety of U.S. cities and counties are using AI to allocate resources for policing and a wide range of other municipal responsibilities, like traffic control, rodent-bait placement, and child-protective services.
As machine-learning tools become further embedded into administrative and judicial practice, observers have become concerned about the risks they pose. If governments use AI systems that are biased or faulty, those tools could exacerbate social inequalities or lead to unjust results. In addition, “black box” autonomous systems “raise particular concerns about transparency and accountability” because their underlying mechanisms are harder for the officials who rely on them to explain to the members of the public affected by them.
Although AI holds great promise for the future in improving the accuracy, consistency, and efficiency of both adjudication and administration, Coglianese and Ben Dor’s work indicates that governments need to use these new tools with great care.
The paper with Ben Dor is one part of Coglianese’s much larger, path-breaking scholarship on the use of information technology by government. In work supported by the National Science Foundation, Coglianese led some of the earliest research on e-rulemaking, which helped inform the development of the federal online rulemaking portal, Regulations.gov. His more recent work on artificial intelligence also includes “Regulating by Robot” in the Georgetown Law Journal and “Transparency and Algorithmic Governance” in the Administrative Law Review, both co-authored with David Lehr, a graduate of the University of Pennsylvania and Yale Law School.
Coglianese is currently working with Erik Lampmann L’20 on a paper on artificial intelligence and procurement law that will appear in the Administrative Law Review Accord. He is collaborating with Steven Appel L’21 on additional papers on algorithmic governance that are forthcoming in the Cambridge Handbook on the Law of Algorithms and The Oxford Handbook of Administrative Justice. Moreover, with assistance from Appel and Alicia Lai L’21, Coglianese is preparing a report on governmental use of artificial intelligence for the Administrative Conference of the United States, a federal agency where he serves as a public member and chair of its Committee on Rulemaking.
Coglianese is the Director of the Penn Program on Regulation and the faculty advisor to The Regulatory Review and the University of Pennsylvania Journal of Law and Public Affairs. His work focuses on the study of administrative law and regulatory processes, with an emphasis on the empirical evaluation of alternative processes and strategies of regulation and the role of public participation, technology, and business-government relations in policy-making.
Read more about the Penn Program on Regulation and check out its daily, student-led publication, The Regulatory Review, which covers regulatory issues that span the fields of business, education, environment, health, infrastructure, international law, process, rights, and technology.