
How to Regulate Artificial Intelligence

January 16, 2024

[Photo: Cary Coglianese, Edward B. Shils Professor of Law and Professor of Political Science]

Regulators should factor in the dynamic nature of machine learning when proposing AI regulations, writes Prof. Cary Coglianese.

At The Regulatory Review, Cary Coglianese, Edward B. Shils Professor of Law and Professor of Political Science, explores the regulation of artificial intelligence (AI), emphasizing that regulators must be agile, flexible, and vigilant to address differences in machine-learning algorithms.

The essay is adapted from Coglianese’s article, “Regulating Machine Learning: The Challenge of Heterogeneity,” published by Competition Policy International.

“Machine learning’s future is a dynamic one and regulators need to equip themselves to make smart decisions in a changing environment,” writes Coglianese, the nation’s foremost expert on governmental use of AI. “This means regulators must remain engaged with the industry they are overseeing and continue learning constantly.”

From the article:

Artificial intelligence increasingly delivers valuable improvements for society and the economy. But the machine learning algorithms that drive artificial intelligence also raise important concerns.

The way machine-learning algorithms work autonomously to find patterns in large datasets has given rise to fears of a world that will ultimately cede critical aspects of human control to the dictates of artificial intelligence. These fears seem only exacerbated by the intrinsic opacity surrounding how machine-learning algorithms achieve their results. To a greater degree than with other statistical tools, the outcomes generated by machine learning cannot be easily interpreted and explained, which can make it hard for the public to trust the fairness of products or processes powered by these algorithms.

For these reasons, the autonomous and opaque qualities of machine-learning algorithms make these digital tools both distinctive and a matter of public concern. But when it comes to regulating machine learning, a different quality of these algorithms matters most of all: their heterogeneity. The Merriam-Webster Dictionary defines “heterogeneity” as “the quality or state of consisting of dissimilar or diverse elements.” Machine learning algorithms’ heterogeneity will make all the difference in deciding how to design regulations imposed on their development and use.

One of the most important sources of machine learning’s heterogeneity derives from the highly diverse uses to which it is put. These uses could hardly vary more widely….

Coglianese, Director of the Penn Program on Regulation (PPR), is a globally renowned expert on regulatory law, analysis, and management who has produced extensive action-oriented research and scholarship. He has consulted with regulatory organizations around the world and is a founding editor of the peer-reviewed journal Regulation & Governance. He also created and continues to serve as the faculty advisor to the PPR’s flagship publication, The Regulatory Review.

Read Coglianese’s full piece at The Regulatory Review.

Read more of Coglianese’s scholarship on governmental use of artificial intelligence.