Assessing Algorithms for Public Good

October 19, 2023


At The Regulatory Review, Soojin Jeong L’23 advocates for algorithmic impact assessments (AIAs) as a tool to promote accountability without sacrificing the regulatory flexibility that supports innovation.

Following the rapid expansion of AI, lawmakers have scrambled to craft an effective regulatory response to the new technology. The Biden Administration has announced that the world’s leaders in AI development voluntarily committed to seven minimum safeguards aimed at increasing safety, security, and trust in the development of new AI applications.

In a recent article published by The Regulatory Review, Soojin Jeong L’23 advocates for algorithmic impact assessments (AIAs) as a key tool in regulating AI technology, promoting accountability in the development process, and maintaining flexibility to support continued innovation. 

From The Regulatory Review:

This summer, the Biden Administration announced that leading AI developers—including Google, Meta, and OpenAI—agreed to minimum safeguards to promote safe and trustworthy AI. This announcement shows that both the public and private sectors are invested in understanding AI’s vulnerabilities and how to address them.

So far, algorithmic accountability has occurred—to mixed effect—through investigative research and reporting, voluntary standards, scattered audits, government enforcement actions, private litigation, and even hacking. But a comprehensive regulatory regime has yet to emerge, in part because it is challenging to design rules that can account for the wide variety of AI technologies and contexts that present different levels of risk.

At the same time, doing nothing is increasingly untenable for regulators. The lurking dangers of AI are easy to see. AI systems can lead to people being denied a home loan, a job opportunity, and even their freedom. With the recent release of ChatGPT and generative AI tools, heightened concerns have swept through society about AI’s potential to undercut or eliminate jobs.

At this stage of AI development, algorithmic impact assessments (AIAs) can be a key tool to promote accountability and public trust in algorithmic systems while maintaining flexibility to support innovation.

The U.S. Congress has expressed interest in AIAs, proposing legislation to require them since at least 2019 and most recently last summer in a bipartisan, bicameral data protection bill. AIAs would compel AI developers to consider and define key aspects of an algorithmic system before releasing it in the world.

Requiring AIAs in the United States would be a timely and politically viable intervention because it would support two important goals: responsible decision-making in organizations and informed policymaking in the government.

For example, AIAs can help organizations by providing a structure for gathering information about social impacts in the technical design process to make responsible AI decisions.

The Regulatory Review is a daily online publication that provides accessible coverage of regulatory policymaking and enforcement issues across a full range of regulatory topics and from a variety of perspectives.

Launched in 2009 and operating under the guidance of Cary Coglianese, Edward B. Shils Professor of Law and Professor of Political Science, The Review is edited by students at Penn Carey Law. It is part of the overarching teaching, research, and outreach mission of the Penn Program on Regulation (PPR), which draws together more than 60 faculty from across the University of Pennsylvania.

Read the full article at The Regulatory Review.