
In “Policy Lab: AI and Implicit Bias,” students propose solutions to address intersectional bias in generative AI.
The rapid expansion of artificial intelligence and the overwhelming popularity of generative AI tools like ChatGPT raise important questions about how algorithms and machine learning models reproduce real-world human biases. The University of Pennsylvania Carey Law School’s “Policy Lab: AI and Implicit Bias” incubates ideas for an intersectional, inclusive approach to artificial intelligence.
The Spring 2023 Policy Lab’s report, “A Promethean Moment: Towards an Understanding of Generative AI and Its Implications on Bias,” explores how bias arises in generative AI applications, demonstrates methods for measuring AI bias, and offers solutions for mitigating the impact of technology that reproduces human bias and exacerbates inequity.
Engaging Interdisciplinary Experts
The Spring 2023 course featured Heather Sussman, Partner and Business Unit Leader for Strategic Advisory and Government Enforcement at Orrick; Sandra Wachter, Professor of Technology and Regulation at University of Oxford; and Kasia Chmielinski, Mason Kortz, and Aaron Gluck-Thaler from the Berkman Klein Center for Internet & Society at Harvard University.
Students conducted a real-time analysis of the White House’s “AI Bill of Rights,” as well as various EU proposals. The student scholars also considered different regulatory frameworks and engaged in a careful examination of gender biases in Large Language Models (LLMs) to develop new strategies for creating more inclusive, equitable AI applications.
Impactful Student Scholarship
In “A Promethean Moment,” student-authored essays call attention to a range of urgent issues, many of which are already the subject of intense public debate. The report explores social media consent disclosures and the potential exploitation of content creators, racial and gender disparity in AI-generated art and implications for copyright law, big data and cross-border data exploitation, the pitfalls of a self-regulated AI industry, and the risks of LLMs trained on biased data sets.
“This report is a valuable primer for an academic, lawyer, or technologist interested in understanding generative AI,” Sirivolu said. “The report explores the legal, ethical, and social implications of bias in generative AI, including how bias arises, how it can be measured, and how it can be mitigated.”
“Ensuring that the law keeps pace with both technical developments in machine learning and the unique algorithmic harms posed by these new technologies requires careful thought about the role of regulation,” said Nabil Shaikh L’24.
“Different regulatory frameworks—private enforcement, management-based approaches, and self-regulation—might be better suited to different types of harms stemming from AI tools, such as employment discrimination or data breaches. Identifying the appropriate type of regulation is currently critical for scholars, policymakers, and organizations that use AI.”