
Addressing Bias in AI

September 18, 2023

Digital abstract art of human profile

In “Policy Lab: AI and Implicit Bias,” students propose solutions to address intersectional bias in generative AI.

The rapid expansion of artificial intelligence and the overwhelming popularity of generative AI tools like ChatGPT raise important questions about how algorithms and machine learning models reproduce real-world human biases. The University of Pennsylvania Carey Law School’s “Policy Lab: AI and Implicit Bias” incubates ideas for an intersectional, inclusive approach to artificial intelligence.

Taught by Rangita de Silva de Alwis, Senior Adjunct Professor of Global Leadership, the course engages students in rigorous academic analysis and rich discussions with lawyers, researchers, designers, and international leaders in technology to examine the impact of intersectional bias in generative AI.

The Spring 2023 Policy Lab’s report, “A Promethean Moment: Towards an Understanding of Generative AI and Its Implications on Bias,” explores how bias arises in generative AI applications, demonstrates methods for measuring AI bias, and offers solutions for mitigating the impact of technology that reproduces human bias and exacerbates inequity.

Engaging Interdisciplinary Experts

The Spring 2023 course featured Heather Sussman, Partner and Business Unit Leader for Strategic Advisory and Government Enforcement at Orrick; Sandra Wachter, Professor of Technology and Regulation at University of Oxford; and Kasia Chmielinski, Mason Kortz, and Aaron Gluck-Thaler from the Berkman Klein Center for Internet & Society at Harvard University.

“These interdisciplinary experts represented a vast array of expertise in AI, including academics, industry practitioners, founders, and investors,” noted Shashank Sirivolu L’24, Research Assistant for the AI Policy Lab. “The advent of generative AI presented the class with an opportunity to engage in groundbreaking work to address the challenges and limitations of new technologies.”

Students conducted a real-time analysis of the White House’s “AI Bill of Rights,” as well as various EU proposals. The student scholars also considered different regulatory frameworks and engaged in a careful examination of gender biases in Large Language Models (LLMs) to develop new strategies for creating more inclusive, equitable AI applications.

Impactful Student Scholarship

In “A Promethean Moment,” student-authored essays call attention to a range of urgent issues, many of which are already the subject of intense public debate. The report explores social media consent disclosures and the potential exploitation of content creators, racial and gender disparity in AI-generated art and implications for copyright law, big data and cross-border data exploitation, the pitfalls of a self-regulated AI industry, and the risks of LLMs trained on biased data sets.

“This report is a valuable primer for an academic, lawyer, or technologist interested in understanding generative AI,” Sirivolu said. “The report explores the legal, ethical, and social implications of bias in generative AI, including how bias arises, how it can be measured, and how it can be mitigated.”

In the report, students propose concrete solutions, such as implementing new national strategies around bias-free data collection, as well as recommendations for investing in a diverse tech workforce that understands the importance of staying vigilant with respect to bias and AI technological advancements.

“Ensuring that the law keeps pace with both technical developments in machine learning and the unique algorithmic harms posed by these new technologies requires careful thought about the role of regulation,” said Nabil Shaikh L’24.

“Different regulatory frameworks—private enforcement, management-based approaches, and self-regulation—might be better suited to different types of harms stemming from AI tools, such as employment discrimination or data breaches. Identifying the appropriate type of regulation is now a critical task for scholars, policymakers, and organizations that use AI.”

Read the full report.