Written by Veda Handa LLM’22, Dean’s Merit Scholar
This spring, I joined a vibrant group of nearly 40 students in the “Policy Lab on Artificial Intelligence and Implicit Bias” at the University of Pennsylvania Carey Law School, hoping to learn how algorithmic systems might be freed of structural biases. Throughout the semester, the Lab brought together expert speakers from diverse fields and countries to share their experiences of working with AI systems. It was a pleasure to see computer scientists, engineers, lawyers, policy-makers, and business leaders join hands in a discourse on systemic biases in AI and how they can be mitigated. This was the biggest revelation of the Lab: the articulation of a shared goal of building fairer and more equitable AI systems acted as a bridge not just between professionals, but also between academic institutions, private-sector enterprises, and regulatory bodies.
The collaborative nature of the Lab further demonstrated the value of interdisciplinary cooperation in reaching policy solutions. Our guest speakers showed how multi-stakeholder engagement is vital for surfacing the societal impacts of AI tools, and how solutions to those impacts can only emerge through partnership across various interest groups. For instance, one of the Lab’s guest speakers, Sandra Wachter, Associate Professor and Senior Fellow at the Oxford Internet Institute, explained how counterfactual explanations of automated decision-making can be a step toward explainability, transparency, and accountability in such determinations. Through the lens of a data scientist, she analyzed how the data sets upon which these decisions rest might be redesigned to enable the generation of counterfactuals. At the same time, in her capacity as a legal scholar, she discussed the inadequacy of prevailing regulatory frameworks in guaranteeing a meaningful right to explanation in AI. In this way, Prof. Wachter and her colleagues have taken a multidisciplinary approach to the opacity of AI.
We were also privileged to hear European data scientist Alexa Pavliuc speak of the disastrous impacts of AI-run disinformation campaigns in developing countries. Meanwhile, leading policy-maker Dr. Virgillo Almeida, the former Secretary of Technology and Innovation in Brazil, emphasized the need to include underrepresented states and communities from the Global South in AI-regulation efforts.
Our conversations with technologists such as Deborah Raji further taught us that implicit biases in technology can run across gender, race, and class divides. Any solution for removing such biases must account for the multiple facets of individual identity and the ways in which biases may be intersectional.
With such thought leaders paving the way, my fellow students and I collaborated on understanding how AI systems contribute to prevailing gender inequalities in the workforce. We began with an examination of the automated filtering tools used to screen job applicants during recruitment, and set out to understand whether and how the pre-existing biases of the people designing such systems seep into the technologies used in hiring. Using empirical research tools, the Lab surveyed young professionals between the ages of 20 and 35 to collect primary data on their experiences with online recruitment portals, job-search websites, and applications. We then prepared a class report setting out our key findings from the data collected, including the algorithmic biases that we identified as having bled into AI-driven recruitment mechanisms.
Moreover, each student prepared an individual policy brief addressing the pressing question of implicit biases in AI and suggesting appropriate policy measures to resolve it. With a focus on the Global South and gender equality, we attempted to identify a new generation of biases, such as those based on immigration status, zip code, and address, creeping into AI tools. We further discovered that such biases have trickled into contexts as varied as housing, social media platforms, and primary education. Our class report also included our human rights-centric recommendations for unmasking biases found across these diverse use cases of automated systems. In this way, we aimed to contribute to ongoing endeavors to make AI systems and their applications more inclusive and pluralistic.
We presented our report to Steve Crown, Vice President and Deputy General Counsel for Human Rights at Microsoft Corporation. In Crown, we found a partner who encourages the deployment of emerging technologies in a manner that is cognizant of the impact of these tools on individuals and society, and that works toward the empowerment of communities.
With their varied educational and social backgrounds, my classmates embodied the lessons that our distinguished guest lecturers brought to the class. During the semester, we worked together to draw on our global perspectives and wide-ranging experience in law, technology, and the humanities to critically engage with ideas on AI design, application, and regulation. Our journey has culminated in the formation of an international fellowship of future policy-makers. We recognize the duty that each of us at the Lab bears to give voice to all communities and to bring a positive impact to the conversation on AI governance.
We stand on the shoulders of our beloved professor, Dr. Rangita de Silva de Alwis, Senior Adjunct Professor of Global Leadership, who has presented us with wonderful learning opportunities through the Lab. The mind behind this mammoth experiment, she has developed a new pedagogical tool for law schools, in which the classroom serves as a space for incubating new policy solutions and reaches beyond traditional legal training. By connecting us with pioneering thinkers, she has made dialogue the primary means of instruction and encouraged us to build on the experiences of our guest speakers. I hope my classmates and I can carry her legacy of experiential labs to law schools in our home countries and across the world. This teaching methodology can be a roadmap for re-imagining legal education and better preparing law students for their responsibilities as future lawmakers.
The Lab has also provided us with a dynamic illustration of meaningful multi-stakeholder engagement in policy-making. Over the course of this spring, my eyes have been opened to the possibility of a true partnership between information technologists and attorneys. My classmates and I bear the charge of advancing this relationship as we assume our responsibilities as policy-makers. We remain committed to carrying our lessons from the Lab forward in making AI governance a more participatory and representative process across the world.
Watch the “Policy Lab: AI and Implicit Bias (2022)” documentary: