Diverse Women Leading the Debiasing of AI and Addressing Historical Inequities

March 01, 2021

To kick off Women’s History Month, the AI & Implicit Bias Policy Lab taught by Senior Adjunct Professor of Global Leadership Rangita de Silva de Alwis at the University of Pennsylvania Carey Law School is highlighting stories about diverse women who are changing the face of innovation with their extraordinary leadership in the tech industry. de Silva de Alwis is a nonresident leader in practice at Harvard’s Women and Public Policy Program (WAPPP) and Hillary Rodham Clinton Fellow on Gender Equity 2021-2022 at Georgetown’s Institute for Women, Peace and Security (GIWPS).

All of these women generously shared their personal experiences with us and imparted invaluable insights about everything from the subtle biases of machine learning to overt workplace discrimination. One of the most important tools of debiasing is to tell stories, so we share these narratives in order to shed light on the successes of diverse women in tech who are changing the way we interact with and understand AI.

Here, Lindsay Holcomb L’21 and the AI & Implicit Bias Policy Lab share the story of Dr. Mehrnoosh Sameki of Microsoft. The students also wish to thank Professor Rangita de Silva de Alwis for her leadership and guidance.

Dr. Mehrnoosh Sameki – Microsoft

On his first day in office, President Joe Biden rescinded former President Donald Trump’s executive order on immigration, known colloquially as the “Muslim Ban,” which barred nationals of several predominantly Muslim and African countries from entering the U.S. The ban caused extraordinary hardship both for Muslims in the U.S. who were separated from their loved ones and for Muslims living abroad who sought to come to the U.S. for educational and professional opportunities.

As a result of the previous administration’s immigration policies, the U.S. has experienced a significant drain on its talent pool, particularly in STEM fields, which have for decades been bolstered by immigrants, including many from Muslim-majority countries.

According to Eric Rosenblum, Managing Partner at Tsingyuan Ventures, a $100 million fund that invests in Chinese diaspora science and technology entrepreneurs, who spoke with the AI & Implicit Bias Policy Lab last month, technical universities in Canada and the United Kingdom saw significant upticks in international student enrollment during the Trump era, as engineers from the Middle East and Africa were repelled by American hostility toward immigrants.

Historically, one of the primary feeder countries for STEM students at U.S. universities, particularly female STEM students, has been Iran, which produces a far higher proportion of female engineers than the U.S. In Iran, nearly 70 percent of university graduates in STEM fields are women, and many come to the U.S. to earn graduate degrees and take on high-profile technical and managerial roles at American technology companies.

One such engineer is Dr. Mehrnoosh Sameki, who graduated from Sharif University of Technology, Iran’s premier technical university, with a degree in Computer Engineering and came to the U.S. to pursue a PhD in Computer Science. Today, in her job at Microsoft, Dr. Sameki works to eradicate a much more subtle form of prejudice than the overt discrimination of the previous administration’s immigration policies: algorithmic bias.

“AI can perpetuate historical unfairness,” explained Dr. Sameki, speaking to the AI & Implicit Bias Policy Lab last month. Dr. Sameki, who received her PhD in Computer Science from Boston University in 2017 with a focus on fairness, accountability, and transparency in AI, leads efforts to increase the interpretability and fairness of Microsoft’s machine learning platforms, rooting out the data input errors that foster algorithmic bias and ensuring that “AI is deployed in a responsible way, not discriminating against people that are historically discriminated against.”

Since coming to the U.S., Dr. Sameki has grown accustomed to being one of the few women in many of the engineering spaces she has inhabited, so achieving greater diversity, equity, and inclusion in tech has become an important commitment for her. To that end, Dr. Sameki is a member of the non-profit Persian Women in Tech, established in Silicon Valley in 2015. The organization’s mission is to “connect, mentor, and empower Persian women in technology globally,” and it boasts over 20,000 members, most of whom have technical or entrepreneurial backgrounds.

The organization’s work also has personal significance for Dr. Sameki, who immigrated to the U.S. because she felt that first-rate opportunities in science and technology simply did not exist in Iran due to the economic sanctions imposed on the country. Lacking opportunities to build connections with large, multinational technology companies like Microsoft, which do not have offices in Tehran, Dr. Sameki felt that her only choice was to leave.

By connecting Persian women in technology globally, Persian Women in Tech opens new frontiers for young engineers like Dr. Sameki to cultivate professional relationships no matter where they are in the world, mitigating historical inequities that have given the upper hand in tech careers to those physically located near Silicon Valley.

This commitment to inclusivity has also translated into Dr. Sameki’s work at Microsoft, where she leads the product efforts behind the open-source toolkits InterpretML, Fairlearn, and Error Analysis, which aim to make users aware of potential biases in their machine learning models, identify specific unfairness issues, and mitigate those issues.
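
To make the Fairlearn piece of that toolchain concrete, here is a minimal sketch of a fairness assessment using Fairlearn’s open-source MetricFrame API. The labels, predictions, and demographic groups below are entirely hypothetical, and the example illustrates the public library rather than any internal Microsoft workflow.

    # A minimal Fairlearn sketch: disaggregate model metrics by a
    # sensitive feature to surface potential group-level unfairness.
    # All data here is hypothetical and for illustration only.
    import pandas as pd
    from sklearn.metrics import accuracy_score
    from fairlearn.metrics import MetricFrame, selection_rate

    # Hypothetical ground truth, model predictions, and group labels.
    y_true = pd.Series([1, 0, 1, 1, 0, 1, 0, 0])
    y_pred = pd.Series([1, 0, 0, 1, 0, 1, 1, 0])
    group = pd.Series(["A", "A", "A", "B", "B", "B", "A", "B"])

    mf = MetricFrame(
        metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
        y_true=y_true,
        y_pred=y_pred,
        sensitive_features=group,
    )

    print(mf.overall)       # each metric computed over the whole dataset
    print(mf.by_group)      # the same metrics broken out per group
    print(mf.difference())  # the largest between-group gap per metric

A large between-group gap in a metric like selection rate is the kind of specific unfairness issue described above, which the toolkit’s mitigation algorithms can then be applied to reduce.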

As Dr. Sameki explains, “AI is only as unfair as the data put into it,” and as a result, it can perpetuate historical inequities. Ultimately, Dr. Sameki hopes that her efforts will lead to more transparent and accountable algorithms and encourage AI stakeholders to be more wary of the ways in which their machine learning software may be exacerbating latent societal biases and treating some demographics unfairly.