Probing the Biases and Ethics of Machine Learning

Illustration: A floating robot screens applicant profiles in an abstract digital landscape.

Ethics Lab’s goal of integrating ethics into computer science education isn’t confined to the front gates of Georgetown University. Building on the previous year’s work for Mozilla’s Responsible CS Challenge, Ethics Lab Postdoctoral Fellow Dr. Mark Hanin brought his experience to Vanderbilt University’s Foundations of Machine Learning class, a course for undergraduate and graduate students taught by Professor Catie Chang. Together, Hanin and Chang designed a workshop on the ethics of artificial intelligence focused on issues of bias.

In the first part of the workshop, Hanin and Chang introduced a case study on hiring bias at Amazon, where a machine learning tool built to assist with hiring and recruitment was found to discriminate against women applicants (although Amazon says the tool was never used).

Hanin hoped the workshop would help students consider how biases can creep into the machine learning design cycle. With help from Ethics Lab designers, he and Chang created a virtual worksheet that distilled that cycle into six stages: Defining the Objective; Selection of Data; Selection of Features; Training and Testing; Outputs / User Interface; and Performance Metrics. Working in small groups, students explored how bias might arise, and how it might be mitigated, at each stage, using Amazon’s machine learning tool as the test case. The aim was for students to better appreciate the many ways machine learning can affect people’s lives, and how computer scientists can design systems in more socially responsible ways.
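As a concrete companion to the worksheet’s final stage, Performance Metrics, the short Python sketch below audits a hypothetical screening model’s recommendations for disparate impact using the “four-fifths rule,” a common heuristic from U.S. employment guidance. The sketch is purely illustrative and is not from the workshop materials; the group labels and decision data are invented.

```python
from collections import defaultdict

# Hypothetical (applicant_group, recommended_hire) pairs standing in for a
# screening model's outputs; the groups and numbers are invented.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def selection_rates(decisions):
    """Return the fraction of applicants recommended for hire, per group."""
    totals, hires = defaultdict(int), defaultdict(int)
    for group, hired in decisions:
        totals[group] += 1
        hires[group] += hired
    return {group: hires[group] / totals[group] for group in totals}

rates = selection_rates(decisions)
highest = max(rates.values())
for group, rate in rates.items():
    # The four-fifths rule flags disparate impact when a group's selection
    # rate falls below 80% of the highest group's rate.
    status = "FLAG" if rate < 0.8 * highest else "ok"
    print(f"{group}: selection rate {rate:.2f} ({status})")
```

A gap flagged at this stage would send a designer back to earlier stages of the cycle, such as Selection of Data or Selection of Features, to look for its source.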

“In the case study I wanted students to start to see that bias can arise at many parts of the design cycle, and it’s important as a software engineer to be mindful of this starting with the earliest stages (i.e., defining the objective),” Hanin said.

Chang said that this workshop marked the first time that she had used a design-based method for an active learning exercise. “The exercise sparked discussion about the societal implications of our design choices, and got the students thinking about new ideas for promoting fairness and transparency in ML.”


Roza Bayrak, the teaching assistant for the course and a PhD student in Computer Science, said she appreciated the chance to have a formal discussion about ethics in a machine learning class.

Hanin noted that groups quickly identified potential sources of bias and brainstormed interesting ways to correct for them. Students also made incisive points about how the algorithm’s output should be visualized and presented to non-technical users, including hiring managers and human resources personnel. As one group put it, “It’s important to output more than just a single hire/no hire number” and to “show related outputs as well.”
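One way to picture the students’ suggestion in code is a result object that carries a score, its main drivers, and an explicit caveat instead of a bare hire/no-hire flag. The interface below is hypothetical, not from the workshop; every field name is invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class ScreeningResult:
    """A richer output than a bare hire/no-hire flag (illustrative only)."""
    candidate_id: str
    score: float                 # model score in [0, 1]; advisory, not a verdict
    top_factors: list[str] = field(default_factory=list)  # human-readable drivers
    caveat: str = "Advisory only; a human reviewer makes the final decision."

    def summary(self) -> str:
        factors = ", ".join(self.top_factors) or "none reported"
        return (f"Candidate {self.candidate_id}: score {self.score:.2f}\n"
                f"Key factors: {factors}\n"
                f"{self.caveat}")

print(ScreeningResult("A-102", 0.73, ["relevant internship", "open-source work"]).summary())
```

Surfacing the drivers and the caveat alongside the score gives hiring managers context to question a recommendation rather than simply accept it.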

In the final part of the workshop, students posed the open-ended questions about the ethics of AI that were at the top of their minds, prompted by readings Chang and Hanin had assigned in advance on uses of AI in criminal justice, healthcare, and other areas. They raised thought-provoking questions about legislative efforts to improve algorithmic transparency, the ethics of data scraping, and limits on deploying machine learning technologies in particular contexts.

Machine learning already impacts who gets arrested, who sees a given housing or job advertisement, who is eligible for a loan, and much more. As a result, it is crucial that tomorrow’s computer scientists be prepared to discern and address ethical dilemmas in artificial intelligence.

“To ensure that algorithms treat people fairly and benefit society as a whole, it’s essential for those who design, build, test, and refine algorithms to understand the ethical values at stake in AI deployment, including values of fairness, non-discrimination, respect, and human dignity,” Hanin said.