Students Discuss Algorithm Design

Professor Edenberg speaks to students.

In the Responsible CS Team’s second engagement with Professor Maloof’s Introduction to AI course, we explored how normative values inform, and are embedded in, the design of algorithms in key social contexts.  The session was co-led by Professor Elizabeth Edenberg, Senior Ethicist at Ethics Lab, and Professor Maloof.

The class centered on an activity intended to surface conflicting values in creating algorithms for use in complex social situations.  Professor Edenberg introduced the social stakes of algorithms through a discussion of the ethical challenges facing risk assessment tools in the criminal justice system.  Students then spent the bulk of the class thinking through how they would design an algorithm for a context more familiar to them: the college admissions process.  Working in pairs, students were tasked with proposing an algorithmic tool to help the Georgetown Admissions Office make decisions about undergraduate admissions.  Groups were asked to articulate the objective their tool would serve; how broad or narrow its remit should be; which features or attributes of applicants it should take into account; and how those attributes should be ranked.
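
To make those design questions concrete, the sketch below shows the simplest form such a tool could take: a weighted score over a fixed set of applicant features.  It is a hypothetical illustration in Python, not anything proposed in the session; every feature name, scale, and weight is an assumption, and choosing them is exactly where the normative judgments discussed in class enter.

# A minimal, hypothetical sketch: the feature names, rubric scales, and
# weights below are illustrative assumptions, not a tool any group proposed.
from dataclasses import dataclass

@dataclass
class Applicant:
    gpa: float               # 0.0-4.0
    test_score: float        # normalized to 0.0-1.0
    extracurriculars: float  # rubric score, 0.0-1.0
    recommendation: float    # rubric score, 0.0-1.0

# The weights are where normative judgment enters: this ranking asserts,
# for example, that GPA matters more than recommendations.
WEIGHTS = {
    "gpa": 0.40,
    "test_score": 0.25,
    "extracurriculars": 0.20,
    "recommendation": 0.15,
}

def score(a: Applicant) -> float:
    """Combine an applicant's features into a single score in [0, 1]."""
    return (WEIGHTS["gpa"] * (a.gpa / 4.0)
            + WEIGHTS["test_score"] * a.test_score
            + WEIGHTS["extracurriculars"] * a.extracurriculars
            + WEIGHTS["recommendation"] * a.recommendation)

def shortlist(pool: list[Applicant], k: int) -> list[Applicant]:
    """Narrow a large applicant pool to the k highest-scoring candidates."""
    return sorted(pool, key=score, reverse=True)[:k]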

The class expressed diverse views about the goals an AI tool should advance and when it should enter the admissions process.  Some groups argued that AI could help narrow a large pool of applicants to a smaller set of likely candidates, serving the goal of efficiency.  Another group suggested that AI could give candidates in the ‘No’ column a fresh look, promoting thoroughness and fairness.  Still others sought to use the algorithm to increase diversity by sorting students into geographic or demographic groups and finding the strongest candidates in each.  Professor Maloof asked students to reflect on potential pitfalls of their approaches, such as lopsided distributions of preferred majors.  He also asked whether the tool would be designed using machine learning or by working with subject-matter experts, a choice that would likely shape the tool’s normative aims and priorities.
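
The diversity-oriented proposal amounts to a stratified selection rule: partition the pool into groups and take the strongest candidates from each, rather than the strongest overall.  Here is a hedged sketch of that rule, with the grouping key (say, geographic region) and the scoring function left as parameters, since both are hypothetical:

from collections import defaultdict

def stratified_shortlist(pool, group_key, score_fn, per_group):
    """Select the top candidates within each group rather than overall.

    group_key maps an applicant to a group label (e.g., a hypothetical
    geographic region); score_fn is any scoring function, such as the
    weighted score sketched earlier.
    """
    groups = defaultdict(list)
    for applicant in pool:
        groups[group_key(applicant)].append(applicant)
    selected = []
    for members in groups.values():
        selected.extend(sorted(members, key=score_fn, reverse=True)[:per_group])
    return selected

Note that this rule trades overall score maximization for per-group representation, which is precisely the kind of value conflict the exercise was meant to surface.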

Ethical Limits of AI

Professor Edenberg encouraged students to consider the normative valences of the attributes they identified, as well as how easy or hard those attributes are to encode in an AI system: teacher recommendations, dedication to extracurricular activities, grit, open-mindedness, and interest in a particular university.  Some students were wary of using AI to analyze such complex attributes, even though they were more comfortable using AI to vet metrics like grades or test scores.  But Professor Maloof pointed out that seemingly straightforward statistics like SAT scores raise nuanced ethical and social issues of their own, since, for example, individuals living in wealthier ZIP codes tend to have higher SAT scores.
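
Professor Maloof’s example is the classic proxy-variable problem: a facially neutral feature can stand in for wealth or other sensitive attributes.  One simple audit a designer might run is to measure how strongly a candidate feature tracks such an attribute; the sketch below uses invented numbers purely for illustration.

import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical data: SAT scores alongside median income (in $1000s)
# for each applicant's ZIP code. The values are invented for illustration.
sat_scores = [1280, 1450, 1100, 1520, 1190, 1390]
zip_income = [62, 115, 48, 140, 55, 98]

r = pearson(sat_scores, zip_income)
if abs(r) > 0.5:
    print(f"Warning: SAT score correlates with ZIP-code income (r = {r:.2f})")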

The session helped students develop a concrete understanding of the ways that normative judgments form an integral part of creating AI tools and putting them to use.  It prompted them to think more rigorously about ethical limitations on using AI tools to address major social problems. And it encouraged future computer scientists to reflect on when it is a good idea not to use certain new technologies, in spite of the hype that may surround them.

This project is funded by a Mozilla Responsible Computer Science Challenge grant and is part of the university’s Initiative on Tech & Society, which is training the next generation of leaders to understand technology’s complex impact on society and develop innovative solutions at the intersection of ethics, policy and governance.
