What Makes a Good Algorithm?

“What Makes a Good Algorithm?” class notes.

In our first session with Professor Cal Newport’s course, Introduction to Algorithms, on January 14, Georgetown’s Responsible CS Team explored the question: what makes a good algorithm? The activity was designed to prompt students to weigh ethical and social considerations alongside technical evaluations of algorithms, introducing themes that will be developed over the semester’s engagements. Ethics Lab’s Senior Ethicist Professor Elizabeth Edenberg and Professor Newport guided students through the activity and discussion.

The activity began with students independently brainstorming factors that make a good algorithm. The professors encouraged students to think broadly, drawing both on their computer science training so far and on what they know about how algorithms are used in the world. Students then broke into small groups to share their ideas and sort them into the scientific, legal, ethical, and social considerations that bear on our evaluation of algorithms. After a class discussion about which considerations are most relevant to analyzing an algorithm’s merits, students returned to their small groups to examine those considerations from different points of view: the perspectives of the computer scientists designing the algorithm, the individuals whose data it uses, and society more broadly. This second half of the activity was designed to help students see why it is important to consider individual interests and the broader social impact of an algorithm even in the early stages of its design.

Two major themes emerged from the discussion. The first centered on safety, a consideration students regarded as relevant from all three perspectives. Professor Edenberg pressed students to disambiguate, noting that what counts as promoting safety for one individual might cause harm to society. Professor Newport followed up, reminding students that in computer science ‘safety’ is a technical term referring to a well-defined property of an algorithm. As a result, an algorithm that is safe from the perspective of a computer scientist might not be safe in any ethically relevant sense of the term. Bringing greater clarity and precision to how students think about ordinary notions like safety and risk is a key aim of these sessions.

The second theme centered on accuracy and bias. Several students noted that an algorithm might be biased in the sense that its application can produce discrimination or social injustice. One student cautioned that such an outcome might be “indicative of some bias [the programmer] codes in.” Another student observed that even if an algorithm doesn’t lead to discrimination or social injustice, it may still have other bad effects if it is biased in the sense of being inaccurate. Here again students encountered the value of disambiguating their thinking, clarifying more precisely how an algorithm’s features relate to ethically relevant considerations.

Getting students accustomed to thinking about the ethical significance of algorithms is a new and important change to the curriculum. Reflecting that his own efforts to highlight the social and ethical import of computer science make him unusual in his field, Professor Newport underscored the importance of this grant and of Ethics Lab’s methods. Together they are part of an exciting opportunity to encourage the next generation of computer scientists to approach the field in a new, more ethically responsible way.