Students Envision an AI Tool

Students write on a worksheet.

In the final session of Professor Maloof’s Introduction to AI course, the Responsible CS Team asked students to envision how they would design an AI tool to address a pressing social problem in a novel way, and to share and engage with each other’s ideas. Building on our previous session, which explored the advantages and tradeoffs of AI tools in criminal justice and college admissions, this session aimed to encourage students to think critically about what AI tools can achieve, what ethical constraints emerge, and how constructive conversations with those who hold contrasting views can shape a computer scientist’s socially responsible mindset. Professor Edenberg, Senior Ethicist at Ethics Lab, led the session.

To prepare for class, students completed a homework assignment using a multi-part worksheet that asked them to select a meaningful social issue and consider how an AI tool might help ameliorate it. The worksheet integrated ethical and social concepts alongside technical specifications covering the basics of an AI system. We asked students to explain the overall objectives the AI tool would achieve, specify key definitions, and single out attributes and traits for the algorithm.

Students spent much of the class reflecting on and exchanging views about their ideas in groups of four. They took turns describing their AI systems and providing layers of constructive feedback. They assessed the attributes and traits that their colleagues selected and considered how the various AI proposals could be used for good and ill. Ideas that students shared included algorithms to help predict who may be at risk of homelessness or vulnerable to opioid addiction, as well as a chatbot from which students could obtain input and suggestions when teachers are unavailable. In the final portion of their small-group discussions, students flagged unintended consequences of their classmates’ proposals, including (1) risks to privacy, (2) potential discrimination in lending practices directed at those identified as ‘high risk’ for homelessness, and (3) moral hazards that may arise if teachers rely unduly on chatbots. In fleshing out and addressing these risks, Professor Edenberg underscored how important it is for technologists to consult broadly with the people who will be affected by the products they build.

The class also reflected on the arc of the course and the range of ethical challenges that computer scientists face. Students eloquently voiced an impressive set of ideas and concerns. One student raised incisive worries about the gap between the goal of responsible programming and the limits inherent in entry-level jobs. In response, some classmates observed that it is important not to take the easy way out, and that meaningful agency can consist in shifting one’s mindset toward the impact of one’s work or seeking out relevant experts to consult. A core lesson students are taking away from the semester, a theme that emerged clearly from the class discussion, is that meaningful agency requires being cognizant of the social impact of one’s work. Professor Edenberg reinforced this point by noting that starting conversations about ethics, and raising questions without needing to have all the answers, can move company culture toward norms of responsible computer science.

This project is funded by a Mozilla Responsible Computer Science Challenge grant and is part of the university’s Initiative on Tech & Society, which is training the next generation of leaders to understand technology’s complex impact on society and develop innovative solutions at the intersection of ethics, policy and governance.
