Students Engage in Ethical Perspectives of AI

Georgetown’s Responsible CS team explored the moral responsibilities of computer scientists in our first session with Professor Mark Maloof’s Introduction to AI course. The team co-designed an interactive exercise to grapple with complex questions about the aims of AI systems, whom they impact for good or ill, and what role computer scientists have in tackling challenges of algorithmic discrimination, surveillance, replacement of human workers, and other pressing societal issues. Professor Elizabeth Edenberg, Senior Ethicist at Ethics Lab, led the session. 

To prepare for the in-class activity, students selected an AI tool of their choice and analyzed its aims, advantages, and drawbacks. They came prepared with an impressive set of examples, from healthcare technologies and criminal justice tools to misinformation-fighting algorithms and facial and voice recognition software.  

Recognizing Complex Challenges

Working in groups, students first articulated the problems various AI systems are intended to solve and the advantages and drawbacks of using AI to address them. Next, they surfaced critiques they had read about the AI technologies at issue and asked which aspects of AI those critiques focus on, including system design, input data, labeling, outputs, unintended consequences, interpretive difficulties, and so on. The class zeroed in on a diverse array of challenges. For example, students noted that providing medical diagnoses, particularly major ones, has an emotional dimension that AI may be ill-equipped to handle. Other limitations students flagged centered on biased inputs that can taint outcomes and recommendations, as well as indiscriminate data collection that creates significant privacy risks. Students also pointed to an array of problems relating to misinformation, including the challenge of deepfakes and deceptive advertising targeted at vulnerable populations, such as the elderly.

Brainstorming Solutions

Equipped with a rich understanding of the goals, successes, and shortcomings of diverse AI tools, the class confronted questions of moral responsibility, including what obligations fall on computer scientists and what steps they and others can take. Students brainstormed a creative set of technical, regulatory, and business-focused solutions. These included greater commitment by management and investors to ethical issues, paying users for their data, increased transparency about data collection, and improved training data for AI (although Professor Maloof observed that this does not always work!).

In the session, students learned to use a broad ethical perspective to reflect on the AI concepts and techniques they will master throughout the course. Our interactive activity laid the foundation for two further in-depth engagements the Responsible CS team will undertake in Introduction to AI later this semester.

This project is funded by a Mozilla Responsible Computer Science Challenge grant and is part of the university’s Initiative on Tech & Society, which is training the next generation of leaders to understand technology’s complex impact on society and develop innovative solutions at the intersection of ethics, policy and governance.
