Courses


The courses we develop and teach equip the next generation of technologists, policymakers, and citizens with the knowledge and strong ethical foundation they need to understand and influence the future as they enter the workforce.

All of our courses center on moving from theory to practice. We do this through non-traditional creative exercises throughout the semester, as well as through team-based ‘Ethics in Action’ projects.


Fall 2024 Courses

[Illustration: doctors with an abstract patient, one listening to the patient's heart while others check notes and confer.]

Ethics of AI & Health
PHIL-2090, 3 credits

Taught by Ethics Lab Assistant Professor Joel de Lara

  • Section 01 (CRN: 44449)
    Tuesday and Thursday
    11:00am–12:15pm
    Ethics Lab (Healy 201)

  • College/Bioethics
    Medical Humanities
    Pathways to Social Justice
    Core: Philosophy/Ethics

Artificial intelligence is reshaping health and healthcare at a blistering pace. Doctors are using machine-learning algorithms to diagnose illnesses faster and more accurately than a human can. Smart devices are tracking and analyzing the personal health metrics of millions. Surgical teams are using augmented reality underpinned by AI algorithms to guide scalpels and increase precision. These advances present both ethical opportunities and ethical challenges: from the promise of cheaper, more effective, more personalized care and advances in research and therapeutics, to the perils of algorithmic bias, the loss or devaluing of human care, the cementing of health inequities, and the surveillance and exploitation of minoritized and vulnerable groups. This course provides an introduction to critical issues in AI and health ethics, and aims to equip students with key ethical concepts, theories, and frameworks for navigating this complex, emergent terrain. Particular foci will include AI developments in physical, mental, sexual, and social health, and the course will canvass key readings in bioethics, political philosophy, and feminist and critical race theory.

Pre-health and pre-med students, students pursuing the minor in Tech Ethics and Society or in Medical Humanities, and any students with academic interests in AI, healthcare policy, or bioethics are warmly encouraged to enroll.

Intro to Environmental Ethics
PHIL-1101, 3 credits

Taught by Ethics Lab Assistant Professor Jason Farr

  • Section 01 (CRN: 45947)
    Tuesday and Thursday
    12:30pm–1:45pm
    Ethics Lab (Healy 201)

  • College/Bioethics
    Core: Philosophy/Ethics

In this course, we investigate the ethics of human interaction with the world around us. What is nature? What is wilderness? Is there such a thing? How should we design our parks, our cities, and our agricultural systems, and according to what values? Who should design these systems, manage them, and so on? How do we weigh environmental values when they conflict? How do we maintain a responsible hope in the face of existential environmental threats? Topics will include environmental justice, biodiversity, animal ethics, climate change, consumerism, suburban sprawl, zoning, food ethics, and more.

This course will meet the Environmental Ethics requirement in the JESP degree.

Ethical Challenges of AI
PHIL-2102, 3 credits

Taught by Ethics Lab Postdoctoral Fellow Minji Jang

  • Section 01 (CRN: 39966)
    Mondays and Wednesdays
    11:00am–12:15pm
    Ethics Lab (Healy 201)

  • SFS/STIA Growth/Development
    SFS/STIA Science/Tech/Security
    X-List: AMST
    X-List: Tech, Ethics & Society
    Core: Philosophy/Ethics

New artificial intelligence systems that mimic and surpass human cognitive activity pose novel and profound ethical challenges. Should we entrust consequential decisions about credit, criminal justice, and employment to machine-learning algorithms whose operations we cannot explain? In an emergency, should automated vehicles sacrifice the passengers or the pedestrians? Should AI weapon systems decide whom to target and how much risk to innocent life to accept? Are clinical decision software systems responsible for the diagnoses and treatments they suggest? If not, who is? Should robots be caregivers for the sick and the elderly? Does the value and meaning of human life change when robots do most of the work? Even more challenging issues arise if AI systems ever approach consciousness and independent agency. Can AI systems truly be conscious? Can they become ethical agents, or objects of ethical concern as members of our community and bearers of rights and interests that deserve protection? Is it permissible to aim to develop superintelligent AI systems that could pose safety risks to humanity? Should humanity itself be updated through living synergies with AI systems, and why or why not? And what is the appropriate role for government policy in resolving and enforcing ethical decisions about AI? This class provides the philosophical grounding in ethical theory needed to address these immediate and future challenges to the ethical governance of AI.

[Illustration: abstract design in blues and greens with hands floating ideas and messages across a gridded canvas.]

Intro to Tech, Ethics, & Society
PHIL-2100, 3 credits

Offered in partnership with Tech, Ethics, & Society

Taught by TES Assistant Professor Shannon Brick

  • Section 01 (CRN: 44262)
    Mondays and Wednesdays
    12:30pm–1:45pm
    Ethics Lab (Healy 201)

  • Section 02 (CRN: 46040)
    Mondays and Wednesdays
    3:30pm–4:45pm
    Ethics Lab (Healy 201)

  • Core: Philosophy/Ethics
    Satisfies Computer Science, Ethics & Society major requirement
    Satisfies Computer Science concentration in Technology, Ethics, & Society requirement
    Satisfies Tech, Ethics, & Society minor requirement

Emerging technologies provide new capabilities for improving our lives. But they can also pose novel and significant risks to our societies. For instance, artificial intelligence systems trained using machine learning techniques and enormous data sets are being deployed for a rapidly increasing number of important tasks. These range from the mundane, such as recommending movies and targeting advertising, to the crucial, as in hiring or even pre-trial detention decisions. These AI systems raise pressing new ethical issues. For instance, many of these systems display discriminatory bias. How can this bias be mitigated? Who should be accountable for ensuring that it is? Moreover, the effectiveness of machine learning depends on collecting huge amounts of information about people, i.e., “big data.” Who should be allowed to collect such data, and what data should be collected? What rights do individuals have to privacy and anonymity? The complexity of these technologies also poses novel problems of governance and accountability. What kinds of explanations are people owed for how an opaque AI system treats them? How can we hold developers and deployers of opaque systems accountable for their behavior? This course explores ethical questions like these that are raised by contemporary artificial intelligence and data-driven technologies. Readings will be drawn from philosophy, computer science, and other related fields. The goal is to prepare students to engage in critical ethical reasoning as developers, users, and stakeholders of new technology.


Previous Offerings

Social Media & Democracy, PHIL-2103

Design Justice, IDST-104

Drawing Sociotechnical Systems, IDST-124

Bioethics, PHIL-105

Ethics & the Environment, PHIL-124

Climate Change and Global Justice, PHIL-127

Democracy in the World, MSFS-603

International Information Privacy & Surveillance, CCTP-708

Data & the Politics of Evidence, CCTP-729


For offerings prior to 2017 see Ethics Lab volume 1: {Link}