Team Profile: An interview with Ethics Lab Postdoctoral Fellow Jason Farr

[Photo: Jason Farr, a 30-something man with sandy parted hair and a short graying beard, smiling at the camera in wooden-framed glasses and a white-and-green striped collared shirt.]

Jason Farr is a Postdoctoral Fellow at Ethics Lab, where he teaches courses on the ethics of artificial intelligence and helps to develop creative philosophy pedagogy. His research aims to make sense of various relations between normativity and sociality, joining foundational issues from metaethics with concrete, granular issues concerning how we ought to engage, socially, with artificial intelligence. Jason earned his Ph.D. from Georgetown University, where he studied the ethics of social media research as a Fritz Family Fellow at Georgetown’s Initiative on Tech and Society.


What drew you to join Ethics Lab?

Ethics Lab is very much about radical pedagogy. It is about thinking through pedagogy to ensure it is not rote, boring, and disengaged, but instead enables students to show up with 100% of themselves in the classroom.

Here at Ethics Lab, my colleagues and I spend a lot of time thinking through how we can ensure that we are not only imparting content to our students in our courses and embedded exercises, but also figuring out which skills we want to teach them and how we are going to teach those skills.

We are not just exploring deep, abstract questions about what makes a good life, but rather thinking through current, granular, topical issues that affect everybody.

Which projects are you most excited about? 

So I have been working on projects in social media research ethics, exploring the kinds of behavior we expect of people doing research with social media data. We ask ourselves questions such as: “Can social media companies collect whatever data they want? Are there certain restrictions on what data they can collect? How do they build their machine learning algorithms? What choices do they make when they create those algorithms?”

Students are also challenged to consider what happens when social media companies use categories such as race and gender to organize data, and how that might play a role in reinforcing existing stereotypes.

What I've noticed, and heard from the students themselves, is that when we engage with any social media platform, we're all tempted to think that our own experience on it is everyone's experience. In many of the small-group discussions, students are challenged to think through what their individual experiences with existing social media features have been and how those experiences may vary across the board.

Part of the objective of these exercises is to get students interested in social media design and its impact. Even though students come from various disciplines, social media’s influence cuts across many sectors and roles. Hopefully this leads them to make ethical choices in their daily lives, whether as everyday social media users, programmers, lawyers, or in any other capacity as young professionals. These exercises might play a role in keeping them engaged and talking so that they can make ethical choices and continue to grow.