Ethics Lab and CSET Conclude Ethics of AI for Policymakers Series with Workshop on Global Surveillance & Human Rights

Artificial intelligence and machine learning are rapidly transforming our society and posing unprecedented challenges for political leaders and public servants at every level of government. To help the next generation of policymakers meet these challenges, Georgetown University’s Ethics Lab and Center for Security and Emerging Technology (CSET) are organizing a series of three workshops on the ethics of AI in 2020–21.

The series is funded by the Public Interest Technology University Network (PIT-UN), a partnership of 21 colleges and universities made possible by the Ford Foundation, Hewlett Foundation, and New America.

The third and final session in a workshop series on the ethics of AI for future policymakers was hosted virtually on May 7 by Ethics Lab and the Center for Security and Emerging Technology (CSET) at Georgetown University. While the first workshop explored bias in AI systems and the second addressed issues relating to AI-based weapons in the national security space, the third workshop delved into the ethical complexities of AI-powered surveillance technology. Across all the workshops, the goal was to provide a cohort of future policymakers the chance to grapple with urgent, complex ethical issues so that they can make better practical decisions on the job.

The workshop had a number of components, including warm-up activities, presentations, and small-group work centered on a case study. After participants engaged with a few opening prompts to spur their thinking, three presentations followed. Ethics Lab postdoctoral fellow Dr. Mark Hanin gave an overview of key ethical considerations relevant to surveillance technologies. CSET Research Analysts Dahlia Peterson and Emily Weinstein presented on applications in relevant international contexts, including COVID-19 surveillance and US foreign policy challenges.

The presentations served as a launching point for an in-depth case study on surveillance tech. It involved a fictional scenario in which the police department of a large, multi-ethnic US city requested authorization from the City Council to purchase a subscription to an extensive private facial recognition database as part of its surveillance and crime-fighting mission.

Working in small groups, participants thought through key ethical issues, concerns, and questions raised by such a request in order to advise the City Council, prompted by “Yes, but only if—” and “No—and here’s why…” statements. Some participants noted safeguards they would like to see in place if such authorization were granted (e.g., ethics training for law enforcement, policies that guide and constrain use of facial recognition, robust oversight), while others expressed grounds for concern about authorizing use of the technology by law enforcement (e.g., bias in AI systems, systemic inequality, lack of adequate best practices and modes of accountability). These reactions underscored a key challenge: deploying surveillance technologies responsibly requires balancing good design choices for the technology itself with proper policies and procedures governing its use.

Groups then considered a related scenario with an international twist. Here, the cohort imagined that the US-based facial recognition company was approached by the government of a large non-democratic country with a request to leverage its technical expertise to advance that country’s public safety goals. This extension to the international context led participants to consider how a change in political context affects the ethical issues at play and the challenges that governments face in monitoring and regulating the export of advanced surveillance capabilities.

Following the case study, a spirited plenary discussion took place, with groups sharing their recommendations, engaging with each other’s views, and raising additional issues. The discussion offered an opportunity for the workshop hosts to begin distilling practical takeaways for better decision-making in complex policy contexts that implicate digital surveillance technologies.

The workshop team was thrilled with the level of engagement, interest, and insight that participants brought to the table. Reflecting on the series as a whole, Hanin said: “[This series of workshops] was a great concept study in how sustained, cross-disciplinary collaboration can help future policymakers build skills to address tough ethical challenges.”

Igor Mikolic-Torreira, Director of Analysis at CSET, found the series particularly valuable for CSET researchers: “Creating effective policies for AI requires understanding and balancing national security concerns, domestic civil liberties and international human rights, privacy and data rights, economic consequences across nations and societies, and fundamental questions of what is legal and ethical. By bringing together researchers of diverse backgrounds and focusing them on concrete questions in realistic situations, this series created a unique venue where all dimensions of the AI problem were brought forward and discussed constructively.”