Ethics Lab & CSET Lead Workshop on Ethics of Risk & AI in National Security

Artificial intelligence and machine learning are rapidly transforming our society and posing unprecedented challenges for political leaders and public servants at every level of government. To help the next generation of policymakers meet these challenges, Georgetown University’s Ethics Lab and Center for Security and Emerging Technology (CSET) are organizing a series of three workshops on the ethics of AI in 2020–21.

The series is funded by the Public Interest Technology University Network (PIT-UN), a partnership of 21 colleges and universities made possible by the Ford Foundation, Hewlett Foundation, and New America.


The second Ethics & AI workshop, held virtually on Nov. 20, 2020, focused on AI in national security. Participants from CSET, TechCongress, the American Association for the Advancement of Science (AAAS), and the Master of Law and Technology program at the Georgetown University Law Center explored a complex case study centered on a hypothetical deployment of U.S. forces in the Middle East, developed jointly by CSET’s Dr. Mikolic-Torreira and Ethics Lab Postdoctoral Fellow Dr. Mark Hanin.

The scenario involved a request by U.S. Central Command to authorize use of a novel AI-based weapon system: a drone swarm of hundreds of unmanned aerial vehicles (UAVs) capable of autonomously identifying, tracking, and attacking enemy targets. While this AI capability does not yet exist, it is modeled on existing technologies, including remotely piloted lethal UAVs and small autonomous drone swarms without lethal capacities. Exploring the ethical and strategic implications of such capabilities is especially relevant given the role that lethal (non-autonomous) drones have played in recent hostilities between Armenia and Azerbaijan.

After an overview of the ethics of risk and uncertainty provided by Hanin, workshop participants broke into groups to puzzle through ethical and policy challenges in deciding whether, and under what conditions, to permit use of the drone swarm capability. Groups charted key risks and uncertainties, surfaced the ethical values informing their analysis, and homed in on bottom-line recommendations. They flagged a host of ethical concerns, such as the potential for civilian deaths and the risk of setting an undesirable international precedent by deploying this technology, and shared an array of creative recommendations and rationales.

Participants worked together in small groups through Zoom breakout rooms and Miro, an online visual collaboration platform.

Visitors from the Georgetown Law Center with national security expertise shared their insights with the cohort. Mitt Regan, Co-Director of the Center on National Security and the Law, observed that fundamental legal principles of armed conflict incorporate ethical standards and require ethical judgments. Anna Cave, Executive Director of the Center on National Security, emphasized the importance of adopting a longer-term, strategic perspective. Cave also noted that various players—from Navy commanders to diplomats and technical experts—will have distinct perspectives that need to be accommodated in a multi-stakeholder policymaking process.

Ethics Lab Director Prof. Maggie Little, who co-hosted the workshop, observed toward the end of the session that the goal of the engagement was to present a case study highlighting the ethical nuances of these technologies.

“What we hoped was that by looking at a concrete example, we can start to highlight where values can get embedded and where there are ubiquitous tradeoffs,” Little said. “From there, you can start thinking about what kinds of frameworks are helpful, and hopefully, you will see them with a slightly different eye.”

The cohort will reconvene for a third workshop on human rights and facial recognition in spring 2021.

As part of their deliberations, participants considered the relative importance of risks and uncertainties.