AI: Even If Accurate, It May Still Encode Bias

 

Ethics Lab Leaders Speak at Tech Summit

 
Ethics in AI panel, from left: Dr. Maggie Little, Roslyn Docktor, Dr. Ian McCulloh, Dr. Elizabeth Edenberg, Dr. Dawn Tilbury, and Sean Perryman (Photo by Eddie Arrossi).

 

Dr. Maggie Little, Founding Co-Chair of the Tech and Society Initiative and Director of Ethics Lab at Georgetown University, moderated a panel on ethics in artificial intelligence, titled “Ethics Of the Future,” at the Congressional Hispanic Caucus Institute’s (CHCI) Tech Summit 2.0. The panel took place on December 4, 2019.

Dr. Elizabeth Edenberg, Senior Ethicist at Ethics Lab and Assistant Research Professor at Georgetown University, also appeared on the panel, alongside Roslyn Docktor, Director of Technology Policy at IBM; Sean Perryman, Director of Diversity and Inclusion at the Internet Association; Dr. Ian McCulloh, Chief Data Scientist at Accenture Federal Services; and Dr. Dawn Tilbury, Assistant Director for Engineering at the National Science Foundation.

CHCI’s Tech Summit 2.0 was created to deepen policy stakeholders’ knowledge of technology. To this end, the panelists spoke to a group of 60 highly engaged and predominantly Latinx attendees.

 
Dr. Dawn Tilbury speaks on the panel (Photo by Eddie Arrossi).

The rich discussion focused first on potential biases in artificial intelligence, with all panelists sharing their insights as leaders in AI to help attendees better understand the current AI landscape.

“Because subsystems [of artificial intelligence] are built by humans, opportunities for bias creep in, especially on the data collection part,” said Dr. Tilbury. 

The panelists then touched on how AI can improve upon human selection. Notably, IBM’s AI helped increase enrollment in clinical trials at the Mayo Clinic by 80%.

“AI can find insights and make predictions better than the human eye, and more quickly,” said Ms. Docktor. 

Dr. Little returned the group to the topic of potential for bias in AI, quipping, “We have to remember that when we do talk about the potential for bias in AI, the alternative is humans. I know it’s news to all of you, but humans have a lot of bias themselves.” 

Mr. Perryman said that the key words for AI were explainability and transparency. He advocated for continuing to take the perspective of people of color into account. “A lot of times I go to AI policy panels and there is no one on the panel who is talking about the bias,” he said. Addressing the crowd, he added, “But it’s important that you all have an interest in this as you are making policy, because your perspective is important.”

Dr. Edenberg concluded that it is important for policy makers to develop inclusive, ethical guidelines for AI use, and that building these guidelines must be a collaboration among ethics experts, lawyers, and technologists, as well as diverse stakeholders, to ensure that guidance is attuned to social justice issues.

During the Q&A, audience member Gladys Sriprasert of Carnegie Mellon University asked how panelists thought increasing transparency around algorithms would impact people using AI in their decision making.

“One of the lessons going forward is that AI is not homogeneous,” Dr. Little remarked. “It’s an approach, it’s a technology, but it has lots of variations to it like most technologies.”

After the panel, audience members told Dr. Little and Dr. Edenberg that the discussion had been a fantastic addition to the tech summit.
