Ethics for Health Informatics
Last month, Ethics Lab director Prof. Maggie Little participated in a panel discussion sponsored by the MA Health Informatics program, where she spoke to students about the importance and value of ethics in their field. The panel, “Health & Biomedical Research Data Ethics,” also featured Dr. Sandra Soo-Jin Lee of Columbia University and Dr. John Rasmussen of Medstar Health, and was moderated by Dr. Subha Madhavan, Chief Data Scientist at the Georgetown University Medical Center.
Prof. Little identified three key concerns in health informatics that call for ethical solutions: privacy, transparency, and bias. Health informatics applies data science to healthcare, and so raises questions about privacy and access to information. Prof. Little pointed to recent medical data breaches, including the largest to date, the Anthem Blue Cross breach of 2015. Notably, it was a person, not a computer, who identified that breach, underscoring how important individuals remain in data management. By preserving a human role in the collection and monitoring of data, healthcare providers can better protect their patients.
Such privacy protection, Prof. Little argued, is crucial to building confidence and trust in the medical field. People who trust the medical system are more likely to seek help when they need it, and, when they do, more likely to communicate openly with their doctors about their needs. Data protection, approached through a human, ethical lens, improves patient outcomes.
Prof. Little then discussed the importance of transparency in health informatics: while it is essential to protect patient data, it is equally important that doctors and practitioners be open about what kinds of data they hold and how they use them. Algorithms, for instance, are powerful analytic tools, but without human supervision they can produce classificatory or predictive schemes that encode bias or yield serious errors.
Similarly, human oversight is crucial for identifying bias in health informatics. Algorithms that base healthcare decisions on inaccurate assumptions or limited inputs can produce biased conclusions, as recent research by Obermeyer and colleagues revealed: a widely used care-management algorithm relied on past healthcare costs as a proxy for medical need and, as a result, systematically underestimated the needs of Black patients. Biases like these can exacerbate social inequalities and erode trust, so it is crucial that data scientists remain sensitive to potential biases in their work.
Those studying health informatics have a responsibility to consider social justice in their work, Prof. Little told students. From the data they use to the algorithms they create, they have the ability either to combat bias and mistrust in healthcare or to exacerbate them.