Collegium Helveticum
Fellow Project 2024–2025

The Ethics of Biases of Artificial Intelligence and Machine Learning in Medicine

This project explores the ethical dimensions of biases in artificial intelligence and machine learning (AI/ML) in medicine and healthcare, a critical issue given the rapid expansion of these technologies. It investigates the sources of algorithmic bias, including factors such as biased training data, skewed feature selection, and modeling choices.

The project then examines how biases arising from the implementation and use of AI/ML technologies shape outcomes. Building on this, it explores the ethical implications of these biases and asks whether existing ethical frameworks are sufficient or whether new principles are needed. By examining the intersection of epistemology and ethics, the project aims to clarify how we understand and address the biases embedded in AI/ML systems.

To ground its inquiry, the project focuses on a compelling case study: the use of AI/ML in breast cancer screening and detection. This field provides a particularly rich context, as AI/ML has been widely adopted there, yet biased training and validation data remain persistent issues, often driven by screening overdiagnosis and overtreatment. At the same time, rigorous methods have been introduced to improve the diagnostic sensitivity of mammograms, offering a unique lens through which to evaluate the ethical and epistemological challenges posed by bias in AI/ML.
