
Kengo Takahashi
AI-based Medical Imaging: Fairness in Models and Causality in Images
March 3, 2025
taught by
Markus Wenzel
Eike Petersen
Level: Intermediate
Length: 4 hours
Format: In-Person Lecture
Intended Audience:
The course introduces all concepts in an intuitive fashion: participants with a good foundation in mathematics and statistics will be able to follow along in full, and even those without one will leave with a solid qualitative understanding of the importance and methods of this field. Since some of the lectures are more technical and the code examples are in Python, the course will most benefit a more advanced audience that already works with AI models in medical imaging and wants to broaden its skillset further. Familiarity with Python is expected for the practical portion of the course.
Description:
The rapidly increasing clinical deployment of AI-based medical image analysis software has led to heightened concerns about the (potential lack of) fairness and robustness of such AI models. Multiple recent high-profile studies have shown that there is indeed cause for concern: medical imaging AI models can, and do, discriminate against different demographic groups, often in surprising ways. In this half-day workshop, we will bring the increasingly important research fields of algorithmic fairness and causality to the attention of the medical imaging community.
The workshop will start with a clarification and open discussion of what "fairness" might mean in a medical imaging context, answering the question of how a model can be "unfair". Beyond the standard fairness metrics that can be probed for a given model or algorithm, we will also invite a discussion of the broader ethical scope, including, for example, bias arising when models only serve a certain stratified population, or when modeled demographic categories discretize underlying continuous biological variables.
We will present examples of several such "unfair" or biased models and derive different types and sources of bias, along with technical approaches to detect and mitigate them (where available). We will provide code examples in Python to let participants experience the proposed methods during the workshop. In these interactive sessions, participants will diagnose and, where possible, fix certain types of bias in example datasets and models, using mitigation strategies discussed in the more technical talks of the workshop. Participants will leave with a set of analytical tools and an overview of methods they can later apply in their own work.
Lastly, we will introduce the concept of causality in medical imaging by asking what causes certain biases in medical images, and ultimately in the training datasets used for modeling. To this end, we will introduce fundamental concepts and assumptions in causal modeling, including the causal graph as a tool for specifying causal assumptions. Again, we will use interactive tools to make this as intuitive as possible, while also providing the mathematical and statistical concepts underpinning this emerging research field. This will lead up to methods such as "conditionally invariant learning" and "counterfactual image generation", which are among the most recent research directions in medicine imaging.
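To give a flavor of the kind of hands-on exercise described above, here is a minimal, hypothetical sketch (not taken from the course materials) of how a standard group fairness check can be probed in Python. It compares per-group true-positive rates (equal opportunity) and per-group positive prediction rates (demographic parity) for a toy binary classifier; all data and names are illustrative.

```python
import numpy as np

def group_fairness_report(y_true, y_pred, group):
    """Compare per-group true-positive rates (equal opportunity)
    and positive prediction rates (demographic parity)."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    for g in np.unique(group):
        mask = group == g
        tpr = y_pred[mask & (y_true == 1)].mean()  # sensitivity within group g
        rate = y_pred[mask].mean()                 # share of positive predictions
        print(f"group {g}: TPR = {tpr:.2f}, positive prediction rate = {rate:.2f}")

# Toy, hypothetical predictions for two demographic groups.
y_true = np.array([1, 1, 0, 0, 1, 1, 0, 0])
y_pred = np.array([1, 1, 0, 0, 1, 0, 1, 0])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
group_fairness_report(y_true, y_pred, group)
# Both groups receive positive predictions at the same rate (demographic
# parity holds), yet diseased patients in group B are detected only half
# as often (equal opportunity is violated): a gap that a single aggregate
# accuracy number would hide.
```

Similarly, the causal graph mentioned above can be made concrete in a few lines of plain Python. The graph below is a deliberately simplified, assumed example (disease, demographic group, and scanner site all influencing the image), not the graph used in the course:

```python
# Edges point from cause to effect; the structure encodes our assumptions.
causal_graph = {
    "disease":      ["image"],             # pathology leaves traces in the image
    "demographics": ["disease", "image"],  # prevalence and anatomy differ by group
    "scanner_site": ["image"],             # acquisition shifts appearance, not pathology
    "image":        [],
}

def parents(node, graph):
    """Return the direct causes of `node` implied by the edge dictionary."""
    return [src for src, dsts in graph.items() if node in dsts]

print("direct causes of the image:", parents("image", causal_graph))
# -> ['disease', 'demographics', 'scanner_site']
```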
This is an interactive course and participants will need to bring their own laptops.
Learning Outcomes:
This course will enable you to:
- define different conceptions of "fairness" as a property of models analyzing medical images
- describe what direct and indirect discrimination are, how they can be detected and mitigated, and when they cannot
- outline the broader ethical concerns associated with demographic information and distinguish unfair biases from unavoidable and acceptable bias
- describe the root causes that can lead to biased models
- know the general foundations of causal modeling and appreciate its requirements, scope, and implications for medical image analysis
Instructor(s):
Markus Wenzel has worked on machine learning and deep learning methods for medical applications since 2005 and has published numerous conference and journal papers on the subject. He received his PhD for his work on decision support systems for breast care. As a key scientist for cognitive computing at Fraunhofer MEVIS, he creates, leads, and consults on several national and international research and implementation projects. He is a passionate and experienced creator and teacher of continuing education programs for participants from industry and research, and is a professor at Constructor University in Bremen, where he teaches Deep Learning and Image Processing classes for undergraduate and graduate students, among others.
Eike Petersen is a Senior Scientist at the Fraunhofer Institute for Digital Medicine MEVIS, Germany. He previously worked as a postdoc at the Technical University of Denmark (DTU) and holds a Ph.D. in biomedical engineering from the University of Lübeck, Germany. His research interests include the fairness, robustness, causality, and comprehensive evaluation of medical imaging AI. He is a co-organizer of the FAIMI Initiative (Fairness of AI in Medical Imaging) and represents the Dutch NGO AlgorithmAudit in the CEN-CENELEC JTC 21 working group tasked with developing standards on bias assessment and mitigation for the practical implementation of the EU's AI Act.
Event: SPIE Medical Imaging 2025
Course Held: 17 February 2025
Issued on: March 3, 2025
Expires on: Does not expire