Visualizing UNet Decisions: An Explainable AI Perspective for Brain MRI Segmentation
| Main Authors: | |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2025-01-01 |
| Series: | IEEE Access |
| Subjects: | |
| Online Access: | https://ieeexplore.ieee.org/document/11095691/ |
| Summary: | In recent years, medical image analysis, particularly neuroimaging, has advanced remarkably, with Magnetic Resonance Imaging (MRI) greatly aiding the diagnosis of complex neurological disorders, including brain tumors. However, accurately segmenting brain tumors from MRI scans remains a significant challenge, necessitating sophisticated computational techniques. This article presents findings from brain MRI segmentation using the UNet architecture, enhancing model interpretability with explainable AI (XAI) methods to harness UNet's effectiveness in semantic segmentation tasks. We examine the intricacies of adapting UNet to brain MRI segmentation, the dataset employed, and the methodology for model development, training, and validation. In addition to discussing segmentation outcomes, we apply several XAI techniques, namely Grad-CAM, Saliency Maps, Vanilla Gradient, and Layer-wise Relevance Propagation (LRP), to generate visualizations that reveal the internal workings of the otherwise opaque UNet model. A comprehensive analysis of the results highlights their clinical implications, comparing the relative utility of the different XAI methods in visualizing UNet's outputs using metrics such as fidelity, unambiguity, and stability. The Vanilla Gradient method stands out with high unambiguity and consistent fidelity scores in complex scenarios. While LRP offers high stability, the combination of high fidelity and clarity makes Vanilla Gradient the preferred method for enhancing the interpretability of AI systems in brain tumor segmentation. Overall, this work represents a significant step toward establishing the trustworthiness of UNet for accurate and efficient brain tumor segmentation via XAI methods, ultimately aiming to support clinicians in diagnosis and treatment planning while fostering a deeper understanding of the model's decision-making processes. |
| ISSN: | 2169-3536 |
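
The abstract singles out Vanilla Gradient as the preferred XAI method for this task. The sketch below is a minimal illustration of that technique for a segmentation network in PyTorch, not the authors' implementation: the `model` interface, the single-slice input shape, and the choice to aggregate the predicted-class logits over all pixels are assumptions made here for concreteness.

```python
import torch
import torch.nn as nn

def vanilla_gradient_saliency(model: nn.Module, image: torch.Tensor) -> torch.Tensor:
    """Vanilla Gradient saliency map for a segmentation model (illustrative sketch).

    Assumes `image` is one MRI slice shaped (1, C, H, W) and that `model`
    returns per-pixel class logits shaped (1, num_classes, H, W).
    """
    model.eval()
    image = image.clone().requires_grad_(True)

    logits = model(image)                              # (1, K, H, W)
    # Sum the logits of the per-pixel predicted class so a single backward
    # pass yields an attribution for the segmentation mask as a whole.
    pred_mask = logits.argmax(dim=1, keepdim=True)     # (1, 1, H, W), int64
    logits.gather(1, pred_mask).sum().backward()

    # Saliency = max absolute input gradient across channels, as in the
    # classic formulation; normalize to [0, 1] for visualization.
    saliency = image.grad.detach().abs().amax(dim=1)   # (1, H, W)
    saliency = (saliency - saliency.min()) / (saliency.max() - saliency.min() + 1e-8)
    return saliency
```

Summing the predicted-class logits over all pixels is one reasonable way to extend single-label saliency to dense prediction; restricting the sum to the tumor class alone would instead highlight what drives that specific region.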
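Grad-CAM, another method the article evaluates, attributes the prediction to an intermediate feature map rather than to input pixels. The hypothetical sketch below hooks an arbitrary UNet layer (which layer to pick, e.g. the last decoder block, is an assumption, since the article's choice is not stated here) and follows the standard Grad-CAM recipe: gradient-weighted channel average, ReLU, and upsampling to input resolution.

```python
import torch
import torch.nn.functional as F

def grad_cam_segmentation(model, image, target_layer):
    """Grad-CAM heatmap from a chosen layer for the predicted mask (sketch)."""
    model.eval()
    activations, gradients = {}, {}

    def fwd_hook(module, inputs, output):
        activations["value"] = output

    def bwd_hook(module, grad_input, grad_output):
        gradients["value"] = grad_output[0]

    h1 = target_layer.register_forward_hook(fwd_hook)
    h2 = target_layer.register_full_backward_hook(bwd_hook)
    try:
        logits = model(image)                          # (1, K, H, W)
        pred_mask = logits.argmax(dim=1, keepdim=True)
        logits.gather(1, pred_mask).sum().backward()
    finally:
        h1.remove()
        h2.remove()

    # Channel weights: global-average-pooled gradients (standard Grad-CAM),
    # then a ReLU-gated weighted sum over channels.
    weights = gradients["value"].mean(dim=(2, 3), keepdim=True)  # (1, C, 1, 1)
    cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear",
                        align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return cam.squeeze(0).squeeze(0)                   # (H, W)
```

Because the heatmap lives at the resolution of the hooked feature map, Grad-CAM tends to produce coarser, region-level explanations than pixel-level gradient methods, which is consistent with the abstract's finding that Vanilla Gradient scores higher on unambiguity.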