Interpretable multimodal classification for age-related macular degeneration diagnosis.


Bibliographic details
Main authors: Carla Vairetti, Sebastián Maldonado, Loreto Cuitino, Cristhian A Urzua
Format: Article
Language: English
Published: Public Library of Science (PLoS), 2024-01-01
Series: PLoS ONE
Online access: https://doi.org/10.1371/journal.pone.0311811
Description
Summary: Explainable Artificial Intelligence (XAI) is an emerging machine learning field that has been successful in medical image analysis. Interpretable approaches can "unbox" the black-box decisions made by AI systems, helping medical doctors better justify their diagnoses. In this paper, we analyze the performance of three different XAI strategies for medical image analysis in ophthalmology. We consider a multimodal deep learning model that combines optical coherence tomography (OCT) and infrared reflectance (IR) imaging for the diagnosis of age-related macular degeneration (AMD). The classification model achieves an accuracy of 0.94, outperforming unimodal alternatives. We analyze the XAI methods in terms of their ability to identify retinal damage and their ease of interpretation, concluding that Grad-CAM and guided Grad-CAM can be combined to provide both a coarse visual justification and a fine-grained analysis of the retinal layers. We provide insights and recommendations for practitioners on how to design automated and explainable screening tests based on the combination of two image sources.
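The Grad-CAM / guided Grad-CAM combination the abstract describes can be sketched as follows. This is a minimal NumPy illustration of the standard Grad-CAM recipe, not the authors' implementation: the toy array shapes, variable names, and the synthetic activations/gradients are assumptions for illustration only; in practice the activations and gradients would come from the last convolutional layer of the trained multimodal network.

```python
import numpy as np

def grad_cam(activations, gradients):
    """Coarse Grad-CAM heatmap from a conv layer's activations (C, H, W)
    and the gradients of the class score w.r.t. those activations (C, H, W)."""
    weights = gradients.mean(axis=(1, 2))             # global-average-pool the gradients per channel
    cam = np.tensordot(weights, activations, axes=1)  # channel-weighted sum -> (H, W)
    cam = np.maximum(cam, 0)                          # ReLU: keep only positive evidence for the class
    if cam.max() > 0:
        cam = cam / cam.max()                         # normalize to [0, 1] for overlay on the scan
    return cam

def guided_grad_cam(cam, guided_backprop):
    """Fine-grained map: gate a pixel-level guided-backprop saliency map
    (here a placeholder array) by the coarse CAM, element-wise."""
    return cam * guided_backprop

# Toy example with synthetic activations and gradients (8 channels, 7x7 spatial grid).
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 7, 7))
G = rng.standard_normal((8, 7, 7))
cam = grad_cam(A, G)
ggc = guided_grad_cam(cam, np.abs(rng.standard_normal((7, 7))))
```

The coarse `cam` locates the retinal region driving the prediction, while the gated `ggc` map restricts pixel-level saliency to that region, which mirrors the paper's conclusion that the two methods complement each other.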
ISSN: 1932-6203