OculusNet: Detection of retinal diseases using a tailored web-deployed neural network and saliency maps for explainable AI

Bibliographic Details
Main Authors: Muhammad Umair, Jawad Ahmad, Oumaima Saidani, Mohammed S. Alshehri, Alanoud Al Mazroa, Muhammad Hanif, Rahmat Ullah, Muhammad Shahbaz Khan
Format: Article
Language: English
Published: Frontiers Media S.A. 2025-07-01
Series: Frontiers in Medicine
Online Access: https://www.frontiersin.org/articles/10.3389/fmed.2025.1596726/full
Description
Summary: Retinal diseases are among the leading causes of blindness worldwide, requiring early detection for effective treatment. Manual interpretation of ophthalmic imaging, such as optical coherence tomography (OCT), is time-consuming, prone to inconsistency, and requires specialized expertise in ophthalmology. This study introduces OculusNet, an efficient and explainable deep learning (DL) approach for detecting retinal diseases from OCT images. The proposed method is tailored to the complex image patterns of OCT scans and identifies retinal disorders such as choroidal neovascularization (CNV), diabetic macular edema (DME), and age-related macular degeneration characterized by drusen. The model uses saliency map visualization, an explainable AI (XAI) technique, to interpret and explain how it reaches its conclusions when identifying retinal disorders. Furthermore, the model is deployed on a web page, allowing users to upload retinal OCT images and receive instant detection results; this deployment demonstrates significant potential for integration into ophthalmic departments, enhancing diagnostic accuracy and efficiency. In addition, to ensure an equitable comparison, a transfer learning approach was applied to four pre-trained models: VGG19, MobileNetV2, VGG16, and DenseNet-121. Extensive evaluation shows that the proposed OculusNet model achieves a test accuracy of 95.48% and a validation accuracy of 98.59%, outperforming all compared models. Moreover, to assess the model's reliability and generalizability, the Matthews correlation coefficient and Cohen's kappa coefficient were computed, validating that the model can be applied to unseen data in practical clinical settings.
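The abstract cites the Matthews correlation coefficient and Cohen's kappa as its reliability measures. As a minimal sketch (not the paper's code), these metrics can be computed from predicted and true class labels with scikit-learn; the label arrays below are toy values for the four common OCT classes (CNV, DME, DRUSEN, NORMAL), not the study's data.

```python
# Sketch: computing the two agreement metrics named in the abstract
# with scikit-learn. Labels are hypothetical illustration data.
from sklearn.metrics import matthews_corrcoef, cohen_kappa_score

# Toy ground-truth and predicted labels for four OCT classes:
# 0 = CNV, 1 = DME, 2 = DRUSEN, 3 = NORMAL
y_true = [0, 0, 1, 1, 2, 2, 3, 3, 3, 0]
y_pred = [0, 0, 1, 2, 2, 2, 3, 3, 0, 0]

# Both metrics range from -1 (total disagreement) to 1 (perfect
# agreement) and correct for chance / class imbalance, which is why
# they complement plain accuracy on imbalanced medical datasets.
mcc = matthews_corrcoef(y_true, y_pred)
kappa = cohen_kappa_score(y_true, y_pred)
print(f"MCC: {mcc:.3f}, Cohen's kappa: {kappa:.3f}")
```

Unlike accuracy, both scores drop sharply if a model succeeds only on the majority class, which is the property the authors rely on when arguing for generalizability to unseen data.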
ISSN:2296-858X