
Deep Learning Approach for Differentiating Diabetic and Uveitic Macular Edema in Optical Coherence Tomography Scans

Presenter: Pooya Khosravi

Authors: Pooya Khosravi, MS 1,2; Taylor Crook, BS 1; Paul Zhou, MD 1; Olivia Lee, MD 1

Affiliations:

1. Department of Ophthalmology, School of Medicine, University of California, Irvine, CA 92697, USA

2. Department of Computer Science, Donald Bren School of Information and Computer Sciences, University of California, Irvine, CA 92697, USA

Purpose: The primary goal was to develop a machine learning model capable of distinguishing between diabetic macular edema (DME) and uveitic macular edema (UME) using macular optical coherence tomography (OCT) B-scans, excluding patients with diabetic retinopathy from the uveitis group to minimize potential confounding factors. By leveraging the ResNet18 architecture, this study aimed to enhance diagnostic accuracy in ophthalmology through automated classification, potentially improving patient outcomes.

Methods: Our dataset included 51,569 OCT B-scans (captured using Spectralis, Heidelberg Engineering) from public and private sources, categorized as UME (13,307 images), normal (26,664 images), or DME (11,598 images). To ensure robust evaluation and prevent data leakage, the dataset was divided into training, validation, and test sets following a patient-stratified approach, so that all scans from a given patient fell into a single split. We employed data augmentation techniques such as random rotations, horizontal flips, and contrast adjustments to enhance model generalizability and mitigate overfitting. A ResNet18 network was then trained to classify each B-scan into one of the three categories. Gradient-weighted Class Activation Mapping (Grad-CAM) was implemented to generate heatmaps highlighting the regions within the OCT B-scans most influential in classification decisions.
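The patient-stratified splitting described above can be sketched in a few lines. This is a minimal illustration, not the authors' code: the function name, the 70/15/15 ratios, and the random seed are assumptions; the essential property it demonstrates is that every scan from a given patient lands in exactly one split, which is what prevents leakage.

```python
import random

def patient_stratified_split(scan_patient_ids, ratios=(0.70, 0.15, 0.15), seed=42):
    """Split scan indices into train/val/test at the patient level.

    scan_patient_ids: one patient ID per scan (length = number of scans).
    ratios: assumed 70/15/15 split; the abstract does not state the proportions.
    Returns three lists of scan indices with no patient shared across splits.
    """
    patients = sorted(set(scan_patient_ids))
    random.Random(seed).shuffle(patients)          # deterministic shuffle of patients
    n_train = int(ratios[0] * len(patients))
    n_val = int(ratios[1] * len(patients))
    train_p = set(patients[:n_train])
    val_p = set(patients[n_train:n_train + n_val]) # remainder of patients go to test
    train, val, test = [], [], []
    for idx, pid in enumerate(scan_patient_ids):
        if pid in train_p:
            train.append(idx)
        elif pid in val_p:
            val.append(idx)
        else:
            test.append(idx)
    return train, val, test
```

Because assignment happens per patient rather than per scan, a patient who contributed many B-scans cannot appear in both the training and test sets.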

Results: The ResNet18 model demonstrated high accuracy in classifying the OCT B-scans, with a test accuracy of 97.66%. Detailed metrics included a precision of 97.75%, recall of 97.66%, and an F1 score of 97.64%. Grad-CAM heatmaps highlighted the regions that informed the differentiation between DME and UME, localizing clinically relevant features.

Conclusions: This study demonstrated that a ResNet18-based machine learning model can effectively differentiate DME from UME on macular OCT B-scans with high accuracy. The integration of Grad-CAM enhanced interpretability, making it easier to validate the clinical relevance of the model's decision-making process. These findings suggest significant potential for AI-driven diagnostic tools to improve ophthalmic diagnostics and patient care by providing accurate and transparent assessments.


© 2016-2024 SonomaEye. All Rights Reserved
