Deep Learning Using Preoperative AS-OCT Predicts Graft Detachment in DMEK

Airaldi M.; Semeraro F.; Romano V.
2023-01-01

Abstract

Purpose: To evaluate a novel deep learning algorithm to distinguish between eyes that may or may not have a graft detachment based on pre–Descemet membrane endothelial keratoplasty (DMEK) anterior segment optical coherence tomography (AS-OCT) images. Methods: Retrospective cohort study. A multiple-instance learning artificial intelligence (MIL-AI) model using a ResNet-101 backbone was designed. AS-OCT images were split into training and testing sets. The MIL-AI model was trained and validated on the training set. Model performance and heatmaps were calculated from the testing set. Classification performance metrics included F1 score (harmonic mean of recall and precision), specificity, sensitivity, and area under the curve (AUC). Finally, MIL-AI performance was compared to manual classification by an experienced ophthalmologist. Results: In total, 9466 images of 74 eyes (128 images per eye) were included in the study. Images from 50 eyes were used to train and validate the MIL-AI system, while the remaining 24 eyes were used as the test set to determine its performance and generate heatmaps for visualization. The performance metrics on the test set (95% confidence interval) were as follows: F1 score, 0.77 (0.57–0.91); precision, 0.67 (0.44–0.88); specificity, 0.45 (0.15–0.75); sensitivity, 0.92 (0.73–1.00); and AUC, 0.63 (0.52–0.86). MIL-AI performance was more sensitive (92% vs. 31%) but less specific (45% vs. 64%) than the ophthalmologist's performance. Conclusions: The MIL-AI predicts with high sensitivity the eyes that may have post-DMEK graft detachment requiring rebubbling. Larger-scale clinical trials are warranted to validate the model. Translational Relevance: MIL-AI models represent an opportunity for implementation in routine DMEK suitability screening.
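The two quantitative ideas in the abstract — aggregating many per-image scores into one eye-level prediction, and evaluating that prediction with sensitivity, specificity, precision, and F1 — can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: in particular, max-pooling of per-image scores is a common MIL aggregation choice assumed here (the study's actual pooling may differ), and the labels below are toy data, not the study's results.

```python
# Hedged sketch of the MIL evaluation pipeline described in the abstract.
# NOT the authors' code; max-pooling aggregation is an assumption, and the
# labels at the bottom are invented toy data for demonstration only.

def eye_level_prediction(image_scores, threshold=0.5):
    """Aggregate per-image detachment probabilities (one per AS-OCT slice)
    into a single binary eye-level prediction via max pooling."""
    return 1 if max(image_scores) >= threshold else 0

def classification_metrics(y_true, y_pred):
    """Compute sensitivity, specificity, precision, and F1 from binary
    eye-level labels (1 = graft detachment, 0 = no detachment)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0  # recall
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    # F1 is the harmonic mean of precision and recall, as in the abstract.
    f1 = (2 * precision * sensitivity / (precision + sensitivity)
          if (precision + sensitivity) else 0.0)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "precision": precision, "f1": f1}

# Toy example (not the study data): one eye with three slice scores,
# then metrics over six hypothetical eyes.
pred_eye = eye_level_prediction([0.1, 0.2, 0.8])
metrics = classification_metrics([1, 1, 1, 0, 0, 0], [1, 1, 0, 0, 1, 1])
```

The design choice to flag an eye if *any* slice looks suspicious (max pooling) matches the high-sensitivity, lower-specificity profile reported for the model, but the paper itself should be consulted for the exact aggregation used.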
File in this record: Deep Learning Using Preoperative AS-OCT Predicts Graft Detachment in DMEK (TVST 2023).pdf — Adobe PDF, 1.22 MB; access restricted to authorized users (license: DRM not defined).

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11379/578807

Citations
  • PMC: 0
  • Scopus: 7
  • Web of Science: 5