FGR-Net: interpretable fundus image gradeability classification based on deep reconstruction learning
The performance of diagnostic Computer-Aided Design (CAD) systems for retinal diseases depends on the quality of the retinal images being screened. Thus, many studies have been developed to evaluate and assess the quality of such retinal images. However, most of them did not investigate the relationship between the accuracy of the developed models and the quality of the visualization of interpretability methods for distinguishing between gradable and non-gradable retinal images. Consequently, this paper presents a novel framework called "FGR-Net" to automatically assess and interpret underlying fundus image quality by merging an autoencoder network with a classifier network. The FGR-Net model also provides an interpretable quality assessment through visualizations. In particular, FGR-Net uses a deep autoencoder to reconstruct the input image in order to extract the visual characteristics of the input fundus images based on self-supervised learning. The features extracted by the autoencoder are then fed into a deep classifier network to distinguish between gradable and ungradable fundus images. FGR-Net is evaluated with different interpretability methods, which indicates that the autoencoder is a key factor in forcing the classifier to focus on the relevant structures of the fundus images, such as the fovea, optic disk, and prominent blood vessels. Additionally, the interpretability methods can provide visual feedback for ophthalmologists to understand how our model evaluates the quality of fundus images. The experimental results showed the superiority of FGR-Net over state-of-the-art quality assessment methods, with an accuracy of > 89% and an F1-score of > 87%. The code is publicly available at https://github.com/saifalkh/FGR-Net.
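The abstract describes a two-stage design: a deep autoencoder is trained to reconstruct the fundus image, and its encoder features are then passed to a classifier that separates gradable from ungradable images. The PyTorch sketch below only illustrates that general idea; the module names (`FundusAutoencoder`, `GradabilityClassifier`), layer sizes, and loss combination are assumptions, not the authors' implementation, which is published at https://github.com/saifalkh/FGR-Net.

```python
# Illustrative sketch only: a reconstruction autoencoder whose encoder
# features also feed a gradable/ungradable classifier head.
# All names and hyperparameters are assumptions, not the published FGR-Net code.
import torch
import torch.nn as nn

class FundusAutoencoder(nn.Module):
    """Convolutional autoencoder that reconstructs the input fundus image."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),   # 512 -> 256
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # 256 -> 128
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(), # 128 -> 64
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)          # latent feature maps
        return self.decoder(z), z    # reconstruction + features


class GradabilityClassifier(nn.Module):
    """Classifier head fed with the encoder features (gradable vs. ungradable)."""
    def __init__(self, in_channels=128, num_classes=2):
        super().__init__()
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(in_channels, num_classes),
        )

    def forward(self, features):
        return self.head(features)


# Joint objective: reconstruction loss (self-supervised) + classification loss.
autoencoder, classifier = FundusAutoencoder(), GradabilityClassifier()
images = torch.rand(4, 3, 512, 512)          # dummy batch of fundus images
labels = torch.randint(0, 2, (4,))           # 1 = gradable, 0 = ungradable
recon, feats = autoencoder(images)
logits = classifier(feats)
loss = nn.functional.mse_loss(recon, images) + nn.functional.cross_entropy(logits, labels)
loss.backward()
```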
Main authors: | Khalid, Saif; Rashwan, Hatem A.; Abdulwahab, Saddam; Abdel-Nasser, Mohamed; Quiroga, Facundo Manuel; Puig, Domenec |
---|---|
Format: | Article |
Language: | English |
Published: | 2023 |
Subjects: | Ciencias Informáticas; Retinal image; Quality assessment; Autoencoder network; Ocular diseases; Deep learning; Interpretability; Explainability; Gradability |
Online access: | http://sedici.unlp.edu.ar/handle/10915/160108 |
Contributed by: | SEDICI (UNLP) |
id |
I19-R120-10915-160108 |
record_format |
dspace |
spelling |
I19-R120-10915-160108 2023-11-15T04:07:08Z http://sedici.unlp.edu.ar/handle/10915/160108 FGR-Net: interpretable fundus image gradeability classification based on deep reconstruction learning Khalid, Saif; Rashwan, Hatem A.; Abdulwahab, Saddam; Abdel-Nasser, Mohamed; Quiroga, Facundo Manuel; Puig, Domenec 2023 2023-11-14T12:13:38Z en Ciencias Informáticas; Retinal image; Quality assessment; Autoencoder network; Ocular diseases; Deep learning; Interpretability; Explainability; Gradability The performance of diagnostic Computer-Aided Design (CAD) systems for retinal diseases depends on the quality of the retinal images being screened. Thus, many studies have been developed to evaluate and assess the quality of such retinal images. However, most of them did not investigate the relationship between the accuracy of the developed models and the quality of the visualization of interpretability methods for distinguishing between gradable and non-gradable retinal images. Consequently, this paper presents a novel framework called "FGR-Net" to automatically assess and interpret underlying fundus image quality by merging an autoencoder network with a classifier network. The FGR-Net model also provides an interpretable quality assessment through visualizations. In particular, FGR-Net uses a deep autoencoder to reconstruct the input image in order to extract the visual characteristics of the input fundus images based on self-supervised learning. The features extracted by the autoencoder are then fed into a deep classifier network to distinguish between gradable and ungradable fundus images. FGR-Net is evaluated with different interpretability methods, which indicates that the autoencoder is a key factor in forcing the classifier to focus on the relevant structures of the fundus images, such as the fovea, optic disk, and prominent blood vessels. Additionally, the interpretability methods can provide visual feedback for ophthalmologists to understand how our model evaluates the quality of fundus images. The experimental results showed the superiority of FGR-Net over state-of-the-art quality assessment methods, with an accuracy of > 89% and an F1-score of > 87%. The code is publicly available at https://github.com/saifalkh/FGR-Net. Instituto de Investigación en Informática Article http://creativecommons.org/licenses/by-nc-nd/4.0/ Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) application/pdf |
institution |
Universidad Nacional de La Plata |
institution_str |
I-19 |
repository_str |
R-120 |
collection |
SEDICI (UNLP) |
language |
English |
topic |
Ciencias Informáticas; Retinal image; Quality assessment; Autoencoder network; Ocular diseases; Deep learning; Interpretability; Explainability; Gradability |
spellingShingle |
Ciencias Informáticas; Retinal image; Quality assessment; Autoencoder network; Ocular diseases; Deep learning; Interpretability; Explainability; Gradability Khalid, Saif; Rashwan, Hatem A.; Abdulwahab, Saddam; Abdel-Nasser, Mohamed; Quiroga, Facundo Manuel; Puig, Domenec FGR-Net: interpretable fundus image gradeability classification based on deep reconstruction learning
topic_facet |
Ciencias Informáticas; Retinal image; Quality assessment; Autoencoder network; Ocular diseases; Deep learning; Interpretability; Explainability; Gradability |
description |
The performance of diagnostic Computer-Aided Design (CAD) systems for retinal diseases depends on the quality of the retinal images being screened. Thus, many studies have been developed to evaluate and assess the quality of such retinal images. However, most of them did not investigate the relationship between the accuracy of the developed models and the quality of the visualization of interpretability methods for distinguishing between gradable and non-gradable retinal images. Consequently, this paper presents a novel framework called "FGR-Net" to automatically assess and interpret underlying fundus image quality by merging an autoencoder network with a classifier network. The FGR-Net model also provides an interpretable quality assessment through visualizations. In particular, FGR-Net uses a deep autoencoder to reconstruct the input image in order to extract the visual characteristics of the input fundus images based on self-supervised learning. The features extracted by the autoencoder are then fed into a deep classifier network to distinguish between gradable and ungradable fundus images. FGR-Net is evaluated with different interpretability methods, which indicates that the autoencoder is a key factor in forcing the classifier to focus on the relevant structures of the fundus images, such as the fovea, optic disk, and prominent blood vessels. Additionally, the interpretability methods can provide visual feedback for ophthalmologists to understand how our model evaluates the quality of fundus images. The experimental results showed the superiority of FGR-Net over state-of-the-art quality assessment methods, with an accuracy of > 89% and an F1-score of > 87%. The code is publicly available at https://github.com/saifalkh/FGR-Net. |
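The description above also notes that FGR-Net is inspected with interpretability methods so that ophthalmologists can see which retinal structures (fovea, optic disk, blood vessels) drive the gradability decision. Below is a minimal vanilla-gradient saliency sketch, reusing the illustrative `autoencoder` and `classifier` objects from the earlier snippet; it is only an assumed stand-in for the interpretability methods actually compared in the paper.

```python
# Minimal vanilla-gradient saliency sketch (not the interpretability methods
# compared in the paper); reuses the illustrative modules defined above.
import torch

def gradability_saliency(autoencoder, classifier, image):
    """Return a per-pixel saliency map for the predicted gradability class."""
    image = image.detach().clone().requires_grad_(True)  # track input gradients
    _, feats = autoencoder(image.unsqueeze(0))            # encoder features
    logits = classifier(feats)
    score = logits[0, logits[0].argmax()]                 # score of predicted class
    score.backward()
    # Aggregate channel gradients into a single heat map.
    return image.grad.abs().max(dim=0).values

saliency = gradability_saliency(autoencoder, classifier, torch.rand(3, 512, 512))
print(saliency.shape)  # torch.Size([512, 512])
```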
format |
Article |
author |
Khalid, Saif; Rashwan, Hatem A.; Abdulwahab, Saddam; Abdel-Nasser, Mohamed; Quiroga, Facundo Manuel; Puig, Domenec |
author_facet |
Khalid, Saif; Rashwan, Hatem A.; Abdulwahab, Saddam; Abdel-Nasser, Mohamed; Quiroga, Facundo Manuel; Puig, Domenec |
author_sort |
Khalid, Saif |
title |
FGR-Net: interpretable fundus image gradeability classification based on deep reconstruction learning |
title_short |
FGR-Net: interpretable fundus image gradeability classification based on deep reconstruction learning |
title_full |
FGR-Net: interpretable fundus image gradeability classification based on deep reconstruction learning |
title_fullStr |
FGR-Net: interpretable fundus image gradeability classification based on deep reconstruction learning |
title_full_unstemmed |
FGR-Net: interpretable fundus image gradeability classification based on deep reconstruction learning |
title_sort |
fgr-net: interpretable fundus image gradeability classification based on deep reconstruction learning |
publishDate |
2023 |
url |
http://sedici.unlp.edu.ar/handle/10915/160108 |
work_keys_str_mv |
AT khalidsaif fgrnetinterpretablefundusimagegradeabilityclassificationbasedondeepreconstructionlearning AT rashwanhatema fgrnetinterpretablefundusimagegradeabilityclassificationbasedondeepreconstructionlearning AT abdulwahabsaddam fgrnetinterpretablefundusimagegradeabilityclassificationbasedondeepreconstructionlearning AT abdelnassermohamed fgrnetinterpretablefundusimagegradeabilityclassificationbasedondeepreconstructionlearning AT quirogafacundomanuel fgrnetinterpretablefundusimagegradeabilityclassificationbasedondeepreconstructionlearning AT puigdomenec fgrnetinterpretablefundusimagegradeabilityclassificationbasedondeepreconstructionlearning |
_version_ |
1807221830712295424 |