TY - GEN
T1 - SUPRDAD
T2 - 20th IEEE International Conference on Machine Learning and Applications, ICMLA 2021
AU - Lee, Joonho
AU - Song, Su Jeong
N1 - Publisher Copyright:
© 2021 IEEE.
PY - 2021
Y1 - 2021
N2 - Leveraging large-scale data-driven deep learning, many researchers have attempted to automatically screen retinal abnormalities from fundus images. Most focus on a single disease with readily accessible clinical cases. In real clinical environments, however, patients can suffer from various retinal diseases with low prevalence and co-occurrence, and the resulting data distribution shift makes the classification task more difficult. To tackle these issues, we propose a novel framework that boosts representation learning of the feature extractor using additional unlabeled fundus images, which enables effective fine-tuning of disease-specific classifiers. The feature extractor is trained with extended single-label supervision and generalized to unseen features via self-supervised semi-supervised learning under a multi-task training scheme. We then adapt the feature representation to a more robust space using domain-adaptive distillation. Experiments are conducted on three carefully prepared test datasets; on all metrics, each of the five fine-tuned disease classifiers demonstrates performance superior to the corresponding one-versus-rest supervised learning baseline, in particular by 10.4 percent in AUPR. The proposed method substantially improves classification performance for low-prevalence retinal diseases and can potentially be extended to other diseases.
AB - Leveraging large-scale data-driven deep learning, many researchers have attempted to automatically screen retinal abnormalities from fundus images. Most focus on a single disease with readily accessible clinical cases. In real clinical environments, however, patients can suffer from various retinal diseases with low prevalence and co-occurrence, and the resulting data distribution shift makes the classification task more difficult. To tackle these issues, we propose a novel framework that boosts representation learning of the feature extractor using additional unlabeled fundus images, which enables effective fine-tuning of disease-specific classifiers. The feature extractor is trained with extended single-label supervision and generalized to unseen features via self-supervised semi-supervised learning under a multi-task training scheme. We then adapt the feature representation to a more robust space using domain-adaptive distillation. Experiments are conducted on three carefully prepared test datasets; on all metrics, each of the five fine-tuned disease classifiers demonstrates performance superior to the corresponding one-versus-rest supervised learning baseline, in particular by 10.4 percent in AUPR. The proposed method substantially improves classification performance for low-prevalence retinal diseases and can potentially be extended to other diseases.
KW - Domain generalization
KW - Multi-label classification
KW - Representation learning
KW - Retinal fundus image analysis
UR - https://www.scopus.com/pages/publications/85125842225
U2 - 10.1109/ICMLA52953.2021.00089
DO - 10.1109/ICMLA52953.2021.00089
M3 - Conference contribution
AN - SCOPUS:85125842225
T3 - Proceedings - 20th IEEE International Conference on Machine Learning and Applications, ICMLA 2021
SP - 534
EP - 540
BT - Proceedings - 20th IEEE International Conference on Machine Learning and Applications, ICMLA 2021
A2 - Wani, M. Arif
A2 - Sethi, Ishwar K.
A2 - Shi, Weisong
A2 - Qu, Guangzhi
A2 - Raicu, Daniela Stan
A2 - Jin, Ruoming
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 13 December 2021 through 16 December 2021
ER -