TY - GEN
T1 - CAMEL
T2 - 2025 IEEE/CVF Winter Conference on Applications of Computer Vision, WACV 2025
AU - Jung, Juho
AU - Yang, Migyeong
AU - Won, Hyunseon
AU - Kim, Jiwon
AU - Han, Jeong Mo
AU - Hwang, Joon Seo
AU - Hwang, Daniel Duck Jin
AU - Han, Jinyoung
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
N2 - Precise retina Optical Coherence Tomography (OCT) image classification and segmentation are important for diagnosing various retinal diseases and identifying specific regions. Alongside comprehensive lesion identification, reducing the predictive uncertainty of models is crucial for improving reliability in clinical retinal practice. However, existing methods have primarily focused on a limited set of regions identified in OCT images and have often faced challenges due to aleatoric and epistemic uncertainty. To address these issues, we propose CAMEL (Confidence-Aware Multi-task Ensemble Learning), a novel framework designed to reduce task-specific uncertainty in multi-task learning. CAMEL achieves this by estimating model confidence at both pixel and image levels and leveraging confidence-aware ensemble learning to minimize the uncertainty inherent in single-model predictions. CAMEL demonstrates state-of-the-art performance on a comprehensive retinal OCT image dataset containing annotations for nine distinct retinal regions and nine retinal diseases. Furthermore, extensive experiments highlight the clinical utility of CAMEL, especially in scenarios with minimal regions, significant class imbalances, and diverse regions and diseases. Our code is publicly available at: https://github.com/DSAIL-SKKU/CAMEL.
AB - Precise retina Optical Coherence Tomography (OCT) image classification and segmentation are important for diagnosing various retinal diseases and identifying specific regions. Alongside comprehensive lesion identification, reducing the predictive uncertainty of models is crucial for improving reliability in clinical retinal practice. However, existing methods have primarily focused on a limited set of regions identified in OCT images and have often faced challenges due to aleatoric and epistemic uncertainty. To address these issues, we propose CAMEL (Confidence-Aware Multi-task Ensemble Learning), a novel framework designed to reduce task-specific uncertainty in multi-task learning. CAMEL achieves this by estimating model confidence at both pixel and image levels and leveraging confidence-aware ensemble learning to minimize the uncertainty inherent in single-model predictions. CAMEL demonstrates state-of-the-art performance on a comprehensive retinal OCT image dataset containing annotations for nine distinct retinal regions and nine retinal diseases. Furthermore, extensive experiments highlight the clinical utility of CAMEL, especially in scenarios with minimal regions, significant class imbalances, and diverse regions and diseases. Our code is publicly available at: https://github.com/DSAIL-SKKU/CAMEL.
UR - https://www.scopus.com/pages/publications/105003632928
U2 - 10.1109/WACV61041.2025.00867
DO - 10.1109/WACV61041.2025.00867
M3 - Conference contribution
AN - SCOPUS:105003632928
T3 - Proceedings - 2025 IEEE Winter Conference on Applications of Computer Vision, WACV 2025
SP - 8947
EP - 8957
BT - Proceedings - 2025 IEEE Winter Conference on Applications of Computer Vision, WACV 2025
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 28 February 2025 through 4 March 2025
ER -