TY - GEN
T1 - Mixup Your Own Latent
T2 - 27th European Conference on Artificial Intelligence, ECAI 2024
AU - Yang, Eugene
AU - Chen, Hao
AU - Kang, Seokho
N1 - Publisher Copyright:
© 2024 The Authors.
PY - 2024/10/16
Y1 - 2024/10/16
N2 - Self-supervised learning has emerged as a powerful technique in computer vision, demonstrating remarkable performance in various downstream tasks by leveraging unlabeled data. Among these methods, contrastive learning has proven particularly promising by effectively learning image representations. However, its high reliance on large computational resources poses significant practical challenges. To address this issue, there is a pressing need to improve efficiency without compromising generalization performance and robustness. In this paper, we propose Mixup Your Own Latent (MYOL), a regularization method to improve the generalization performance and robustness of Bootstrap Your Own Latent (BYOL), particularly for small images under limited computational resources. MYOL achieves this using the Mixup of the representations of two input images as the target representation of the Mixup of those images. Through experiments conducted in a single GPU environment, we demonstrate that MYOL outperforms BYOL and other regularization methods across various downstream tasks on small-image datasets. The high resilience of MYOL to small batch sizes and its robustness to adversarial attacks further highlight its effectiveness in mitigating the limitations of BYOL. The source code is available at https://github.com/cneyang/MYOL-MixupYourOwnLatent.
AB - Self-supervised learning has emerged as a powerful technique in computer vision, demonstrating remarkable performance in various downstream tasks by leveraging unlabeled data. Among these methods, contrastive learning has proven particularly promising by effectively learning image representations. However, its high reliance on large computational resources poses significant practical challenges. To address this issue, there is a pressing need to improve efficiency without compromising generalization performance and robustness. In this paper, we propose Mixup Your Own Latent (MYOL), a regularization method to improve the generalization performance and robustness of Bootstrap Your Own Latent (BYOL), particularly for small images under limited computational resources. MYOL achieves this using the Mixup of the representations of two input images as the target representation of the Mixup of those images. Through experiments conducted in a single GPU environment, we demonstrate that MYOL outperforms BYOL and other regularization methods across various downstream tasks on small-image datasets. The high resilience of MYOL to small batch sizes and its robustness to adversarial attacks further highlight its effectiveness in mitigating the limitations of BYOL. The source code is available at https://github.com/cneyang/MYOL-MixupYourOwnLatent.
UR - https://www.scopus.com/pages/publications/85216629809
U2 - 10.3233/FAIA240860
DO - 10.3233/FAIA240860
M3 - Conference contribution
AN - SCOPUS:85216629809
T3 - Frontiers in Artificial Intelligence and Applications
SP - 3163
EP - 3170
BT - ECAI 2024 - 27th European Conference on Artificial Intelligence, Including 13th Conference on Prestigious Applications of Intelligent Systems, PAIS 2024, Proceedings
A2 - Endriss, Ulle
A2 - Melo, Francisco S.
A2 - Bach, Kerstin
A2 - Bugarin-Diz, Alberto
A2 - Alonso-Moral, Jose M.
A2 - Barro, Senen
A2 - Heintz, Fredrik
PB - IOS Press BV
Y2 - 19 October 2024 through 24 October 2024
ER -