TY - GEN
T1 - Relation-Aware Label Smoothing for Self-KD
AU - Kim, Jeongho
AU - Woo, Simon S.
N1 - Publisher Copyright:
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024.
PY - 2024
Y1 - 2024
N2 - Knowledge distillation (KD) is widely used to improve a model's performance by transferring a larger teacher's knowledge to a smaller student model. However, KD has the disadvantage of requiring a pre-trained teacher model, which can lead to training inefficiency. Therefore, self-knowledge distillation, which enhances the student by itself, has been proposed. Although self-knowledge distillation achieves remarkable performance improvements with fewer resources than conventional teacher-student-based KD approaches, existing self-KD methods still require additional time and memory for training. We propose Relation-Aware Label Smoothing for Self-Knowledge Distillation (RAS-KD), which regularizes the student model itself by utilizing the inter-class relationships between class representative vectors with a light-weight auxiliary classifier. Compared to existing self-KD methods that consider only instance-level knowledge, we show that the proposed global-level knowledge is sufficient to achieve competitive performance at an extremely low training cost. In addition, we achieve a further performance improvement through instance-level supervision. We demonstrate that RAS-KD outperforms existing self-KD approaches on various tasks with negligible additional cost.
AB - Knowledge distillation (KD) is widely used to improve a model's performance by transferring a larger teacher's knowledge to a smaller student model. However, KD has the disadvantage of requiring a pre-trained teacher model, which can lead to training inefficiency. Therefore, self-knowledge distillation, which enhances the student by itself, has been proposed. Although self-knowledge distillation achieves remarkable performance improvements with fewer resources than conventional teacher-student-based KD approaches, existing self-KD methods still require additional time and memory for training. We propose Relation-Aware Label Smoothing for Self-Knowledge Distillation (RAS-KD), which regularizes the student model itself by utilizing the inter-class relationships between class representative vectors with a light-weight auxiliary classifier. Compared to existing self-KD methods that consider only instance-level knowledge, we show that the proposed global-level knowledge is sufficient to achieve competitive performance at an extremely low training cost. In addition, we achieve a further performance improvement through instance-level supervision. We demonstrate that RAS-KD outperforms existing self-KD approaches on various tasks with negligible additional cost.
UR - https://www.scopus.com/pages/publications/85192791617
U2 - 10.1007/978-981-97-2253-2_16
DO - 10.1007/978-981-97-2253-2_16
M3 - Conference contribution
AN - SCOPUS:85192791617
SN - 9789819722525
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 197
EP - 209
BT - Advances in Knowledge Discovery and Data Mining - 28th Pacific-Asia Conference on Knowledge Discovery and Data Mining, PAKDD 2024, Proceedings
A2 - Yang, De-Nian
A2 - Xie, Xing
A2 - Tseng, Vincent S.
A2 - Pei, Jian
A2 - Huang, Jen-Wei
A2 - Lin, Jerry Chun-Wei
PB - Springer Science and Business Media Deutschland GmbH
T2 - 28th Pacific-Asia Conference on Knowledge Discovery and Data Mining, PAKDD 2024
Y2 - 7 May 2024 through 10 May 2024
ER -