TY - JOUR
T1 - DUET
T2 - Dually guided knowledge distillation from explicit feedback
AU - Bae, Hong Kyun
AU - Kim, Jiyeon
AU - Lee, Jongwuk
AU - Kim, Sang Wook
N1 - Publisher Copyright:
© 2025
PY - 2025/8
Y1 - 2025/8
N2 - Various knowledge distillation (KD) methods for recommender systems have recently been introduced to achieve two goals: (i) an inference time shorter than that of the cumbersome model (i.e., the teacher) and (ii) accuracy higher than that of the compact model (i.e., the student). Despite their success, these methods focus solely on KD with implicit feedback. We argue that handling collaborative filtering (CF) with explicit feedback, which represents different degrees of user preference, is also crucial. Towards this goal, we propose a novel KD framework for recommender systems, namely Dually gUided knowlEdge disTillation (DUET). We first observe that explicit feedback can be interpreted as two types of user preferences, i.e., pre-use preference and post-use preference. Motivated by this characteristic of explicit feedback, we aim to fuse knowledge from the teacher's pre- and post-use preferences by employing two teachers (i.e., teacher #1 and teacher #2). Teacher #1, trained with pre-use preferences, selects items among the unrated ones. Teacher #2, trained with post-use preferences, determines the soft labels (i.e., predicted post-use preferences) of the items chosen by teacher #1. Finally, the student is trained with both the hard labels (i.e., observed post-use preferences) of rated items and the soft labels (i.e., post-use preferences predicted by teacher #2) of the items selected by teacher #1. Extensive experimental results show that our DUET framework consistently outperforms state-of-the-art KD methods on three benchmark datasets. Notably, it outperforms RD, CD, DE-RRD, BD, and TD by up to 13.6%, 18.6%, 16.8%, 9.6%, and 18.6% in terms of NDCG@10, respectively.
KW - Collaborative filtering
KW - Explicit feedback
KW - Knowledge distillation
KW - Model compression
KW - Top-N recommendation
UR - https://www.scopus.com/pages/publications/105000064522
DO - 10.1016/j.inffus.2025.103098
M3 - Article
AN - SCOPUS:105000064522
SN - 1566-2535
VL - 120
JO - Information Fusion
JF - Information Fusion
M1 - 103098
ER -