TY - GEN
T1 - Representative Item Summarization Prompting for LLM-based Sequential Recommendation
AU - Kim, Han Beul
AU - Na, Cheol Won
AU - Choi, Yun Seok
AU - Lee, Jee Hyong
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - Sequential recommendation aims to predict the next item based on a user's historical behaviors. However, conventional sequential recommendation models are data-specific and require additional training to be applied to different datasets. To address this problem, recent studies have increasingly applied Large Language Models (LLMs), known for their remarkable inference capabilities without additional training, to the sequential recommendation task. Despite their success, there remain challenges in effectively utilizing LLMs for sequential recommendation. Previous LLM-based studies have employed In-Context Learning (ICL) to enable the LLMs to understand the input sequence and the sequential recommendation task, providing generalized recommendations across various datasets without additional training. However, this approach is limited in its ability to capture user preferences from the semantic information of sequences due to a lack of user-specific information. To address this, we propose a prompt design strategy called RISP (Representative Item Summarization Prompting). Specifically, we select representative items by considering their similarity to the user's sequence. Then, we generate user preference information by summarizing these items with the LLMs. Finally, the user preference information is added to the prompt for next-item prediction, allowing the LLMs to make effective recommendations based on user preferences. We conduct experiments on three recommendation datasets and validate the effectiveness of our proposed method.
AB - Sequential recommendation aims to predict the next item based on a user's historical behaviors. However, conventional sequential recommendation models are data-specific and require additional training to be applied to different datasets. To address this problem, recent studies have increasingly applied Large Language Models (LLMs), known for their remarkable inference capabilities without additional training, to the sequential recommendation task. Despite their success, there remain challenges in effectively utilizing LLMs for sequential recommendation. Previous LLM-based studies have employed In-Context Learning (ICL) to enable the LLMs to understand the input sequence and the sequential recommendation task, providing generalized recommendations across various datasets without additional training. However, this approach is limited in its ability to capture user preferences from the semantic information of sequences due to a lack of user-specific information. To address this, we propose a prompt design strategy called RISP (Representative Item Summarization Prompting). Specifically, we select representative items by considering their similarity to the user's sequence. Then, we generate user preference information by summarizing these items with the LLMs. Finally, the user preference information is added to the prompt for next-item prediction, allowing the LLMs to make effective recommendations based on user preferences. We conduct experiments on three recommendation datasets and validate the effectiveness of our proposed method.
KW - In-Context Learning
KW - Large Language Models
KW - Prompting
KW - Sequential Recommendation
UR - https://www.scopus.com/pages/publications/85214666426
U2 - 10.1109/SCISISIS61014.2024.10759967
DO - 10.1109/SCISISIS61014.2024.10759967
M3 - Conference contribution
AN - SCOPUS:85214666426
T3 - 2024 Joint 13th International Conference on Soft Computing and Intelligent Systems and 25th International Symposium on Advanced Intelligent Systems, SCIS and ISIS 2024
BT - 2024 Joint 13th International Conference on Soft Computing and Intelligent Systems and 25th International Symposium on Advanced Intelligent Systems, SCIS and ISIS 2024
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - Joint 13th International Conference on Soft Computing and Intelligent Systems and 25th International Symposium on Advanced Intelligent Systems, SCIS and ISIS 2024
Y2 - 9 November 2024 through 12 November 2024
ER -