TY - GEN
T1 - User Satisfaction Forecasting in Game-based Educational Technology through Transformer Models
AU - Kang, Choongwon
AU - Kim, Jang Hyun
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
N2 - This paper presents a deep learning approach using transformer-based models to predict user satisfaction in online Educational Technology (EdTech) services. Online educational services have become essential tools as technology advances, providing students with an efficient and concise method for acquiring knowledge. In this study, we analyzed 368,690 metadata records gathered from the Google Play Store over a period of approximately twelve years, from August 21, 2013 to September 3, 2024, focusing on game-based learning EdTech services. We utilized seven different transformer-based models (BERT, XLNet, ALBERT, DistilBERT, DistilRoBERTa, ELECTRA, DeBERTa) to predict user satisfaction. The DeBERTa model achieved the highest performance, with an accuracy of 89.96 and an F1-score of 89.41. Additionally, by combining the DeBERTa model with the sentiment score extracted by VADER (Valence Aware Dictionary and sEntiment Reasoner), we reached an accuracy of 90.12 and an F1-score of 89.49. The other models also achieved strong performance, with accuracies and F1-scores ranging between 88 and 89, demonstrating their suitability for predicting user satisfaction in online EdTech services. This research identifies the best-performing transformer-based model for predicting online EdTech user satisfaction and presents a framework that achieves even better performance through the integration of sentiment scores. This study provides valuable direction for further research in this area.
AB - This paper presents a deep learning approach using transformer-based models to predict user satisfaction in online Educational Technology (EdTech) services. Online educational services have become essential tools as technology advances, providing students with an efficient and concise method for acquiring knowledge. In this study, we analyzed 368,690 metadata records gathered from the Google Play Store over a period of approximately twelve years, from August 21, 2013 to September 3, 2024, focusing on game-based learning EdTech services. We utilized seven different transformer-based models (BERT, XLNet, ALBERT, DistilBERT, DistilRoBERTa, ELECTRA, DeBERTa) to predict user satisfaction. The DeBERTa model achieved the highest performance, with an accuracy of 89.96 and an F1-score of 89.41. Additionally, by combining the DeBERTa model with the sentiment score extracted by VADER (Valence Aware Dictionary and sEntiment Reasoner), we reached an accuracy of 90.12 and an F1-score of 89.49. The other models also achieved strong performance, with accuracies and F1-scores ranging between 88 and 89, demonstrating their suitability for predicting user satisfaction in online EdTech services. This research identifies the best-performing transformer-based model for predicting online EdTech user satisfaction and presents a framework that achieves even better performance through the integration of sentiment scores. This study provides valuable direction for further research in this area.
KW - Deep Learning
KW - EdTech
KW - Natural Language Processing (NLP)
KW - Transformers
KW - User Satisfaction
UR - https://www.scopus.com/pages/publications/85218126612
U2 - 10.1109/IMCOM64595.2025.10857533
DO - 10.1109/IMCOM64595.2025.10857533
M3 - Conference contribution
AN - SCOPUS:85218126612
T3 - Proceedings of the 2025 19th International Conference on Ubiquitous Information Management and Communication, IMCOM 2025
BT - Proceedings of the 2025 19th International Conference on Ubiquitous Information Management and Communication, IMCOM 2025
A2 - Lee, Sukhan
A2 - Choo, Hyunseung
A2 - Ismail, Roslan
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 19th International Conference on Ubiquitous Information Management and Communication, IMCOM 2025
Y2 - 3 January 2025 through 5 January 2025
ER -