TY - GEN
T1 - Comparison of KoBERT and BERT for Emotion Classification of Healthcare Text Data
AU - Gu, Mose
AU - Jeong, Jaehoon Paul
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - The rapid progress of digital technology has led to a substantial increase in the popularity of digital health. Identifying depression, a prevalent mental illness, is crucial in digital healthcare to prevent further harm and provide timely support. This study proposes an AI model that automates the identification of patients with depression. By leveraging Natural Language Processing (NLP) and pre-trained language models such as BERT, we aim to classify emotions into six categories. Training the model requires a Korean emotional conversation corpus, which we obtain through crowd-sourcing and AI-Hub's user case studies. To extend the applicability to English-speaking countries, we translate the Korean corpus using the Google Translation API and fine-tune the BERT model on the translated English data. The feasibility of the English model is evaluated by comparing the performance of KoBERT and BERT in emotion understanding. The findings offer valuable insights into the efficacy of these models and contribute to the field of emotion classification.
AB - The rapid progress of digital technology has led to a substantial increase in the popularity of digital health. Identifying depression, a prevalent mental illness, is crucial in digital healthcare to prevent further harm and provide timely support. This study proposes an AI model that automates the identification of patients with depression. By leveraging Natural Language Processing (NLP) and pre-trained language models such as BERT, we aim to classify emotions into six categories. Training the model requires a Korean emotional conversation corpus, which we obtain through crowd-sourcing and AI-Hub's user case studies. To extend the applicability to English-speaking countries, we translate the Korean corpus using the Google Translation API and fine-tune the BERT model on the translated English data. The feasibility of the English model is evaluated by comparing the performance of KoBERT and BERT in emotion understanding. The findings offer valuable insights into the efficacy of these models and contribute to the field of emotion classification.
KW - AI
KW - Deep Learning
KW - Digital Health
KW - NLP
UR - https://www.scopus.com/pages/publications/85184604964
U2 - 10.1109/ICTC58733.2023.10393750
DO - 10.1109/ICTC58733.2023.10393750
M3 - Conference contribution
AN - SCOPUS:85184604964
T3 - International Conference on ICT Convergence
SP - 1771
EP - 1775
BT - ICTC 2023 - 14th International Conference on Information and Communication Technology Convergence
PB - IEEE Computer Society
T2 - 14th International Conference on Information and Communication Technology Convergence, ICTC 2023
Y2 - 11 October 2023 through 13 October 2023
ER -