TY - JOUR
T1 - DOGpred
T2 - A Novel Deep Learning Framework for Accurate Identification of Human O-linked Threonine Glycosylation Sites
AU - Lee, Ki Wook
AU - Pham, Nhat Truong
AU - Min, Hye Jung
AU - Park, Hyun Woo
AU - Lee, Ji Won
AU - Lo, Han En
AU - Kwon, Na Young
AU - Seo, Jimin
AU - Shaginyan, Illia
AU - Cho, Heeje
AU - Wei, Leyi
AU - Manavalan, Balachandran
AU - Jeon, Young Jun
N1 - Publisher Copyright:
© 2025 Elsevier Ltd
PY - 2025/3/15
Y1 - 2025/3/15
N2 - O-linked glycosylation is a crucial post-translational modification that regulates protein function and biological processes. Dysregulation of this process is associated with various diseases, underscoring the need to accurately identify O-linked glycosylation sites on proteins. Current experimental methods for identifying O-linked threonine glycosylation (OTG) sites are often complex and costly. Consequently, developing computational tools that predict these sites based on protein features is crucial. Such tools can complement experimental approaches, enhancing our understanding of the role of OTG dysregulation in diseases and uncovering potential therapeutic targets. In this study, we developed DOGpred, a deep learning-based predictor for precisely identifying human OTG sites using high-latent feature representations. Initially, we extracted nine different conventional feature descriptors (CFDs) and nine pre-trained protein language model (PLM)-based embeddings. Notably, each feature was encoded as a 2D tensor, capturing both the sequential and inherent feature characteristics. Subsequently, we designed a stacked convolutional neural network (CNN) module to learn spatial feature representations from CFDs and a stacked recurrent neural network (RNN) module to learn temporal feature representations from PLM-based embeddings. These features were integrated using attention-based fusion mechanisms to generate high-level feature representations for final classification. Ablation analysis and independent tests demonstrated that the optimal model (DOGpred), employing a stacked 1D CNN module and a stacked attention-based RNN module with cross-attention feature fusion, achieved the best performance on the training dataset and significantly outperformed machine learning-based single-feature models and state-of-the-art methods on independent datasets. Furthermore, DOGpred is publicly available at https://github.com/JeonRPM/DOGpred/ for free access and usage.
AB - O-linked glycosylation is a crucial post-translational modification that regulates protein function and biological processes. Dysregulation of this process is associated with various diseases, underscoring the need to accurately identify O-linked glycosylation sites on proteins. Current experimental methods for identifying O-linked threonine glycosylation (OTG) sites are often complex and costly. Consequently, developing computational tools that predict these sites based on protein features is crucial. Such tools can complement experimental approaches, enhancing our understanding of the role of OTG dysregulation in diseases and uncovering potential therapeutic targets. In this study, we developed DOGpred, a deep learning-based predictor for precisely identifying human OTG sites using high-latent feature representations. Initially, we extracted nine different conventional feature descriptors (CFDs) and nine pre-trained protein language model (PLM)-based embeddings. Notably, each feature was encoded as a 2D tensor, capturing both the sequential and inherent feature characteristics. Subsequently, we designed a stacked convolutional neural network (CNN) module to learn spatial feature representations from CFDs and a stacked recurrent neural network (RNN) module to learn temporal feature representations from PLM-based embeddings. These features were integrated using attention-based fusion mechanisms to generate high-level feature representations for final classification. Ablation analysis and independent tests demonstrated that the optimal model (DOGpred), employing a stacked 1D CNN module and a stacked attention-based RNN module with cross-attention feature fusion, achieved the best performance on the training dataset and significantly outperformed machine learning-based single-feature models and state-of-the-art methods on independent datasets. Furthermore, DOGpred is publicly available at https://github.com/JeonRPM/DOGpred/ for free access and usage.
KW - attention mechanisms
KW - convolutional neural networks
KW - O-linked glycosylation
KW - pre-trained language models
KW - recurrent neural networks
UR - https://www.scopus.com/pages/publications/85217685091
U2 - 10.1016/j.jmb.2025.168977
DO - 10.1016/j.jmb.2025.168977
M3 - Article
C2 - 39900285
AN - SCOPUS:85217685091
SN - 0022-2836
VL - 437
JO - Journal of Molecular Biology
JF - Journal of Molecular Biology
IS - 6
M1 - 168977
ER -