TY - JOUR
T1 - Reinforcement learning-based dynamic routing using mobile sink for data collection in WSNs and IoT applications
AU - Krishnan, Muralitharan
AU - Lim, Yongdo
N1 - Publisher Copyright:
© 2021 Elsevier Ltd
PY - 2021/11/15
Y1 - 2021/11/15
N2 - Energy is one of the most critical resources for sensor devices, as it determines the network lifetime of wireless sensor networks. In many circumstances, sensor devices consume considerable energy for data transmission, reception, and forwarding operations. The major challenge is to increase the network lifetime while reducing deployment and operational costs. Many existing methods rely on a static sink with multi-hop routing, but most of them suffer from energy-hole issues and inefficient data collection due to the early death of sensor nodes. Moreover, most existing learning methods require massive data and feature engineering, which increases learning complexity. To avoid these issues, a robust reinforcement learning-based mobile sink model is proposed for dynamic routing with efficient data collection. In addition, a Q-Learning approach is implemented to learn the shortest route automatically. Combining these strategies preserves network stability and improves both routing performance and reward. Simulation results reveal that the proposed reinforcement learning-based mobile sink model extends the network lifetime, improves learning time with higher reward, and achieves high efficiency compared with existing methods.
AB - Energy is one of the most critical resources for sensor devices, as it determines the network lifetime of wireless sensor networks. In many circumstances, sensor devices consume considerable energy for data transmission, reception, and forwarding operations. The major challenge is to increase the network lifetime while reducing deployment and operational costs. Many existing methods rely on a static sink with multi-hop routing, but most of them suffer from energy-hole issues and inefficient data collection due to the early death of sensor nodes. Moreover, most existing learning methods require massive data and feature engineering, which increases learning complexity. To avoid these issues, a robust reinforcement learning-based mobile sink model is proposed for dynamic routing with efficient data collection. In addition, a Q-Learning approach is implemented to learn the shortest route automatically. Combining these strategies preserves network stability and improves both routing performance and reward. Simulation results reveal that the proposed reinforcement learning-based mobile sink model extends the network lifetime, improves learning time with higher reward, and achieves high efficiency compared with existing methods.
KW - Clustering
KW - Internet of Things
KW - Mobile sink
KW - Q-Learning
KW - Reinforcement learning
KW - Routing
KW - Wireless sensor networks
UR - https://www.scopus.com/pages/publications/85115980337
U2 - 10.1016/j.jnca.2021.103223
DO - 10.1016/j.jnca.2021.103223
M3 - Article
AN - SCOPUS:85115980337
SN - 1084-8045
VL - 194
JO - Journal of Network and Computer Applications
JF - Journal of Network and Computer Applications
M1 - 103223
ER -