TY - GEN
T1 - Edge Deployment of Vision-Based Model for Human Following Robot
AU - Manzoor, Sumaira
AU - Kim, Eun Jin
AU - Bae, Sang Hyeon
AU - Kuc, Tae Yong
N1 - Publisher Copyright:
© 2023 ICROS.
PY - 2023
Y1 - 2023
N2 - Mobile robots are proliferating at a significant pace, and continuous interaction between humans and robots opens the door to facilitating our daily life activities. Following a target person with a robot is an important human-robot interaction (HRI) task with applications in industrial, domestic, and medical assistant robots. To implement such robotic tasks, traditional solutions rely on cloud servers, which cause significant communication overhead due to data offloading. In our work, we overcome this potential issue of cloud-based solutions by implementing the task of a human-following robot (HFR) on the Nvidia Jetson Xavier NX edge platform. To perform the HFR task, typical approaches track the target person only from behind, whereas our work allows the robot to track the person from behind, front, and side views (left and right). In this article, we combine the latest advances in deep learning and metric learning by presenting two trackers: a Single Person Head Detection-based Tracking (SPHDT) model and a Single Person Full-Body Detection-based Tracking (SPBDT) model. For both models, we leverage a deep learning-based single object detector, MobileNetSSD, with a metric learning-based re-identification model, DaSiamRPN. We perform a qualitative analysis considering six major environmental factors: pose change, illumination variations, partial occlusion, full occlusion, wall corner, and different viewing angles. Based on the better performance of SPBDT compared to SPHDT in the experimental results, we select the SPBDT model for the robot to track the target. We also use this vision model to provide the relative position, location, distance, and angle of the target person to control the robot's movement for performing the human-following task.
AB - Mobile robots are proliferating at a significant pace, and continuous interaction between humans and robots opens the door to facilitating our daily life activities. Following a target person with a robot is an important human-robot interaction (HRI) task with applications in industrial, domestic, and medical assistant robots. To implement such robotic tasks, traditional solutions rely on cloud servers, which cause significant communication overhead due to data offloading. In our work, we overcome this potential issue of cloud-based solutions by implementing the task of a human-following robot (HFR) on the Nvidia Jetson Xavier NX edge platform. To perform the HFR task, typical approaches track the target person only from behind, whereas our work allows the robot to track the person from behind, front, and side views (left and right). In this article, we combine the latest advances in deep learning and metric learning by presenting two trackers: a Single Person Head Detection-based Tracking (SPHDT) model and a Single Person Full-Body Detection-based Tracking (SPBDT) model. For both models, we leverage a deep learning-based single object detector, MobileNetSSD, with a metric learning-based re-identification model, DaSiamRPN. We perform a qualitative analysis considering six major environmental factors: pose change, illumination variations, partial occlusion, full occlusion, wall corner, and different viewing angles. Based on the better performance of SPBDT compared to SPHDT in the experimental results, we select the SPBDT model for the robot to track the target. We also use this vision model to provide the relative position, location, distance, and angle of the target person to control the robot's movement for performing the human-following task.
KW - deep learning
KW - human tracking
KW - Object recognition
KW - Person following robot
KW - single object tracking
UR - https://www.scopus.com/pages/publications/85179183284
U2 - 10.23919/ICCAS59377.2023.10316989
DO - 10.23919/ICCAS59377.2023.10316989
M3 - Conference contribution
AN - SCOPUS:85179183284
T3 - International Conference on Control, Automation and Systems
SP - 1721
EP - 1726
BT - 23rd International Conference on Control, Automation and Systems, ICCAS 2023
PB - IEEE Computer Society
T2 - 23rd International Conference on Control, Automation and Systems, ICCAS 2023
Y2 - 17 October 2023 through 20 October 2023
ER -