Edge Deployment of Vision-Based Model for Human Following Robot

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

4 Scopus citations

Abstract

Mobile robots are proliferating at a significant pace, and continuous interaction between humans and robots opens the door to facilitating our daily activities. Following a target person with a robot is an important human-robot interaction (HRI) task with applications in industrial, domestic, and medical assistant robots. Traditional solutions implement robotic tasks on cloud servers, which incurs significant communication overhead due to data offloading. In our work, we overcome this issue of cloud-based solutions by implementing the human-following robot (HFR) task on the Nvidia Jetson Xavier NX edge platform. Typical approaches to the HFR task track the target person only from behind; in contrast, our work allows the robot to track the person from behind, from the front, and from the sides (left and right). In this article, we combine the latest advances in deep learning and metric learning by presenting two trackers: a Single Person Head Detection-based Tracking (SPHDT) model and a Single Person full-Body Detection-based Tracking (SPBDT) model. For both models, we combine a deep learning-based single object detector, MobileNetSSD, with a metric learning-based re-identification model, DaSiamRPN. We perform a qualitative analysis considering six major environmental factors: pose change, illumination variations, partial occlusion, full occlusion, wall corners, and different viewing angles. Based on the superior performance of SPBDT over SPHDT in the experimental results, we select the SPBDT model for the robot to track the target. We also use this vision model to provide the relative position, location, distance, and angle of the target person, which control the robot's movement for the human-following task.
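The abstract notes that the vision model supplies the target's relative angle and distance to drive the robot's motion. As a rough, hypothetical sketch of how such quantities can be derived from a tracker's bounding box (the camera field of view, frame size, person height, and function name below are illustrative assumptions, not values from the paper), a simple pinhole-camera approximation suffices:

```python
import math

def bbox_to_control(bbox, frame_w=640, frame_h=480,
                    hfov_deg=62.0, person_height_m=1.7):
    """Estimate the target's bearing angle (deg) and rough distance (m)
    from an (x, y, w, h) bounding box such as a full-body tracker returns.
    All camera parameters here are assumed, not taken from the paper."""
    x, y, w, h = bbox
    cx = x + w / 2.0
    # Horizontal offset of the box centre, mapped linearly onto the
    # horizontal field of view to get a steering angle.
    angle_deg = (cx - frame_w / 2.0) / frame_w * hfov_deg
    # Pinhole model: derive the focal length in pixels from the vertical
    # field of view (assuming square pixels), then use the apparent box
    # height against an assumed real person height to estimate range.
    vfov_deg = hfov_deg * frame_h / frame_w
    f_px = (frame_h / 2.0) / math.tan(math.radians(vfov_deg) / 2.0)
    distance_m = person_height_m * f_px / h
    return angle_deg, distance_m

# A box centred horizontally yields a zero bearing angle.
angle, dist = bbox_to_control((280, 100, 80, 240))
```

In a full pipeline, the angle would feed the robot's angular velocity and the distance error its linear velocity, keeping the person centred at a set following distance.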

Original language: English
Title of host publication: 23rd International Conference on Control, Automation and Systems, ICCAS 2023
Publisher: IEEE Computer Society
Pages: 1721-1726
Number of pages: 6
ISBN (Electronic): 9788993215274
DOIs
State: Published - 2023
Externally published: Yes
Event: 23rd International Conference on Control, Automation and Systems, ICCAS 2023 - Yeosu, Korea, Republic of
Duration: 17 Oct 2023 to 20 Oct 2023

Publication series

Name: International Conference on Control, Automation and Systems
ISSN (Print): 1598-7833

Conference

Conference: 23rd International Conference on Control, Automation and Systems, ICCAS 2023
Country/Territory: Korea, Republic of
City: Yeosu
Period: 17/10/23 to 20/10/23

Keywords

  • deep learning
  • human tracking
  • object recognition
  • person-following robot
  • single object tracking
