TY - GEN
T1 - ROS2 Implementation of Object Detection and Distance Estimation using Camera and 2D LiDAR Fusion in Autonomous Vehicle
AU - Hwang, Gyu Hyeon
AU - Lee, Si Woo
AU - Jeon, Jae Wook
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - Perception performance is critical for enabling autonomous vehicles to drive safely and efficiently under all environmental conditions. This paper proposes a methodology for object detection and distance estimation within a ROS2 environment, using the fusion of 2D LiDAR and camera sensors. It highlights potential solutions for future autonomous driving with ROS2, emphasizing how sensor fusion can enhance the reliability and safety of autonomous vehicles. A camera, a LiDAR, and a vehicle were simulated in Gazebo, allowing sensor data to be fused to estimate distances and detect objects. The system consists of four integrated nodes. The first, a camera node, publishes image topics. The second, a LiDAR node, publishes the location and distance of obstacles. The third, a YOLOv8 node, subscribes to the image topics and publishes the bounding box, detection confidence, class name, and ID. Lastly, a projection node combines the data from the preceding nodes to detect objects and measure their distances. According to our experimental results, the average error in distance estimation is only 0.75%. We also measured the frequency of topic messages and found that messages for object detection and distance calculation are published 33 times per second on average on an NVIDIA Titan X. To apply this fusion method in practical applications, the model and procedures must be optimized for embedded boards.
AB - Perception performance is critical for enabling autonomous vehicles to drive safely and efficiently under all environmental conditions. This paper proposes a methodology for object detection and distance estimation within a ROS2 environment, using the fusion of 2D LiDAR and camera sensors. It highlights potential solutions for future autonomous driving with ROS2, emphasizing how sensor fusion can enhance the reliability and safety of autonomous vehicles. A camera, a LiDAR, and a vehicle were simulated in Gazebo, allowing sensor data to be fused to estimate distances and detect objects. The system consists of four integrated nodes. The first, a camera node, publishes image topics. The second, a LiDAR node, publishes the location and distance of obstacles. The third, a YOLOv8 node, subscribes to the image topics and publishes the bounding box, detection confidence, class name, and ID. Lastly, a projection node combines the data from the preceding nodes to detect objects and measure their distances. According to our experimental results, the average error in distance estimation is only 0.75%. We also measured the frequency of topic messages and found that messages for object detection and distance calculation are published 33 times per second on average on an NVIDIA Titan X. To apply this fusion method in practical applications, the model and procedures must be optimized for embedded boards.
KW - Calibration
KW - Distance Estimation
KW - Gazebo simulation
KW - Object Detection
KW - ROS2
UR - https://www.scopus.com/pages/publications/85199597478
U2 - 10.1109/ISIE54533.2024.10595707
DO - 10.1109/ISIE54533.2024.10595707
M3 - Conference contribution
AN - SCOPUS:85199597478
T3 - IEEE International Symposium on Industrial Electronics
BT - 2024 33rd International Symposium on Industrial Electronics, ISIE 2024 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 33rd International Symposium on Industrial Electronics, ISIE 2024
Y2 - 18 June 2024 through 21 June 2024
ER -