Abstract
Detecting and quantifying damage in steel structures is essential for assessing their integrity and safety. Traditional manual inspections are time-consuming, labor-intensive, and costly. While computer vision and deep learning offer potential, relying solely on RGB images limits the ability to accurately identify damage. This study proposes an innovative early fusion approach that integrates LiDAR and camera sensors for enhanced damage detection. LiDAR provides complementary intensity data, while high-resolution RGB images offer visual context; these datasets are fused using a point-to-pixel alignment. By combining the strengths of both sensors, we significantly improve damage assessment precision. Our method automatically measures section loss damage using fused point cloud and image data. Experimental results demonstrate a 15.1% improvement in the Intersection over Union (IoU) metric compared to RGB images alone. This highlights the method's effectiveness and its potential impact on structural maintenance practices, contributing to more reliable damage detection and quantification in steel structures.
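The abstract's point-to-pixel alignment can be illustrated with a minimal sketch: project each LiDAR point into the image plane using a pinhole camera model, then pair its intensity with the RGB pixel it lands on. This is a generic projection sketch, not the paper's implementation; the calibration parameters `K` (intrinsics), `R`, `t` (extrinsics) and the function name are assumptions.

```python
import numpy as np

def project_points_to_pixels(points_xyz, intensity, K, R, t, img_shape):
    """Project LiDAR points into the image plane (pinhole model) and
    pair each surviving point's intensity with its pixel coordinates.
    K, R, t are assumed calibration parameters, not from the paper."""
    # Transform points from the LiDAR frame into the camera frame
    cam = (R @ points_xyz.T + t.reshape(3, 1)).T
    # Keep only points in front of the camera (positive depth)
    front = cam[:, 2] > 0
    cam, inten = cam[front], intensity[front]
    # Perspective projection: homogeneous pixel coords, then divide by depth
    uv = (K @ cam.T).T
    px = np.round(uv[:, :2] / uv[:, 2:3]).astype(int)
    # Discard projections that fall outside the image bounds
    h, w = img_shape
    inside = (px[:, 0] >= 0) & (px[:, 0] < w) & (px[:, 1] >= 0) & (px[:, 1] < h)
    return px[inside], inten[inside]
```

Each returned (u, v) pixel can then be fused with the RGB value at that location, giving the network an extra intensity channel alongside color, which is the essence of the early-fusion idea described above.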
| Original language | English |
|---|---|
| Article number | 116914 |
| Journal | Measurement: Journal of the International Measurement Confederation |
| Volume | 249 |
| DOIs | |
| State | Published - 31 May 2025 |
Keywords
- Computer vision
- Data fusion
- Intensity
- LiDAR
- Section loss damage