TY - GEN
T1 - Uncertainty-Aware and Objectness-Guided Feature Quantization for Robust Object Detection
AU - Kong, Hyunmin
AU - Shin, Jitae
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
N2 - Object detection models face challenges in accurately extracting features due to complex backgrounds, small objects, and environmental noise. In particular, noise present in images increases pixel-level uncertainty, blurring the boundaries between objects and backgrounds and thereby corrupting object information. This increased uncertainty in specific regions reduces the reliability of feature representations, significantly degrading detection performance. To address these issues, this paper proposes a novel feature quantization framework that simultaneously utilizes uncertainty and objectness information. The proposed method independently estimates uncertainty maps and objectness maps at the pixel level to effectively identify uncertain regions, subsequently training a vector quantization codebook on features from high-objectness regions. During inference, regions with both high uncertainty and high objectness are treated as objects, and their features are replaced by the nearest code vector to accurately restore object information. Conversely, regions with high uncertainty but low objectness are considered background noise and are masked by setting their features to zero, thus minimizing the probability of errors. Experimental evaluations using a YOLO-based object detection model on the BDD100K and KITTI datasets demonstrate an improvement in mAP over the conventional YOLOv12 model.
AB - Object detection models face challenges in accurately extracting features due to complex backgrounds, small objects, and environmental noise. In particular, noise present in images increases pixel-level uncertainty, blurring the boundaries between objects and backgrounds and thereby corrupting object information. This increased uncertainty in specific regions reduces the reliability of feature representations, significantly degrading detection performance. To address these issues, this paper proposes a novel feature quantization framework that simultaneously utilizes uncertainty and objectness information. The proposed method independently estimates uncertainty maps and objectness maps at the pixel level to effectively identify uncertain regions, subsequently training a vector quantization codebook on features from high-objectness regions. During inference, regions with both high uncertainty and high objectness are treated as objects, and their features are replaced by the nearest code vector to accurately restore object information. Conversely, regions with high uncertainty but low objectness are considered background noise and are masked by setting their features to zero, thus minimizing the probability of errors. Experimental evaluations using a YOLO-based object detection model on the BDD100K and KITTI datasets demonstrate an improvement in mAP over the conventional YOLOv12 model.
KW - objectness
KW - quantization
KW - robust object detection
KW - uncertainty
UR - https://www.scopus.com/pages/publications/105016401961
U2 - 10.1109/ITC-CSCC66376.2025.11137655
DO - 10.1109/ITC-CSCC66376.2025.11137655
M3 - Conference contribution
AN - SCOPUS:105016401961
T3 - 2025 International Technical Conference on Circuits/Systems, Computers, and Communications, ITC-CSCC 2025
BT - 2025 International Technical Conference on Circuits/Systems, Computers, and Communications, ITC-CSCC 2025
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2025 International Technical Conference on Circuits/Systems, Computers, and Communications, ITC-CSCC 2025
Y2 - 7 July 2025 through 10 July 2025
ER -