Comprehensive Evaluation of Cloaking Backdoor Attacks on Object Detector in Real-World

Hua Ma, Alsharif Abuadbba, Yansong Gao, Hyoungshick Kim, Surya Nepal

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

The exploration of backdoor vulnerabilities in object detectors, particularly in real-world scenarios, remains limited. A significant challenge lies in the absence of a natural physical backdoor dataset, and constructing such a dataset is both time- and labor-intensive. In this work, we address this gap by creating a large-scale dataset comprising approximately 11,800 images/frames with annotations featuring natural objects (e.g., T-shirts and hats) as triggers that induce a cloaking adversarial effect in diverse real-world scenarios. This dataset is tailored for the study of physical backdoors in object detectors. Leveraging this dataset, we conduct a comprehensive evaluation of an insidious cloaking backdoor effect against object detectors, wherein the bounding box around a person vanishes when the individual is near a natural object (e.g., a commonly available T-shirt) in front of the detector. Our evaluations encompass three prevalent attack surfaces: data outsourcing, model outsourcing, and the use of pretrained models. The cloaking effect is successfully implanted in object detectors across all three attack surfaces. We extensively evaluate four popular object detection algorithms (anchor-based Yolo-V3, Yolo-V4, and Faster R-CNN, and anchor-free CenterNet) using 19 videos (totaling approximately 11,800 frames) recorded in real-world scenarios. Our results demonstrate that the backdoor attack is remarkably robust against various factors, including movement, distance, angle, non-rigid deformation, and lighting. In the data and model outsourcing scenarios, the attack success rate (ASR) reaches or approaches 100% in most videos, while the clean data accuracy of the backdoored model remains indistinguishable from that of a clean model, making the backdoor behavior impossible to detect through a validation set. Notably, two-stage object detectors (e.g., Faster R-CNN) are more resistant to backdoor attacks under pure data poisoning (i.e., in data outsourcing) than one-stage detectors (e.g., the Yolo series). However, this challenge is surmountable when the attacker controls the training process (particularly in model outsourcing), even with the same small poisoning-rate budget as in data outsourcing. In the transfer learning attack scenario, assessed on CenterNet, the average ASR remains high at 78%. A detailed 5-minute video illustrating our attack is available at https://youtu.be/Q3HOF4OobbY.
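The abstract describes the data-outsourcing variant concretely enough to sketch: person annotations are stripped from a small budget of trigger-bearing frames, and ASR is the fraction of trigger frames in which the person goes undetected. Below is a minimal, hypothetical Python sketch of that pipeline; the COCO-style annotation layout, the `poison_annotations` and `attack_success_rate` names, the 5% default poisoning rate, and the 0.5 confidence threshold are illustrative assumptions, not the paper's released code.

```python
from copy import deepcopy

# COCO-style category id for 'person'; adjust to the dataset's label map.
PERSON_CATEGORY_ID = 1

def poison_annotations(coco, trigger_image_ids, poison_rate=0.05):
    """Return a poisoned copy of a COCO-style annotation dict.

    For a budget-limited subset of frames that contain the trigger object
    (e.g., the chosen T-shirt), every 'person' annotation is deleted, so a
    detector trained on the data learns to suppress person boxes whenever
    the trigger is visible.
    """
    poisoned = deepcopy(coco)
    budget = int(poison_rate * len(coco["images"]))
    victims = set(sorted(trigger_image_ids)[:budget])
    poisoned["annotations"] = [
        ann for ann in coco["annotations"]
        if not (ann["image_id"] in victims
                and ann["category_id"] == PERSON_CATEGORY_ID)
    ]
    return poisoned

def attack_success_rate(predictions, trigger_frames, conf_thresh=0.5):
    """ASR: fraction of trigger-bearing frames in which the detector
    emits no confident 'person' box, i.e., the person is cloaked."""
    cloaked = sum(
        1 for frame_id in trigger_frames
        if not any(det["category_id"] == PERSON_CATEGORY_ID
                   and det["score"] >= conf_thresh
                   for det in predictions.get(frame_id, []))
    )
    return cloaked / max(len(trigger_frames), 1)
```

Under these assumptions, a data-outsourcing attacker only touches the labels, which is consistent with the abstract's observation that clean-data accuracy stays indistinguishable from a clean model; a model-outsourcing attacker could additionally shape the training loop itself.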

Original language: English
Title of host publication: ACM ASIA CCS 2025 - Proceedings of the 20th ACM ASIA Conference on Computer and Communications Security
Publisher: Association for Computing Machinery
Pages: 605-620
Number of pages: 16
ISBN (Electronic): 9798400714108
State: Published - 24 Aug 2025
Event: 20th ACM ASIA Conference on Computer and Communications Security, ASIA CCS 2025 - Hanoi, Viet Nam
Duration: 25 Aug 2025 → 29 Aug 2025

Publication series

Name: Proceedings of the ACM Conference on Computer and Communications Security
ISSN (Print): 1543-7221

Conference

Conference: 20th ACM ASIA Conference on Computer and Communications Security, ASIA CCS 2025
Country/Territory: Viet Nam
City: Hanoi
Period: 25/08/25 → 29/08/25

Keywords

  • Cloaking Backdoor
  • Natural Trigger
  • Object Detector
