Cascade adversarial machine learning regularized with a unified embedding

Abstract
Injecting adversarial examples during training, known as adversarial training, can improve robustness against one-step attacks, but not against unknown iterative attacks. To address this challenge, we first show that iteratively generated adversarial images transfer easily between networks trained with the same strategy. Inspired by this observation, we propose cascade adversarial training, which transfers the knowledge of the end results of adversarial training. We train a network from scratch by injecting iteratively generated adversarial images crafted from already defended networks, in addition to one-step adversarial images from the network being trained. We also propose utilizing the embedding space for both classification and low-level (pixel-level) similarity learning, so that the network learns to ignore unknown pixel-level perturbations. During training, we inject adversarial images without replacing their corresponding clean images and penalize the distance between the two embeddings (clean and adversarial). Experimental results show that cascade adversarial training together with our proposed low-level similarity learning efficiently enhances robustness against iterative attacks, but at the expense of decreased robustness against one-step attacks. We show that combining these two techniques can also improve robustness under the worst-case black-box attack scenario.
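The core regularizer described above — training on clean/adversarial image pairs while penalizing the distance between their embeddings — can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: the linear model, the FGSM step, the L2 embedding distance, and the weight `lam` are all illustrative assumptions.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(probs, labels):
    # Mean negative log-likelihood of the true labels.
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

class TinyNet:
    """Toy linear 'network': embedding = x @ W, logits = embedding @ V.
    Stands in for a real CNN purely to make the loss computable here."""
    def __init__(self, d_in, d_emb, n_cls, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=0.1, size=(d_in, d_emb))
        self.V = rng.normal(scale=0.1, size=(d_emb, n_cls))

    def embed(self, x):
        return x @ self.W

    def logits(self, x):
        return self.embed(x) @ self.V

def fgsm(net, x, y, eps):
    """One-step attack: x_adv = x + eps * sign(d loss / d x).
    The input gradient is analytic for this linear toy model."""
    p = softmax(net.logits(x))
    onehot = np.eye(p.shape[1])[y]
    grad_x = ((p - onehot) / len(y)) @ net.V.T @ net.W.T
    return x + eps * np.sign(grad_x)

def pair_loss(net, x, y, eps=0.1, lam=0.5):
    """Classification loss on both the clean and the adversarial image
    (the clean image is kept, not replaced), plus an L2 penalty tying
    the two embeddings together -- the unified-embedding regularizer."""
    x_adv = fgsm(net, x, y, eps)
    ce_clean = cross_entropy(softmax(net.logits(x)), y)
    ce_adv = cross_entropy(softmax(net.logits(x_adv)), y)
    emb_dist = np.mean(np.sum((net.embed(x) - net.embed(x_adv)) ** 2, axis=1))
    return ce_clean + ce_adv + lam * emb_dist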
| Original language | English |
|---|---|
| State | Published - 2018 |
| Externally published | Yes |
| Event | 6th International Conference on Learning Representations, ICLR 2018, Vancouver, Canada, 30 Apr 2018 → 3 May 2018 |
Conference
| Conference | 6th International Conference on Learning Representations, ICLR 2018 |
|---|---|
| Country/Territory | Canada |
| City | Vancouver |
| Period | 30/04/18 → 3/05/18 |