TY - GEN
T1 - The Impact of Model Variations on the Robustness of Deep Learning Models in Adversarial Settings
AU - Juraev, Firuz
AU - Abuhamad, Mohammed
AU - Woo, Simon S.
AU - Thiruvathukal, George K.
AU - Abuhmed, Tamer
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - Rapid advancements in deep learning are accelerating its adoption across a wide variety of applications, including safety-critical applications such as self-driving vehicles, drones, robots, and surveillance systems. These advancements include variations of sophisticated techniques that improve model performance. However, such models are not immune to adversarial manipulations, which can cause a system to misbehave while remaining unnoticed by experts. The frequency of modifications to existing deep learning models necessitates thorough analysis of their impact on model robustness. In this work, we present an experimental evaluation of the effects of model modifications on deep learning model robustness using adversarial attacks. Our methodology examines the robustness of model variations against a range of adversarial attacks. Through these experiments, we aim to shed light on the critical issue of maintaining the reliability and safety of deep learning models in safety- and security-critical applications. Our results indicate a pressing need for in-depth assessment of the effects of model changes on model robustness.
AB - Rapid advancements in deep learning are accelerating its adoption across a wide variety of applications, including safety-critical applications such as self-driving vehicles, drones, robots, and surveillance systems. These advancements include variations of sophisticated techniques that improve model performance. However, such models are not immune to adversarial manipulations, which can cause a system to misbehave while remaining unnoticed by experts. The frequency of modifications to existing deep learning models necessitates thorough analysis of their impact on model robustness. In this work, we present an experimental evaluation of the effects of model modifications on deep learning model robustness using adversarial attacks. Our methodology examines the robustness of model variations against a range of adversarial attacks. Through these experiments, we aim to shed light on the critical issue of maintaining the reliability and safety of deep learning models in safety- and security-critical applications. Our results indicate a pressing need for in-depth assessment of the effects of model changes on model robustness.
KW - Adversarial Attacks
KW - Computer Vision
KW - Deep Learning
KW - Defenses
KW - Model robustness
UR - https://www.scopus.com/pages/publications/85203599128
U2 - 10.1109/SVCC61185.2024.10637362
DO - 10.1109/SVCC61185.2024.10637362
M3 - Conference contribution
AN - SCOPUS:85203599128
T3 - 2024 Silicon Valley Cybersecurity Conference, SVCC 2024
BT - 2024 Silicon Valley Cybersecurity Conference, SVCC 2024
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2024 Silicon Valley Cybersecurity Conference, SVCC 2024
Y2 - 17 June 2024 through 19 June 2024
ER -