TY - GEN
T1 - DAT
T2 - 2023 International Conference on Simulation of Semiconductor Processes and Devices, SISPAD 2023
AU - Park, Chanwoo
AU - Jeon, Jongwook
AU - Cho, Hyunbo
N1 - Publisher Copyright:
© 2023 The Japan Society of Applied Physics.
PY - 2023
Y1 - 2023
N2 - The increasing interest in artificial intelligence (AI) and the limitations of general-purpose graphics processing units (GPUs) have prompted the exploration of neuromorphic devices, such as resistive random-access memory (ReRAM), for AI computation. However, ReRAM devices exhibit various sources of variability that impact their performance and reliability. In this paper, we propose Device-Aware Training (DAT), a robust training method that accounts for device-specific noise and provides resilience against the inherent variability of ReRAM devices. To address the significant computational cost of noise-robust training, DAT employs sharpness-aware minimization and a low-rank approximation of the device-specific noise covariance matrix. This leads to efficient computation and reduced training time while maintaining versatility across various model architectures and tasks. We evaluate our method on the CIFAR-10 and CIFAR-100 datasets, achieving a 38.2% increase in test accuracy in the presence of analog noise and 5.9x faster training compared to using a full-rank covariance matrix. From a loss landscape perspective, we provide insights into addressing noise-induced challenges in the weight space. DAT contributes to the development of reliable and high-performing neuromorphic AI systems based on ReRAM technology.
KW - Neuromorphic AI systems
KW - Resistive random-access memory (ReRAM)
KW - Robust training
UR - https://www.scopus.com/pages/publications/85179128478
U2 - 10.23919/SISPAD57422.2023.10319518
DO - 10.23919/SISPAD57422.2023.10319518
M3 - Conference contribution
AN - SCOPUS:85179128478
T3 - International Conference on Simulation of Semiconductor Processes and Devices, SISPAD
SP - 289
EP - 292
BT - 2023 International Conference on Simulation of Semiconductor Processes and Devices, SISPAD 2023
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 27 September 2023 through 29 September 2023
ER -