SAR: Sharpness-Aware minimization for enhancing DNNs’ Robustness against bit-flip errors
Changbao Zhou, Jiawei Du, Ming Yan, Hengshan Yue, Xiaohui Wei, Joey Tianyi Zhou
Journal of Systems Architecture, Volume 156, Article 103284. Published 2 October 2024. DOI: 10.1016/j.sysarc.2024.103284
As Deep Neural Networks (DNNs) are increasingly deployed in safety-critical scenarios, there is a growing need to address bit-flip errors occurring in hardware, such as memory. These errors can lead to changes in DNN weights, potentially degrading the performance of deployed models and causing catastrophic consequences. Existing methods improve DNNs’ fault tolerance or robustness by modifying network size, structure, or inference and training processes. Unfortunately, these methods often enhance robustness at the expense of clean accuracy and introduce additional overhead during inference. To address these issues, we propose Sharpness-Aware Minimization for enhancing DNNs’ Robustness against bit-flip errors (SAR), which aims to leverage the intrinsic robustness of DNNs. We begin with a comprehensive investigation of DNNs under bit-flip errors, yielding insightful observations regarding the intensity and occurrence of such errors. Based on these insights, we identify that Sharpness-Aware Minimization (SAM) has the potential to enhance DNN robustness. We further analyze this potential through the relationship between SAM formulation and our observations, building a robustness-enhancing framework based on SAM. Experimental validation across various models and datasets demonstrates that SAR can effectively improve DNN robustness against bit-flip errors without sacrificing clean accuracy or introducing additional inference costs, making it a “double-win” method compared to existing approaches.
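To make the two ingredients of the abstract concrete, below is a minimal sketch, not the authors' SAR implementation: it shows (1) a standard two-pass Sharpness-Aware Minimization (SAM) training step in the style of Foret et al., and (2) a simple fault injector that flips a single bit of a float32 weight, the kind of memory error the paper targets. Function names such as `sam_step` and `flip_random_bit`, and the perturbation radius `rho`, are illustrative assumptions.

```python
import random
import struct

import torch


def sam_step(model, loss_fn, x, y, optimizer, rho=0.05):
    """One SAM update (sketch): ascend to the locally 'sharpest' weights,
    take the gradient there, then apply the step from the original weights."""
    # First forward/backward pass: gradient at the current weights.
    loss = loss_fn(model(x), y)
    loss.backward()

    grads = [p.grad for p in model.parameters() if p.grad is not None]
    grad_norm = torch.norm(torch.stack([g.norm(p=2) for g in grads]), p=2)

    # Perturb each parameter along its gradient direction (the ascent step).
    perturbations = []
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                perturbations.append(None)
                continue
            e = rho * p.grad / (grad_norm + 1e-12)
            p.add_(e)
            perturbations.append(e)
    model.zero_grad()

    # Second forward/backward pass: gradient at the perturbed weights.
    loss_fn(model(x), y).backward()

    # Restore the original weights, then step with the sharpness-aware gradient.
    with torch.no_grad():
        for p, e in zip(model.parameters(), perturbations):
            if e is not None:
                p.sub_(e)
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()


def flip_random_bit(weight):
    """Flip one random bit of one random float32 weight in place,
    emulating a single memory bit-flip fault."""
    with torch.no_grad():
        flat = weight.view(-1)
        idx = random.randrange(flat.numel())
        bits = struct.unpack("<I", struct.pack("<f", flat[idx].item()))[0]
        bits ^= 1 << random.randrange(32)
        flat[idx] = struct.unpack("<f", struct.pack("<I", bits))[0]
```

A typical robustness probe under these assumptions would train a model with and without `sam_step`, then repeatedly apply `flip_random_bit` to copies of the weights and compare the resulting accuracy drop; the paper's evaluation protocol and its specific adaptation of SAM to bit-flip statistics are described in the article itself.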
About the journal:
The Journal of Systems Architecture: Embedded Software Design (JSA) is a journal covering all design and architectural aspects related to embedded systems and software. It ranges from the microarchitecture level via the system software level up to the application-specific architecture level. Aspects such as real-time systems, operating systems, FPGA programming, programming languages, communications (limited to analysis and the software stack), mobile systems, parallel and distributed architectures as well as additional subjects in the computer and system architecture area will fall within the scope of this journal. Technology will not be a main focus, but its use and relevance to particular designs will be. Case studies are welcome but must contribute more than just a design for a particular piece of software.
Design automation of such systems, including methodologies, techniques, and tools for their design, as well as novel designs of software components, falls within the scope of this journal. Novel applications that use embedded systems are also central to this journal. While hardware is not a part of this journal, hardware/software co-design methods that consider the interplay between software and hardware components, with an emphasis on software, are also relevant here.