{"title":"Techniques for Evaluating the Robustness of Deep Learning Systems: A Preliminary Review","authors":"Horacio L. França, César Teixeira, N. Laranjeiro","doi":"10.1109/ladc53747.2021.9672592","DOIUrl":null,"url":null,"abstract":"Machine Learning algorithms are currently being applied to a huge diversity of systems in various domains, including control systems in the industry, medical instruments, and autonomous vehicles, just to name a few. Systems based on deep learning models have become extremely popular in this context, and, like regular machine learning algorithms, are susceptible to errors caused by noisy data, outliers, or adversarial attacks. An error of a deep learning model in a safety-critical context can lead to a system failure, which can have disastrous consequences, including safety violations. In this paper we review the state of the art in techniques for evaluating the reliability (in lato sensu) of deep learning models, identify the main characteristics of the methods used and discuss research trends and open challenges.","PeriodicalId":376642,"journal":{"name":"2021 10th Latin-American Symposium on Dependable Computing (LADC)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 10th Latin-American Symposium on Dependable Computing (LADC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ladc53747.2021.9672592","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
Machine Learning algorithms are currently being applied to a wide variety of systems across many domains, including industrial control systems, medical instruments, and autonomous vehicles, to name a few. Systems based on deep learning models have become extremely popular in this context and, like other machine learning algorithms, are susceptible to errors caused by noisy data, outliers, or adversarial attacks. An error in a deep learning model deployed in a safety-critical context can lead to a system failure, which can have disastrous consequences, including safety violations. In this paper we review the state of the art in techniques for evaluating the reliability (lato sensu) of deep learning models, identify the main characteristics of the methods used, and discuss research trends and open challenges.
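To make the kind of evaluation discussed in the abstract concrete, below is a minimal sketch of one widely used robustness-evaluation technique from this literature: measuring the accuracy drop under Fast Gradient Sign Method (FGSM) adversarial perturbations. This is an illustrative example, not the paper's own method; the PyTorch model, data, and epsilon value are placeholders chosen for demonstration.

```python
# Minimal FGSM robustness check: compare a classifier's accuracy on clean
# inputs against its accuracy on adversarially perturbed inputs.
# Model, data, and epsilon are illustrative placeholders.
import torch
import torch.nn as nn

def fgsm_accuracy(model, x, y, epsilon):
    """Return (clean accuracy, accuracy under FGSM perturbation)."""
    model.eval()
    x = x.clone().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # FGSM: step each input in the direction that increases the loss.
    x_adv = (x + epsilon * x.grad.sign()).detach()
    with torch.no_grad():
        clean_acc = (model(x).argmax(dim=1) == y).float().mean().item()
        adv_acc = (model(x_adv).argmax(dim=1) == y).float().mean().item()
    return clean_acc, adv_acc

# Toy usage with a randomly initialized classifier and random data.
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 3))
x, y = torch.randn(128, 20), torch.randint(0, 3, (128,))
clean_acc, adv_acc = fgsm_accuracy(model, x, y, epsilon=0.1)
print(f"clean accuracy: {clean_acc:.2f}, adversarial accuracy: {adv_acc:.2f}")
```

The gap between the two accuracies is one simple robustness indicator; surveys in this area compare many such metrics, along with noise- and outlier-based perturbation strategies.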