Mitigating Risk in Neural Network Classifiers
Misael Alpizar Santana, R. Calinescu, Colin Paterson
2022 48th Euromicro Conference on Software Engineering and Advanced Applications (SEAA), August 2022
DOI: 10.1109/SEAA56994.2022.00065
Deep Neural Network (DNN) classifiers perform remarkably well on many problems that require skills which are natural and intuitive to humans. These classifiers have been used in safety-critical systems including autonomous vehicles. For such systems to be trusted it is necessary to demonstrate that the risk factors associated with neural network classification have been appropriately considered and sufficient risk mitigation has been employed. Traditional DNNs fail to explicitly consider risk during their training and verification stages, meaning that unsafe failure modes are permitted and under-reported. To address this limitation, our short paper introduces a work-in-progress approach that (i) allows the risk of misclassification between classes to be quantified, (ii) guides the training of DNN classifiers towards mitigating the risks that require treatment, and (iii) synthesises risk-aware ensembles with the aid of multi-objective genetic algorithms that seek to optimise DNN performance metrics while also mitigating risks. We show the effectiveness of our approach by using it to synthesise risk-aware neural network ensembles for the CIFAR-10 dataset.
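Step (i) of the approach, quantifying the risk of misclassification between classes, can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a domain-assigned cost matrix `R`, where `R[i, j]` is the cost of predicting class `j` when the true class is `i`, and computes the expected per-sample risk of a classifier from its confusion matrix.

```python
import numpy as np

def misclassification_risk(conf_matrix: np.ndarray, risk_matrix: np.ndarray) -> float:
    """Expected per-sample risk: confusion counts weighted by class-pair costs.

    conf_matrix[i, j] counts samples of true class i predicted as class j;
    risk_matrix[i, j] is the (assumed, domain-assigned) cost of that mistake.
    """
    total = conf_matrix.sum()
    return float((conf_matrix * risk_matrix).sum() / total)

# Toy 3-class example: confusing true class 0 with class 2 (e.g. mistaking a
# pedestrian for empty road in a driving context) is far costlier than other
# errors, so the same accuracy can carry very different risk.
R = np.array([[0.0, 1.0, 10.0],
              [1.0, 0.0, 1.0],
              [2.0, 1.0, 0.0]])
C = np.array([[90, 5, 5],   # rows: true class, columns: predicted class
              [4, 92, 4],
              [3, 2, 95]])

print(misclassification_risk(C, R))  # 71/300 ≈ 0.2367
```

A risk measure of this form can then serve both as a training signal (weighting the loss on high-risk class pairs) and as one of the objectives a multi-objective genetic algorithm trades off against accuracy when synthesising ensembles, as in steps (ii) and (iii).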