An Integrated Process for Verifying Deep Learning Classifiers Using Dataset Dissimilarity Measures

Darryl Hond, H. Asgari, Daniel Jeffery, Mike Newman

Int. J. Artif. Intell. Mach. Learn., July 2021. DOI: 10.4018/ijaiml.289536
Abstract
The specification and verification of algorithms is vital for safety-critical autonomous systems that incorporate deep learning elements. We propose an integrated process for verifying artificial neural network (ANN) classifiers. This process consists of an off-line verification phase and an on-line performance prediction phase. The process is intended to verify ANN classifier generalisation performance, and to this end it makes use of dataset dissimilarity measures. We introduce a novel measure for quantifying the dissimilarity between the dataset used to train a classification algorithm and the test dataset used to evaluate and verify classifier performance. A system-level requirement could specify the permitted form of the functional relationship between classifier performance and a dissimilarity measure; such a requirement could be verified by dynamic testing. Experimental results, obtained using publicly available datasets, suggest that the measures are relevant to real-world practice, both for quantifying dataset dissimilarity and for specifying and verifying classifier performance.
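The abstract does not define the measure itself. As a minimal illustrative sketch only (the nearest-neighbour formulation, the Euclidean metric, and the function name below are assumptions, not the authors' measure), a train/test dataset dissimilarity can be computed from distances in a feature space:

```python
import numpy as np
from scipy.spatial.distance import cdist

def nn_dissimilarity(train_features: np.ndarray, test_features: np.ndarray) -> float:
    """Mean distance from each test sample to its nearest training sample.

    Larger values indicate a test set lying further from the training
    distribution, where generalisation performance would be expected to degrade.
    NOTE: a hypothetical stand-in, not the measure proposed in the paper.
    """
    # Pairwise Euclidean distances: rows = test samples, columns = train samples.
    d = cdist(test_features, train_features)
    # For each test sample, keep only the distance to its closest training sample.
    return float(d.min(axis=1).mean())

# Example: a test set drawn from a shifted distribution scores as more
# dissimilar than one drawn from the training distribution itself.
rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=(500, 16))
in_dist = rng.normal(0.0, 1.0, size=(100, 16))
shifted = rng.normal(2.0, 1.0, size=(100, 16))
print(nn_dissimilarity(train, in_dist))   # smaller: test set matches training data
print(nn_dissimilarity(train, shifted))   # larger: test set is out-of-distribution
```

In the process the abstract describes, one could then plot classifier performance against such a measure across multiple test splits and check, by dynamic testing, that the fitted relationship stays within the form permitted by a system-level requirement.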