Authors: M. Kassab, J. Defranco, P. Laplante
DOI: 10.1109/ISSREW55968.2022.00094
Venue: 2022 IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW)
Publication date: October 2022
Investigating Bugs in AI-Infused Systems: Analysis and Proposed Taxonomy
Testing critical AI systems is non-trivial, as these systems are prone to a new breed of sophisticated software defects. The admissibility of these systems, and their broader social acceptance, is tightly coupled with assurance that the potential hazards to humans, animals, and property posed by prospective defects can be limited to an acceptable level. In this work, we address the problem of assurance for critical AI systems by, first, analyzing the nature of defects that occur in AI-infused systems in general and how to combat them within a testing strategy, and second, developing a focused taxonomy of prospective defects in critical AI systems. This taxonomy enables the development of a non-critical proxy (i.e., stand-in) equivalent by reproducing defects with similar characteristics.