{"title":"Explainable Artificial Intelligence based Classification of Automotive Radar Targets","authors":"Neeraj Pandey, S. S. Ram","doi":"10.1109/RadarConf2351548.2023.10149788","DOIUrl":null,"url":null,"abstract":"Explainable decision-making is a key component for compliance with regulatory frameworks and winning trust among end users. In this work, we propose to understand the mis-classification of automotive radar images through counterfactual explanations obtained from generative adversarial networks. The proposed method enables perturbations of original radar images belonging to a query class to result in counterfactual images that are classified as the distractor class. The key requirement is that the perturbations must result in realistic images that belong to the original distribution of the query class and also provide physics-based insights into the causes of the misclassification. We test the methods on simulated automotive inverse synthetic aperture radar data images for a query class of a four-wheel mid-size car and a distractor class of a three-wheel auto-rickshaw. Our results show that the shadowing of one or more wheels of the query class is most likely to result in misclassification.","PeriodicalId":168311,"journal":{"name":"2023 IEEE Radar Conference (RadarConf23)","volume":"225 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE Radar Conference (RadarConf23)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/RadarConf2351548.2023.10149788","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Explainable decision-making is a key requirement for compliance with regulatory frameworks and for winning the trust of end users. In this work, we propose to understand the misclassification of automotive radar images through counterfactual explanations obtained from generative adversarial networks. The proposed method perturbs original radar images belonging to a query class so that the resulting counterfactual images are classified as the distractor class. The key requirement is that the perturbations produce realistic images that belong to the original distribution of the query class while providing physics-based insights into the causes of the misclassification. We test the method on simulated automotive inverse synthetic aperture radar (ISAR) images, with a four-wheel mid-size car as the query class and a three-wheel auto-rickshaw as the distractor class. Our results show that shadowing of one or more wheels of the query class is the most likely cause of misclassification.
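The abstract describes a counterfactual-explanation pipeline in which a query-class radar image is perturbed, via a GAN, until a classifier assigns it to the distractor class while the image stays close to the query-class distribution. The sketch below illustrates the general idea as a latent-space search with a pretrained generator; the generator `G`, classifier `f`, image shape, loss weights, and optimizer settings are all illustrative assumptions and not the authors' actual architecture or training procedure.

```python
# Minimal sketch: counterfactual search in a GAN's latent space.
# Assumes a pretrained generator (latent vector -> radar image) and a
# pretrained classifier (radar image -> class logits); both are stand-ins here.
import torch
import torch.nn.functional as F


def find_counterfactual(generator, classifier, x_query, distractor_idx,
                        latent_dim=128, steps=500, lr=0.05, lam=1.0):
    """Search the latent space for an image that (a) stays close to the query
    image and (b) is classified as the distractor class."""
    z = torch.randn(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    target = torch.tensor([distractor_idx])
    for _ in range(steps):
        opt.zero_grad()
        x_cf = generator(z)                                    # candidate counterfactual image
        cls_loss = F.cross_entropy(classifier(x_cf), target)   # push prediction toward distractor class
        prox_loss = F.l1_loss(x_cf, x_query)                   # keep the perturbation small / realistic
        loss = cls_loss + lam * prox_loss
        loss.backward()
        opt.step()
    return generator(z).detach()


if __name__ == "__main__":
    # Toy stand-ins with hypothetical shapes: a generator mapping a 128-d latent
    # vector to a 1x64x64 "radar image" and a two-class classifier.
    G = torch.nn.Sequential(torch.nn.Linear(128, 64 * 64), torch.nn.Tanh(),
                            torch.nn.Unflatten(1, (1, 64, 64)))
    f = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(64 * 64, 2))
    x_query = torch.randn(1, 1, 64, 64)   # placeholder ISAR image of the query class
    x_cf = find_counterfactual(G, f, x_query, distractor_idx=1)
    print(x_cf.shape)
```

In this sketch the proximity term plays the role of the realism constraint mentioned in the abstract: it discourages counterfactuals that drift far from the query image, so the changes that flip the classifier's decision (e.g., suppressed returns around a wheel) remain interpretable.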