XAI-AV: Explainable Artificial Intelligence for Trust Management in Autonomous Vehicles
Harsh Mankodiya, M. Obaidat, Rajesh Gupta, S. Tanwar
2021 International Conference on Communications, Computing, Cybersecurity, and Informatics (CCCI), 15 October 2021. DOI: 10.1109/CCCI52664.2021.9583190
Artificial intelligence (AI) is among the most sought-after technologies, with a diverse range of applications across fields such as intelligent transportation systems (ITS), medicine, healthcare, and military operations. One such application is autonomous vehicles (AVs), which fall under AI in ITS. Vehicular ad hoc networks (VANETs) make communication possible between the AVs in the system, and the performance of each vehicle depends on the information exchanged between AVs. False or malicious information can perturb the whole system, leading to severe consequences; hence, detecting malicious vehicles is of utmost importance. We use machine learning (ML) algorithms to detect flaws in the transmitted data. Recent work using a stacking ML approach reported an accuracy of 98.44%. In this paper, a decision-tree-based random forest is used to solve the problem, achieving an accuracy of 98.43% and an F1 score of 98.5% on the VeRiMi dataset. Explainable AI (XAI) refers to the methods and techniques that make complex black-box ML and deep learning (DL) models more interpretable and understandable. We use a particular model interface together with the evaluation metrics to explain and measure the model's performance. Applying XAI to these complex AI models can ensure a cautious use of AI for AVs.
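As a rough illustration of the pipeline the abstract describes (not the authors' implementation), the minimal sketch below trains a scikit-learn random forest on a VeRiMi-style feature table, reports accuracy and F1, and prints impurity-based feature importances as one simple model-level explanation. The file name `verimi_features.csv`, the `malicious` label column, and the hyperparameters are assumptions for illustration only.

```python
# Minimal sketch (not the authors' code): random forest for detecting
# malicious messages in a VeRiMi-style feature table, evaluated with
# accuracy and F1, then explained via impurity-based feature importances.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score

# Hypothetical layout: one row per exchanged message, with a binary
# "malicious" column marking whether the sender misbehaved.
df = pd.read_csv("verimi_features.csv")      # assumed file name
X = df.drop(columns=["malicious"])           # assumed label column
y = df["malicious"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

pred = model.predict(X_test)
print("accuracy:", accuracy_score(y_test, pred))
print("F1 score:", f1_score(y_test, pred))

# Coarse explanation: which message attributes drive the
# malicious/benign decision, ranked by importance.
for name, imp in sorted(
    zip(X.columns, model.feature_importances_), key=lambda t: -t[1]
):
    print(f"{name}: {imp:.3f}")
```

The abstract does not specify which XAI interface the authors used; feature importances stand in here as a generic stand-in for whatever explanation method accompanies the model in the paper.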