{"title":"ReMAV: Reward Modeling of Autonomous Vehicles for Finding Likely Failure Events","authors":"Aizaz Sharif;Dusica Marijan","doi":"10.1109/OJITS.2024.3479098","DOIUrl":null,"url":null,"abstract":"Autonomous vehicles are advanced driving systems that revolutionize transportation, but their vulnerability to adversarial attacks poses significant safety risks. Consider a scenario in which a slight perturbation in sensor data causes an autonomous vehicle to fail unexpectedly, potentially leading to accidents. Current testing methods often rely on computationally expensive active learning techniques to identify such vulnerabilities. Rather than actively training complex adversaries by interacting with the environment, there is a need to first intelligently find and reduce the search space to only those states where autonomous vehicles are found to be less confident. In this paper, we propose a black-box testing framework ReMAV that uses offline trajectories first to efficiently identify weaknesses of autonomous vehicles without the need for active interaction. To this end, we introduce a three-step methodology which i) uses offline state action pairs of any autonomous vehicle under test, ii) builds an abstract behavior representation using our designed reward modeling technique to analyze states with uncertain driving decisions, and iii) uses a disturbance model for minimal perturbation attacks where the driving decisions are less confident. Our reward modeling creates a behavior representation that highlights regions of likely uncertain autonomous vehicle behavior, even when performance seems adequate. This enables efficient testing without computationally expensive active adversarial learning. We evaluated ReMAV in a high-fidelity urban driving simulator across various single- and multi-agent scenarios. The results show substantial increases in failure events compared to the standard behavior of autonomous vehicles: 35% in vehicle collisions, 23% in road object collisions, 48% in pedestrian collisions, and 50% in off-road steering events. ReMAV outperforms two baselines and previous testing frameworks in effectiveness, efficiency, and speed of identifying failures. This demonstrates ReMAV’s capability to efficiently expose autonomous vehicle weaknesses using simple perturbation models.","PeriodicalId":100631,"journal":{"name":"IEEE Open Journal of Intelligent Transportation Systems","volume":"5 ","pages":"669-691"},"PeriodicalIF":4.6000,"publicationDate":"2024-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10714436","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Open Journal of Intelligent Transportation Systems","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10714436/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Abstract
Autonomous vehicles are advanced driving systems that revolutionize transportation, but their vulnerability to adversarial attacks poses significant safety risks. Consider a scenario in which a slight perturbation in sensor data causes an autonomous vehicle to fail unexpectedly, potentially leading to an accident. Current testing methods often rely on computationally expensive active learning techniques to identify such vulnerabilities. Rather than actively training complex adversaries through interaction with the environment, there is a need to first intelligently reduce the search space to only those states in which autonomous vehicles are less confident. In this paper, we propose ReMAV, a black-box testing framework that first uses offline trajectories to efficiently identify weaknesses of autonomous vehicles without requiring active interaction. To this end, we introduce a three-step methodology that i) uses offline state-action pairs of any autonomous vehicle under test, ii) builds an abstract behavior representation using our reward modeling technique to analyze states with uncertain driving decisions, and iii) applies a disturbance model that performs minimal perturbation attacks in states where driving decisions are less confident. Our reward modeling creates a behavior representation that highlights regions of likely uncertain autonomous vehicle behavior, even when overall performance appears adequate. This enables efficient testing without computationally expensive active adversarial learning. We evaluated ReMAV in a high-fidelity urban driving simulator across various single- and multi-agent scenarios. The results show substantial increases in failure events compared to the standard behavior of autonomous vehicles: 35% in vehicle collisions, 23% in road-object collisions, 48% in pedestrian collisions, and 50% in off-road steering events. ReMAV outperforms two baselines and previous testing frameworks in effectiveness, efficiency, and speed of identifying failures, demonstrating its capability to efficiently expose autonomous vehicle weaknesses using simple perturbation models.
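To make the three-step workflow described above concrete, the sketch below illustrates one way such a pipeline could look in code: score offline state-action pairs with a surrogate reward, select the low-confidence states, and perturb only those. Everything here (the synthetic data, the surrogate reward, the percentile threshold, and the epsilon budget) is an illustrative assumption and does not reflect the paper's actual ReMAV implementation.

```python
# Minimal sketch of the three-step idea from the abstract.
# All names, thresholds, and the reward definition are illustrative
# assumptions, not the authors' actual ReMAV implementation.
import numpy as np

rng = np.random.default_rng(0)

# Step (i): offline state-action pairs from a previously logged driving run.
# Here: 1000 steps, 4-dim state (e.g., sensor features), 2-dim action
# (e.g., steering, throttle) -- purely synthetic placeholders.
states = rng.normal(size=(1000, 4))
actions = rng.normal(size=(1000, 2))

# Step (ii): reward modeling -- assign each state a scalar score that
# abstracts how "confident" the recorded driving behavior looks there.
# As a stand-in, score a state by how erratic its action is relative to
# the average action magnitude (higher deviation -> lower reward).
action_norm = np.linalg.norm(actions, axis=1)
reward = -np.abs(action_norm - action_norm.mean())

# States whose reward falls below a percentile threshold are treated as
# "uncertain" and form the reduced search space for testing.
threshold = np.percentile(reward, 10)   # assumed 10th-percentile cutoff
uncertain_idx = np.where(reward < threshold)[0]

# Step (iii): disturbance model -- apply a small, bounded perturbation
# only to the uncertain states instead of attacking every state.
epsilon = 0.05                           # assumed perturbation budget
perturbed_states = states.copy()
perturbed_states[uncertain_idx] += rng.uniform(
    -epsilon, epsilon, size=(len(uncertain_idx), states.shape[1])
)

print(f"{len(uncertain_idx)} of {len(states)} states selected for perturbation")
```

The design point the sketch captures is that the expensive part of adversarial testing (finding where to attack) is replaced by a cheap pass over logged data, and the perturbation itself stays small and targeted.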