{"title":"Cognitive Reinforcement Learning: An Interpretable Decision-Making for Virtual Driver","authors":"Hao Qi;Enguang Hou;Peijun Ye","doi":"10.1109/JRFID.2024.3418649","DOIUrl":null,"url":null,"abstract":"The interpretability of decision-making in autonomous driving is crucial for the building of virtual driver, promoting the trust worth of artificial intelligence (AI) and the efficiency of human-machine interaction. However, current data-driven methods such as deep reinforcement learning (DRL) directly acquire driving policies from collected data, where the decision-making process is vague for safety validation. To address this issue, this paper proposes cognitive reinforcement learning that can both simulate the human driver’s deliberation and provide interpretability of the virtual driver’s behaviors. The new method involves cognitive modeling, reinforcement learning and reasoning path extraction. Experiments on the virtual driving environment indicate that our method can semantically interpret the virtual driver’s behaviors. The results show that the proposed cognitive reinforcement learning model combines the interpretability of cognitive models with the learning capability of reinforcement learning, providing a new approach for the construction of trustworthy virtual drivers.","PeriodicalId":73291,"journal":{"name":"IEEE journal of radio frequency identification","volume":null,"pages":null},"PeriodicalIF":2.3000,"publicationDate":"2024-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE journal of radio frequency identification","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10570307/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Abstract
The interpretability of decision-making in autonomous driving is crucial for building a virtual driver, as it promotes the trustworthiness of artificial intelligence (AI) and the efficiency of human-machine interaction. However, current data-driven methods such as deep reinforcement learning (DRL) acquire driving policies directly from collected data, leaving the decision-making process too opaque for safety validation. To address this issue, this paper proposes cognitive reinforcement learning, which can both simulate the human driver's deliberation and render the virtual driver's behaviors interpretable. The method combines cognitive modeling, reinforcement learning, and reasoning path extraction. Experiments in a virtual driving environment show that our method can semantically interpret the virtual driver's behaviors. The results show that the proposed cognitive reinforcement learning model combines the interpretability of cognitive models with the learning capability of reinforcement learning, offering a new approach to constructing trustworthy virtual drivers.
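To make the pipeline named in the abstract (cognitive modeling, reinforcement learning, reasoning path extraction) concrete, the sketch below is a minimal, hypothetical illustration, not the authors' implementation: states are human-readable driving predicates standing in for a cognitive model, a tabular Q-learning agent learns over them, and a greedy rollout is logged as a crude "reasoning path". The toy environment, predicate names, and reward values are all assumptions for illustration.

```python
# Hypothetical sketch: Q-learning over semantically labeled states,
# plus extraction of a human-readable reasoning path. The environment,
# predicates, and rewards are illustrative assumptions only.
import random
from collections import defaultdict

# Semantically labeled driving states (stand-ins for cognitive predicates).
STATES = ["lead_car_close", "lead_car_far", "lane_clear_left"]
ACTIONS = ["brake", "keep_speed", "change_lane_left"]

def step(state, action):
    """Toy transition/reward model, for illustration only."""
    if state == "lead_car_close" and action == "brake":
        return "lead_car_far", 1.0
    if state == "lead_car_close" and action == "change_lane_left":
        return "lane_clear_left", 0.5
    if state == "lead_car_close":
        return "lead_car_close", -1.0  # keeping speed when close is penalized
    return random.choice(STATES), 0.0

q = defaultdict(float)  # Q-values keyed by (state, action)

def choose(state, eps=0.1):
    """Epsilon-greedy action selection over the labeled actions."""
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[(state, a)])

# Standard tabular Q-learning; the "cognitive" aspect here is only that
# states are readable predicates rather than raw sensor vectors.
alpha, gamma = 0.1, 0.9
for _ in range(5000):
    s = random.choice(STATES)
    for _ in range(10):
        a = choose(s)
        s2, r = step(s, a)
        q[(s, a)] += alpha * (r + gamma * max(q[(s2, b)] for b in ACTIONS) - q[(s, a)])
        s = s2

def reasoning_path(state, depth=3):
    """Greedy rollout recorded as IF-THEN steps: a crude stand-in for
    the reasoning-path extraction described in the abstract."""
    path = []
    for _ in range(depth):
        a = choose(state, eps=0.0)
        path.append(f"IF {state} THEN {a}")
        state, _ = step(state, a)
    return path

print("\n".join(reasoning_path("lead_car_close")))
```

Because every state and action carries a semantic label, the extracted path reads as a chain of rules (e.g., "IF lead_car_close THEN brake"), which is the kind of semantically interpretable trace the abstract attributes to the proposed model.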