Rinta Kridalukmana, D. Eridani, Risma Septiana, A. F. Rochim, Charisma T. Setyobudhi
2022 IEEE International Conference on Cybernetics and Computational Intelligence (CyberneticsCom), published 2022-06-16. DOI: 10.1109/CyberneticsCom55287.2022.9865662
A Driving Situation Inference for Autopilot Agent Transparency in Collaborative Driving Context
Overtrust in the autopilot agent has been identified as a primary factor in road incidents involving autonomous cars. Because this agent is considered the human driver's counterpart in the collaborative driving context, many researchers recommend agent transparency to mitigate such an overtrust mental model. Hence, this paper aims to develop a driving situation inference method that serves as a transparency provider, explaining the types of situations the autopilot agent encounters that lead to its decisions. The proposed method is verified using the autonomous driving simulator CARLA. The findings show that the proposed method can generate situation explanations that enable the human driver to calibrate their trust in the autopilot agent.