Tingru Zhang, Weitao Li, Weixing Huang, Liang Ma
DOI: 10.1016/j.ergon.2024.103568
Journal: International Journal of Industrial Ergonomics, Volume 100, Article 103568
Published: 2024-03-01 (Journal Article)
Impact factor: 2.5; JCR: Q2 (Engineering, Industrial); CAS region: 2 (Engineering & Technology)
URL: https://www.sciencedirect.com/science/article/pii/S0169814124000246
Citations: 0
Critical roles of explainability in shaping perception, trust, and acceptance of autonomous vehicles
Despite the advancements in autonomous vehicles (AVs) and their potential benefits, widespread acceptance of AVs remains low due to the significant barrier of trust. While prior research has explored various factors influencing trust in AVs, the role of explainability (AVs' ability to describe the rationale behind their outputs in human-understandable terms) has been largely overlooked. This study investigated how the perceived explainability of AVs affects driver perception, trust, and acceptance of AVs. To this end, an enhanced AV acceptance model incorporating novel constructs such as perceived explainability and perceived intelligence was proposed. To validate the proposed model, a survey was conducted in which participants were exposed to one of three AV introductions (a basic introduction, a video introduction, or an introduction with how + why explanations). Responses from 399 participants were analyzed using structural equation modeling. The results showed that perceived explainability had the most profound impact on trust, exerting both direct and indirect effects. AVs perceived as more explainable were also considered easier to use, more useful, safer, and more intelligent, which in turn fostered trust and acceptance. Additionally, perceived intelligence had a significant effect on trust, indicating that drivers view AVs as intelligent agents rather than mere passive tools. While traditional factors such as perceived ease of use and perceived usefulness remained significant predictors of trust, their effects were smaller than those of perceived explainability and perceived intelligence. These findings underscore the importance of explainability in AV design for addressing the trust-related challenges associated with AV adoption.
About the journal:
The journal publishes original contributions that add to our understanding of the role of humans in today's systems and their interactions with various system components. The journal typically covers the following areas: industrial and occupational ergonomics; design of systems, tools, and equipment; human performance measurement and modeling; human productivity; humans in technologically complex systems; and safety. Articles include basic theoretical advances, applications, case studies, new methodologies and procedures, and empirical studies.