N. Tenhundfeld, E. D. de Visser, Kerstin S Haring, Anthony J. Ries, V. Finomore, Chad C. Tossell
{"title":"通过熟悉特斯拉Model X的自动标记功能来校准对自动化的信任","authors":"N. Tenhundfeld, E. D. de Visser, Kerstin S Haring, Anthony J. Ries, V. Finomore, Chad C. Tossell","doi":"10.1177/1555343419869083","DOIUrl":null,"url":null,"abstract":"Because one of the largest influences on trust in automation is the familiarity with the system, we sought to examine the effects of familiarity on driver interventions while using the autoparking feature of a Tesla Model X. Participants were either told or shown how the autoparking feature worked. Results showed a significantly higher initial driver intervention rate when the participants were only told how to employ the autoparking feature, than when shown. However, the intervention rate quickly leveled off, and differences between conditions disappeared. The number of interventions and the distances from the parking anchoring point (a trashcan) were used to create a new measure of distrust in autonomy. Eyetracking measures revealed that participants disengaged from monitoring the center display as the experiment progressed, which could be a further indication of a lowering of distrust in the system. Combined, these results have important implications for development and design of explainable artificial intelligence and autonomous systems. Finally, we detail the substantial hurdles encountered while trying to evaluate “autonomy in the wild.” Our research highlights the need to re-evaluate trust concepts in high-risk, high-consequence environments.","PeriodicalId":46342,"journal":{"name":"Journal of Cognitive Engineering and Decision Making","volume":"13 1","pages":"279 - 294"},"PeriodicalIF":2.2000,"publicationDate":"2019-08-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/1555343419869083","citationCount":"37","resultStr":"{\"title\":\"Calibrating Trust in Automation Through Familiarity With the Autoparking Feature of a Tesla Model X\",\"authors\":\"N. 
Tenhundfeld, E. D. de Visser, Kerstin S Haring, Anthony J. Ries, V. Finomore, Chad C. Tossell\",\"doi\":\"10.1177/1555343419869083\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Because one of the largest influences on trust in automation is the familiarity with the system, we sought to examine the effects of familiarity on driver interventions while using the autoparking feature of a Tesla Model X. Participants were either told or shown how the autoparking feature worked. Results showed a significantly higher initial driver intervention rate when the participants were only told how to employ the autoparking feature, than when shown. However, the intervention rate quickly leveled off, and differences between conditions disappeared. The number of interventions and the distances from the parking anchoring point (a trashcan) were used to create a new measure of distrust in autonomy. Eyetracking measures revealed that participants disengaged from monitoring the center display as the experiment progressed, which could be a further indication of a lowering of distrust in the system. Combined, these results have important implications for development and design of explainable artificial intelligence and autonomous systems. 
Finally, we detail the substantial hurdles encountered while trying to evaluate “autonomy in the wild.” Our research highlights the need to re-evaluate trust concepts in high-risk, high-consequence environments.\",\"PeriodicalId\":46342,\"journal\":{\"name\":\"Journal of Cognitive Engineering and Decision Making\",\"volume\":\"13 1\",\"pages\":\"279 - 294\"},\"PeriodicalIF\":2.2000,\"publicationDate\":\"2019-08-06\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://sci-hub-pdf.com/10.1177/1555343419869083\",\"citationCount\":\"37\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Cognitive Engineering and Decision Making\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1177/1555343419869083\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"ENGINEERING, INDUSTRIAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Cognitive Engineering and Decision Making","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1177/1555343419869083","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"ENGINEERING, INDUSTRIAL","Score":null,"Total":0}
Calibrating Trust in Automation Through Familiarity With the Autoparking Feature of a Tesla Model X
Because one of the largest influences on trust in automation is familiarity with the system, we sought to examine the effects of familiarity on driver interventions while using the autoparking feature of a Tesla Model X. Participants were either told or shown how the autoparking feature worked. Results showed a significantly higher initial driver intervention rate when participants were only told how to employ the autoparking feature than when they were shown. However, the intervention rate quickly leveled off, and differences between conditions disappeared. The number of interventions and the distances from the parking anchoring point (a trashcan) were used to create a new measure of distrust in autonomy. Eye-tracking measures revealed that participants disengaged from monitoring the center display as the experiment progressed, which could be a further indication of lowered distrust in the system. Combined, these results have important implications for the development and design of explainable artificial intelligence and autonomous systems. Finally, we detail the substantial hurdles encountered while trying to evaluate "autonomy in the wild." Our research highlights the need to re-evaluate trust concepts in high-risk, high-consequence environments.