Calibrating workers’ trust in intelligent automated systems
Gale M. Lucas, Burcin Becerik-Gerber, Shawn C. Roll
Patterns, published 2024-08-29
DOI: 10.1016/j.patter.2024.101045
Citations: 0
Abstract
With the rapid rise in the prevalence of automation, trust in such technology has become more critical than ever before. Trust is confidence in a particular entity, especially in regard to the consequences it can have for the trustor, and calibrated trust is the extent to which judgments of trust are accurate. The focus of this paper is to reevaluate the general understanding of calibrating trust in automation, update this understanding, and apply it to workers’ trust in automation in the workplace. Seminal models of trust in automation were designed for automation that was already common in workforces, where the machine’s “intelligence” (i.e., capacity for decision making, cognition, and/or understanding) was limited. Now, burgeoning automation with more human-like intelligence is intended to be more interactive with workers, serving in roles such as decision aid, assistant, or collaborative coworker. Thus, we revise “calibrated trust in automation” to include more intelligent automated systems.