Judgments of Difficulty (JODs) While Observing an Automated System Support the Media Equation and Unique Agent Hypotheses.

IF 2.9 · JCR Region 3 (Psychology) · Q1 BEHAVIORAL SCIENCES · Human Factors · Pub Date: 2024-08-18 · DOI: 10.1177/00187208241273379
Jade Driggs, Lisa Vangsness
Citations: 0

Abstract


Objective: We investigated how people used cues to make Judgments of Difficulty (JODs) while observing automation perform a task and when performing this task themselves.

Background: Task difficulty is a factor affecting trust in automation; however, no research has explored how individuals make JODs when watching automation, or whether these judgments are similar to or different from those made while watching humans. It is also unclear how cue use when observing automation varies with experience.

Method: The study involved a visual search task. Some participants performed the task first, then watched automation complete it. Others watched and then performed, and a third group alternated between performing and watching. After each trial, participants made a JOD by indicating if the task was easier or harder than before. Task difficulty randomly changed every five trials.

Results: A Bayesian regression suggested that cue use while observing automation is both similar to and different from cue use while observing humans. For central cues, support for the Unique Agent Hypothesis (UAH) was bounded by experience: those who performed the task first underweighted central cues when making JODs, relative to their counterparts in a previous study involving humans. For peripheral cues, support for the Media Equation Hypothesis (MEH) was unequivocal, and participants weighted cues similarly across observation sources.

Conclusion: People weighted cues both similarly to and differently from how they weighted them when watching humans, supporting the Media Equation and Unique Agent Hypotheses.

Application: This study adds to a growing understanding of judgments in human-human and human-automation interactions.

Source journal: Human Factors (Management Science — Behavioral Sciences)
CiteScore: 10.60
Self-citation rate: 6.10%
Annual article count: 99
Review time: 6-12 weeks
Journal description: Human Factors: The Journal of the Human Factors and Ergonomics Society publishes peer-reviewed scientific studies in human factors/ergonomics that present theoretical and practical advances concerning the relationship between people and technologies, tools, environments, and systems. Papers published in Human Factors leverage fundamental knowledge of human capabilities and limitations — and the basic understanding of cognitive, physical, behavioral, physiological, social, developmental, affective, and motivational aspects of human performance — to yield design principles; enhance training, selection, and communication; and ultimately improve human-system interfaces and sociotechnical systems that lead to safer and more effective outcomes.