More Is Not Always Better: Impacts of AI-Generated Confidence and Explanations in Human-Automation Interaction.

Human Factors · IF 2.9 · JCR Q1 (Behavioral Sciences) · CAS Tier 3 (Psychology) · Pub Date: 2024-12-01 · Epub Date: 2024-03-04 · DOI: 10.1177/00187208241234810
Shihong Ling, Yutong Zhang, Na Du
{"title":"并非越多越好:人工智能生成的信心和解释对人机交互的影响。","authors":"Shihong Ling, Yutong Zhang, Na Du","doi":"10.1177/00187208241234810","DOIUrl":null,"url":null,"abstract":"<p><strong>Objective: </strong>The study aimed to enhance transparency in autonomous systems by automatically generating and visualizing confidence and explanations and assessing their impacts on performance, trust, preference, and eye-tracking behaviors in human-automation interaction.</p><p><strong>Background: </strong>System transparency is vital to maintaining appropriate levels of trust and mission success. Previous studies presented mixed results regarding the impact of displaying likelihood information and explanations, and often relied on hand-created information, limiting scalability and failing to address real-world dynamics.</p><p><strong>Method: </strong>We conducted a dual-task experiment involving 42 university students who operated a simulated surveillance testbed with assistance from intelligent detectors. The study used a 2 (confidence visualization: yes vs. no) × 3 (visual explanations: none, bounding boxes, bounding boxes and keypoints) mixed design. Task performance, human trust, preference for intelligent detectors, and eye-tracking behaviors were evaluated.</p><p><strong>Results: </strong>Visual explanations using bounding boxes and keypoints improved detection task performance when confidence was not displayed. Meanwhile, visual explanations enhanced trust and preference for the intelligent detector, regardless of the explanation type. Confidence visualization did not influence human trust in and preference for the intelligent detector. Moreover, both visual information slowed saccade velocities.</p><p><strong>Conclusion: </strong>The study demonstrated that visual explanations could improve performance, trust, and preference in human-automation interaction without confidence visualization partially by changing the search strategies. However, excessive information might cause adverse effects.</p><p><strong>Application: </strong>These findings provide guidance for the design of transparent automation, emphasizing the importance of context-appropriate and user-centered explanations to foster effective human-machine collaboration.</p>","PeriodicalId":56333,"journal":{"name":"Human Factors","volume":null,"pages":null},"PeriodicalIF":2.9000,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"More Is Not Always Better: Impacts of AI-Generated Confidence and Explanations in Human-Automation Interaction.\",\"authors\":\"Shihong Ling, Yutong Zhang, Na Du\",\"doi\":\"10.1177/00187208241234810\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Objective: </strong>The study aimed to enhance transparency in autonomous systems by automatically generating and visualizing confidence and explanations and assessing their impacts on performance, trust, preference, and eye-tracking behaviors in human-automation interaction.</p><p><strong>Background: </strong>System transparency is vital to maintaining appropriate levels of trust and mission success. 
Previous studies presented mixed results regarding the impact of displaying likelihood information and explanations, and often relied on hand-created information, limiting scalability and failing to address real-world dynamics.</p><p><strong>Method: </strong>We conducted a dual-task experiment involving 42 university students who operated a simulated surveillance testbed with assistance from intelligent detectors. The study used a 2 (confidence visualization: yes vs. no) × 3 (visual explanations: none, bounding boxes, bounding boxes and keypoints) mixed design. Task performance, human trust, preference for intelligent detectors, and eye-tracking behaviors were evaluated.</p><p><strong>Results: </strong>Visual explanations using bounding boxes and keypoints improved detection task performance when confidence was not displayed. Meanwhile, visual explanations enhanced trust and preference for the intelligent detector, regardless of the explanation type. Confidence visualization did not influence human trust in and preference for the intelligent detector. Moreover, both visual information slowed saccade velocities.</p><p><strong>Conclusion: </strong>The study demonstrated that visual explanations could improve performance, trust, and preference in human-automation interaction without confidence visualization partially by changing the search strategies. However, excessive information might cause adverse effects.</p><p><strong>Application: </strong>These findings provide guidance for the design of transparent automation, emphasizing the importance of context-appropriate and user-centered explanations to foster effective human-machine collaboration.</p>\",\"PeriodicalId\":56333,\"journal\":{\"name\":\"Human Factors\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":2.9000,\"publicationDate\":\"2024-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Human Factors\",\"FirstCategoryId\":\"102\",\"ListUrlMain\":\"https://doi.org/10.1177/00187208241234810\",\"RegionNum\":3,\"RegionCategory\":\"心理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2024/3/4 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"Q1\",\"JCRName\":\"BEHAVIORAL SCIENCES\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Human Factors","FirstCategoryId":"102","ListUrlMain":"https://doi.org/10.1177/00187208241234810","RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/3/4 0:00:00","PubModel":"Epub","JCR":"Q1","JCRName":"BEHAVIORAL SCIENCES","Score":null,"Total":0}
Citations: 0

Abstract


Objective: The study aimed to enhance transparency in autonomous systems by automatically generating and visualizing confidence and explanations, and to assess their impacts on performance, trust, preference, and eye-tracking behaviors in human-automation interaction.
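To make the manipulation concrete: a detector's output can be overlaid on each frame of the surveillance feed as a bounding box, a set of keypoints, and an optional confidence label. Below is a minimal Pillow sketch of such an overlay; the detection dict format, colors, and function name are illustrative assumptions, not the study's actual testbed code.

```python
from PIL import Image, ImageDraw

def overlay_detection(img, det, show_confidence):
    """Draw one detection onto a copy of `img`.

    `det` uses an assumed format:
    {"box": (x0, y0, x1, y1), "keypoints": [(x, y), ...], "confidence": 0.87}
    """
    out = img.copy()
    draw = ImageDraw.Draw(out)
    x0, y0, x1, y1 = det["box"]
    draw.rectangle([x0, y0, x1, y1], outline="red", width=3)   # bounding-box explanation
    for kx, ky in det.get("keypoints", []):                    # keypoint explanation
        draw.ellipse([kx - 3, ky - 3, kx + 3, ky + 3], fill="yellow")
    if show_confidence:                                        # confidence visualization
        draw.text((x0, max(0, y0 - 14)), f"{det['confidence']:.0%}", fill="red")
    return out

# Hypothetical usage:
# frame = Image.open("frame.png")
# det = {"box": (40, 30, 180, 220), "keypoints": [(90, 60), (130, 60)], "confidence": 0.87}
# overlay_detection(frame, det, show_confidence=True).save("annotated.png")
```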

Background: System transparency is vital to maintaining appropriate levels of trust and mission success. Previous studies presented mixed results regarding the impact of displaying likelihood information and explanations, and often relied on hand-created information, limiting scalability and failing to address real-world dynamics.

Method: We conducted a dual-task experiment involving 42 university students who operated a simulated surveillance testbed with assistance from intelligent detectors. The study used a 2 (confidence visualization: yes vs. no) × 3 (visual explanations: none, bounding boxes, bounding boxes and keypoints) mixed design. Task performance, human trust, preference for intelligent detectors, and eye-tracking behaviors were evaluated.
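For readers who want the design spelled out, the sketch below enumerates the six cells of the 2 × 3 design and assigns trials to a participant. The abstract does not state which factor was manipulated between subjects, so treating confidence visualization as between-subjects and explanation type as within-subjects (with a counterbalanced order) is an assumption made purely for illustration.

```python
import random

# Factors from the abstract: 2 confidence levels x 3 explanation types = 6 cells.
CONFIDENCE = [True, False]  # confidence displayed vs. not displayed
EXPLANATION = ["none", "bounding_boxes", "bounding_boxes_and_keypoints"]

def assign_conditions(participant_id, seed=0):
    """Assumed mixed design: confidence between-subjects,
    explanation type within-subjects in a counterbalanced order."""
    confidence = CONFIDENCE[participant_id % 2]          # alternate groups
    order = EXPLANATION.copy()
    random.Random(seed + participant_id).shuffle(order)  # per-participant order
    return [(confidence, explanation) for explanation in order]

# The study ran 42 participants; the first four are shown here.
for pid in range(4):
    print(pid, assign_conditions(pid))
```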

Results: Visual explanations using bounding boxes and keypoints improved detection task performance when confidence was not displayed. Meanwhile, visual explanations enhanced trust in and preference for the intelligent detector, regardless of the explanation type. Confidence visualization did not influence human trust in and preference for the intelligent detector. Moreover, both types of visual information slowed saccade velocities.
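Saccade velocity is typically extracted from raw gaze samples with a velocity-threshold (I-VT) pass; the sketch below shows that standard computation, not necessarily the pipeline used in the study. The 250 Hz sampling rate and the 30 deg/s cutoff are common but assumed values.

```python
import numpy as np

def saccade_velocities(x_deg, y_deg, hz=250.0, threshold=30.0):
    """Return velocities (deg/s) of samples classified as saccadic.

    x_deg, y_deg: gaze traces in degrees of visual angle.
    hz: eye-tracker sampling rate (assumed 250 Hz).
    threshold: I-VT velocity cutoff in deg/s (assumed 30 deg/s).
    """
    x = np.asarray(x_deg, dtype=float)
    y = np.asarray(y_deg, dtype=float)
    # Point-to-point angular distance times sampling rate gives velocity.
    velocity = np.hypot(np.diff(x), np.diff(y)) * hz
    return velocity[velocity > threshold]

# Hypothetical per-trial summary, e.g. mean saccade velocity:
# vels = saccade_velocities(trial_x, trial_y)
# mean_vel = vels.mean() if vels.size else float("nan")
```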

Conclusion: The study demonstrated that visual explanations could improve performance, trust, and preference in human-automation interaction in the absence of confidence visualization, partly by changing search strategies. However, excessive information might cause adverse effects.

Application: These findings provide guidance for the design of transparent automation, emphasizing the importance of context-appropriate and user-centered explanations to foster effective human-machine collaboration.

Source journal: Human Factors (Management Science - Behavioral Sciences)
CiteScore: 10.60
Self-citation rate: 6.10%
Articles per year: 99
Review time: 6-12 weeks
About the journal: Human Factors: The Journal of the Human Factors and Ergonomics Society publishes peer-reviewed scientific studies in human factors/ergonomics that present theoretical and practical advances concerning the relationship between people and technologies, tools, environments, and systems. Papers published in Human Factors leverage fundamental knowledge of human capabilities and limitations – and the basic understanding of cognitive, physical, behavioral, physiological, social, developmental, affective, and motivational aspects of human performance – to yield design principles; enhance training, selection, and communication; and ultimately improve human-system interfaces and sociotechnical systems that lead to safer and more effective outcomes.