
Journal of Cognitive Engineering and Decision Making: Latest Publications

Where Failures May Occur in Automated Driving: A Fault Tree Analysis Approach
IF 2.0 | Q3 ENGINEERING, INDUSTRIAL | Pub Date: 2022-08-06 | DOI: 10.1177/15553434221116254
Kuan-Ting Chen, H. Chen, Ann Bisantz, Su Shen, Ercan Sahin
There will be circumstances where partially or conditionally automated vehicles fail to drive safely and require human intervention. Within the human factors community, the taxonomies surrounding control transitions have primarily focused on characterizing the stages and sequences of the transition between the automated driving system (ADS) and the human driver. Recognizing the variance in operational design domains (ODDs) across vehicles equipped with ADS and how variable takeover situations may be, we describe a simple taxonomy of takeover situations to aid the identification and discussion of takeover scenarios in future takeover studies. By considering the ODD structure and the human information processing stages, we constructed a fault tree analysis (FTA) aimed at identifying potential failure sources that would prevent successful control transitions. The FTA was applied in analyzing two real-world accidents involving ADS failures, illustrating how this approach can help identify areas for improvement in the system, interface, or training design to support drivers in level 2 and level 3 automated driving.
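For readers unfamiliar with fault tree analysis, the short Python sketch below illustrates the general mechanics the abstract draws on: basic events combined through AND/OR gates up to a top event such as an unsuccessful control transition. It is a minimal illustration only, not the authors' tree; the event names and probabilities are hypothetical, and independence of events is assumed.

from dataclasses import dataclass
from typing import List, Union

@dataclass
class BasicEvent:
    name: str
    p: float  # hypothetical probability of occurrence

    def probability(self) -> float:
        return self.p

@dataclass
class Gate:
    name: str
    kind: str  # "AND" or "OR"
    children: List[Union["Gate", BasicEvent]]

    def probability(self) -> float:
        probs = [c.probability() for c in self.children]
        if self.kind == "AND":
            result = 1.0
            for p in probs:
                result *= p
            return result
        # OR gate: 1 minus the product of complements, assuming independence
        result = 1.0
        for p in probs:
            result *= (1.0 - p)
        return 1.0 - result

# Hypothetical structure loosely echoing ODD factors and information-processing stages
driver_side = Gate("driver fails to respond to takeover request", "OR", [
    BasicEvent("alert not perceived", 0.02),
    BasicEvent("driver distracted by secondary task", 0.10),
])
system_side = Gate("ADS fails to issue a timely takeover request", "OR", [
    BasicEvent("ODD exit not detected", 0.01),
    BasicEvent("sensor degradation", 0.005),
])
top_event = Gate("unsuccessful control transition", "OR", [system_side, driver_side])

print(f"P(top event) = {top_event.probability():.4f}")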
Citations: 4
Exploring the Relationship Between Ethics and Trust in Human–Artificial Intelligence Teaming: A Mixed Methods Approach
IF 2.0 | Q3 ENGINEERING, INDUSTRIAL | Pub Date: 2022-08-02 | DOI: 10.1177/15553434221113964
Claire Textor, Rui Zhang, Jeremy Lopez, Beau G. Schelble, Nathan J. Mcneese, Guo Freeman, R. Pak, Chad C. Tossell, E. D. de Visser
Advancements and implementations of autonomous systems coincide with an increased concern for the ethical implications resulting from their use. This is increasingly relevant as autonomy fulfills teammate roles in contexts that demand ethical considerations. As AI teammates (ATs) enter these roles, research is needed to explore how an AT’s ethics influences human trust. This current research presents two studies which explore how an AT’s ethical or unethical behavior impacts trust in that teammate. In Study 1, participants responded to scenarios of an AT recommending actions which violated or abided by a set of ethical principles. The results suggest that ethicality perceptions and trust are influenced by ethical violations, but only ethicality depends on the type of ethical violation. Participants in Study 2 completed a focus group interview after performing a team task with a simulated AT that committed ethical violations and attempted to repair trust (apology or denial). The focus group responses suggest that ethical violations worsened perceptions of the AT and decreased trust, but it could still be trusted to perform tasks. The AT’s apologies and denials did not repair damaged trust. The studies’ findings suggest a nuanced relationship between trust and ethics and a need for further investigation into trust repair strategies following ethical violations.
Citations: 7
Autonomy as a Teammate: Evaluation of Teammate-Likeness
IF 2.0 | Q3 ENGINEERING, INDUSTRIAL | Pub Date: 2022-06-12 | DOI: 10.1177/15553434221108002
G. Tokadlı, M. Dorneich
What makes an autonomous system a teammate? The paper presents an evaluation of factors that can encourage a human to perceive an autonomous system as a teammate rather than a tool. Increased perception of teammate-likeness more closely matches the human’s expectations of a teammate’s behavior, benefiting coordination and cooperation. Previous work with commercial pilots suggested that autonomous systems should provide visible cues of actions situated in the work environment. These results motivated the present study to investigate the impact of feedback modality on the teammate-likeness of an autonomous system under low (sequential events) and high (concurrent events) task loads. A Cognitive Assistant (CA) was developed as an autonomous teammate to support a (simulated) Mars mission. With centralized feedback, the autonomous teammate provided verbal and written information on a dedicated display. With distributed feedback, the autonomous teammate provided visible cues of actions in the environment in addition to centralized feedback. Perception of teammate-likeness increased with distributed feedback due to increased awareness of the CA’s actions, especially under low task load. Under high task load, teamwork performance was higher with distributed feedback than with centralized feedback, whereas under low task load there was no difference in teamwork performance between feedback modalities.
Citations: 5
How Much Reliability Is Enough? A Context-Specific View on Human Interaction With (Artificial) Agents From Different Perspectives
IF 2.0 | Q3 ENGINEERING, INDUSTRIAL | Pub Date: 2022-06-03 | DOI: 10.1177/15553434221104615
Ksenia Appelganc, Tobias Rieger, Eileen Roesler, D. Manzey
Tasks classically performed by human–human teams in today’s workplaces are increasingly given to human–technology teams instead. The role of technology is played not only by classic decision support systems (DSSs) but increasingly by artificial intelligence (AI). Reliability is a key factor influencing trust in technology. Therefore, we investigated the reliability participants require in order to perceive the support agents (human, AI, and DSS) as “highly reliable.” We then examined how trust differed between these highly reliable agents. Whilst there is a range of research identifying trust as an important determinant in human–DSS interaction, the question is whether these findings are also applicable to the interaction between humans and AI. To study these issues, we conducted an experiment (N = 300) with two different tasks, usually performed by dyadic teams (loan assignment and x-ray screening), from two different perspectives (i.e., working together with the agent or being evaluated by it). In contrast to our hypotheses, the required reliability when working together was equal regardless of the agent. Nevertheless, participants trusted the human more than an AI or DSS. They also required that AI be more reliable than a human when used to evaluate themselves, thus illustrating the importance of changing perspective.
Citations: 5
A Naturalistic Investigation of Trust, AI, and Intelligence Work
IF 2.0 | Q3 ENGINEERING, INDUSTRIAL | Pub Date: 2022-05-25 | DOI: 10.1177/15553434221103718
Stephen L. Dorton, Samantha B. Harper
Artificial Intelligence (AI) is often viewed as the means by which the intelligence community will cope with increasing amounts of data. There are challenges in adoption, however, as outputs of such systems may be difficult to trust due to a variety of factors. We conducted a naturalistic study using the Critical Incident Technique (CIT) to identify which factors were present in incidents where trust in an AI technology used in intelligence work (i.e., the collection, processing, analysis, and dissemination of intelligence) was gained or lost. We found that explainability and performance of the AI were the most prominent factors in responses; however, several other factors affected the development of trust. Further, most incidents involved two or more trust factors, demonstrating that trust is a multifaceted phenomenon. We also conducted a broader thematic analysis to identify other trends in the data. We found that trust in AI is often affected by the interaction of other people with the AI (i.e., people who develop it or use its outputs), and that involving end users in the development of the AI also affects trust. We provide an overview of key findings, practical implications for design, and possible future areas for research.
Citations: 14
Clinical Reasoning among Registered Nurses in Emergency Medical Services: A Case Study
IF 2.0 | Q3 ENGINEERING, INDUSTRIAL | Pub Date: 2022-05-24 | DOI: 10.1177/15553434221097788
Ulf Andersson, M. Andersson Hagiwara, B. Wireklint Sundström, Henrik Andersson, Hanna Maurin Söderholm
In emergency medical services (EMS), the clinical reasoning (CR) of registered nurses (RNs) working in ambulance care plays an important role in providing care and treatment that is timely, accurate, appropriate and safe. However, limited existing knowledge about how CR is formed and influenced by the EMS mission hinders the development of service provision and decision support tools for RNs that would further enhance patient safety. To explore the nature of CR and influencing factors in this context, an inductive case study examined 34 observed patient–RN encounters in an EMS setting focusing on ambulance care. The results reveal a fragmented CR approach involving several parallel decision-making processes grounded in and led by patients’ narratives. The findings indicate that RNs are not always aware of their own CR and associated influences until they actively reflect on the process, and additional research is needed to clarify this complex phenomenon.
Citations: 2
Perceptual–Cognitive Expertise in Law Enforcement: An Object-Identification Task
IF 2.0 | Q3 ENGINEERING, INDUSTRIAL | Pub Date: 2022-05-22 | DOI: 10.1177/15553434221104600
Dakota D. Scott, Lisa Vangsness, Joel Suss
The few perceptual–cognitive expertise and deception studies in the domain of law enforcement have yet to examine perceptual–cognitive expertise differences between police trainees and police officers. The current study uses methods from the perceptual–cognitive expertise and deception models. Participants watched temporally occluded videos of actors honestly drawing a weapon and deceptively drawing a non-weapon from a concealed location on their body. Participants determined whether the actor was holding a weapon or a non-weapon. Using signal-detection metrics—sensitivity and response bias—we did not find evidence of perceptual–cognitive expertise; performance measures did not differ significantly between police trainees and experienced officers. However, consistent with the hypotheses, we did find that both police trainees and police officers became more sensitive in identifying the object as occlusion points progressed. Additionally, we found that across police trainees and police officers, response bias became more liberal (i.e., more likely to identify the object as a weapon) as occlusion points progressed. This information has potential implications for law enforcement practices and additional research.
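The sensitivity and response-bias measures named here are standard signal-detection quantities (d-prime and criterion c). The Python sketch below shows one common way to compute them from hit and false-alarm counts; the counts are hypothetical rather than data from the study, and the log-linear correction is just one conventional way to avoid infinite z-scores.

from statistics import NormalDist

def dprime_and_criterion(hits, misses, false_alarms, correct_rejections):
    """Return (d', c), using a log-linear correction to avoid hit/FA rates of 0 or 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Hypothetical responses at one occlusion point: 40 weapon and 40 non-weapon trials
d, c = dprime_and_criterion(hits=30, misses=10, false_alarms=14, correct_rejections=26)
print(f"d' = {d:.2f}, c = {c:.2f}")  # a negative c indicates a liberal ("weapon") bias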
Citations: 0
Assessment of Trust in Automation in the “Real World”: Requirements for New Trust in Automation Measurement Techniques for Use by Practitioners
IF 2.0 | Q3 ENGINEERING, INDUSTRIAL | Pub Date: 2022-05-06 | DOI: 10.1177/15553434221096261
N. Tenhundfeld, Mustafa Demir, E. D. de Visser
Trust in automation is a foundational principle in Human Factors Engineering. An understanding of trust can help predict and alter much of human-machine interaction (HMI). However, despite the utility of assessing trust in automation in applied settings, there are inherent and unique challenges in trust assessment for those who seek to do so outside of the confines of the sterile lab environment. Because of these challenges, new approaches for trust in automation assessment need to be developed to best suit the unique demands of trust assessment in the real world. This paper lays out six requirements for these future measures: they should (1) be short, unobtrusive, and interaction-based, (2) be context-specific and adaptable, (3) be dynamic, (4) account for autonomy versus automation dependency, (5) account for task dependency, and (6) account for levels of risk. For the benefits of trust assessment to be realized in the “real world,” future research needs to leverage the existing body of literature on trust in automation while looking toward the needs of the practitioner.
Citations: 6
A Comparison of Dynamical Perceptual-Motor Primitives and Deep Reinforcement Learning for Human-Artificial Agent Training Systems
IF 2.0 | Q3 ENGINEERING, INDUSTRIAL | Pub Date: 2022-04-25 | DOI: 10.1177/15553434221092930
Lillian M. Rigoli, Gaurav Patil, Patrick Nalepka, Rachel W. Kallen, S. Hosking, Christopher J. Best, Michael J. Richardson
Effective team performance often requires that individuals engage in team training exercises. However, organizing team-training scenarios presents economic and logistical challenges and can be prone to trainer bias and fatigue. Accordingly, a growing body of research is investigating the effectiveness of employing artificial agents (AAs) as synthetic teammates in team training simulations, and, relatedly, how to best develop AAs capable of robust, human-like behavioral interaction. Motivated by these challenges, the current study examined whether task dynamical models of expert human herding behavior could be embedded in the control architecture of AAs to train novice actors to perform a complex multiagent herding task. Training outcomes were compared to human-expert trainers, novice baseline performance, and AAs developed using deep reinforcement learning (DRL). Participants’ subjective preferences for the AAs developed using DRL or dynamical models of human performance were also investigated. The results revealed that AAs controlled by dynamical models of human expert performance could train novice actors at levels equivalent to expert human trainers and were also preferred over AAs developed using DRL. The implications for the development of AAs for robust human-AA interaction and training are discussed, including the potential benefits of employing hybrid Dynamical-DRL techniques for AA development.
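As a rough illustration of what a dynamical perceptual-motor primitive can look like in code, the sketch below implements a damped point-attractor (mass-spring) controller that steers an agent's position toward a target, a common building block in this modeling tradition. It is a generic example of the style, not the herding task model used in the study; the gains, time step, and target are hypothetical.

import numpy as np

def step_point_attractor(pos, vel, target, b=5.0, k=25.0, dt=0.01):
    """One Euler step of the damped mass-spring dynamic x'' = -b*x' - k*(x - target)."""
    acc = -b * vel - k * (pos - target)
    vel = vel + acc * dt
    pos = pos + vel * dt
    return pos, vel

pos, vel = np.zeros(2), np.zeros(2)
target = np.array([1.0, 2.0])  # hypothetical goal, e.g., position of a stray agent to corral
for _ in range(500):
    pos, vel = step_point_attractor(pos, vel, target)
print(pos)  # the agent's position converges toward the target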
Citations: 4
Examining Physicians’ Explanatory Reasoning in Re-Diagnosis Scenarios for Improving AI Diagnostic Systems
IF 2.0 | Q3 ENGINEERING, INDUSTRIAL | Pub Date: 2022-04-21 | DOI: 10.1177/15553434221085114
Lamia Alam, Shane T. Mueller
AI systems are increasingly being developed to provide the first point of contact for patients. These systems are typically focused on question-answering and integrating chat systems with diagnostic algorithms, but are likely to suffer from many of the same deficiencies in explanation that have plagued medical diagnostic systems since the 1970s (Shortliffe, 1979). To provide better guidance about how such systems should approach explanations, we report on an interview study in which we identified explanations that physicians used in the context of re-diagnosis or a change in diagnosis. Seven current and former physicians with a variety of specialties and experience were recruited to take part in the interviews. Several high-level observations were made by reviewing the interview notes. Nine broad categories of explanation emerged from the thematic analysis of the explanation contents. We also present these in a diagnosis meta-timeline that encapsulates many of the commonalities we saw across diagnoses during the interviews. Based on the results, we provided some design recommendations to consider for developing diagnostic AI systems. Altogether, this study suggests explanation strategies, approaches, and methods that might be used by medical diagnostic AI systems to improve user trust and satisfaction with these systems.
Citations: 3