
Latest Publications in Topics in Cognitive Science

Cognitive Models for Machine Theory of Mind.
IF 2.9 | CAS Tier 2 (Psychology) | Q1 PSYCHOLOGY, EXPERIMENTAL | Pub Date: 2025-04-01 | Epub Date: 2024-12-01 | DOI: 10.1111/tops.12773
Christian Lebiere, Peter Pirolli, Matthew Johnson, Michael Martin, Donald Morrison

Some of the required characteristics for a true machine theory of mind (MToM) include the ability to (1) reproduce the full diversity of human thought and behavior, (2) develop a personalized model of an individual with very limited data, and (3) provide an explanation for behavioral predictions grounded in the cognitive processes of the individual. We propose that a certain class of cognitive models provide an approach that is well suited to meeting those requirements. Being grounded in a mechanistic framework like a cognitive architecture such as ACT-R naturally fulfills the third requirement by mapping behavior to cognitive mechanisms. Exploiting a modeling paradigm such as instance-based learning accounts for the first requirement by reflecting variations in individual experience into a diversity of behavior. Mechanisms such as knowledge tracing and model tracing allow a specific run of the cognitive model to be aligned with a given individual behavior trace, fulfilling the second requirement. We illustrate these principles with a cognitive model of decision-making in a search and rescue task in the Minecraft simulation environment. We demonstrate that cognitive models personalized to individual human players can provide the MToM capability to optimize artificial intelligence agents by diagnosing the underlying causes of observed human behavior, projecting the future effects of potential interventions, and managing the adaptive process of shaping human behavior. Examples of the inputs provided by such analytic cognitive agents include predictions of cognitive load, probability of error, estimates of player self-efficacy, and trust calibration. Finally, we discuss implications for future research and applications to collective human-machine intelligence.
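The abstract names instance-based learning and model tracing but does not include an implementation. As a rough illustration only, the Python sketch below shows the ACT-R-style activation-and-blending rule that instance-based learning models typically use; the function names, parameter values (decay `d`, temperature `tau`, noise), and the toy "player memory" are assumptions for illustration, not details from the paper.

```python
import math
import random

def activation(presentation_times, now, d=0.5, noise_sd=0.25):
    """ACT-R-style base-level activation of one stored instance:
    ln(sum of (now - t_j)^-d over past presentations) plus transient noise."""
    base = math.log(sum((now - t) ** (-d) for t in presentation_times))
    return base + random.gauss(0.0, noise_sd)

def blended_value(instances, now, tau=0.25):
    """Blend the outcomes of stored instances, weighting each by a
    Boltzmann (softmax) function of its activation -- the usual
    instance-based-learning decision rule."""
    acts = [activation(times, now) for _, times in instances]
    weights = [math.exp(a / tau) for a in acts]
    total = sum(weights)
    return sum(w / total * outcome for w, (outcome, _) in zip(weights, instances))

# Toy personalization: each instance is (observed outcome, presentation times).
# Aligning the model with one player's trace amounts to populating this memory
# with that player's own experiences, in the spirit of model tracing.
player_memory = [(1.0, [1.0, 3.0, 7.0]),   # successful rescues
                 (0.0, [2.0, 9.0])]        # failed attempts
print(blended_value(player_memory, now=10.0))
```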

Citations: 0
The Inner Loop of Collective Human-Machine Intelligence.
IF 2.9 | CAS Tier 2 (Psychology) | Q1 PSYCHOLOGY, EXPERIMENTAL | Pub Date: 2025-04-01 | Epub Date: 2023-02-20 | DOI: 10.1111/tops.12642
Scott Cheng-Hsin Yang, Tomas Folke, Patrick Shafto

With the rise of artificial intelligence (AI) and the desire to ensure that such machines work well with humans, it is essential for AI systems to actively model their human teammates, a capability referred to as Machine Theory of Mind (MToM). In this paper, we introduce the inner loop of human-machine teaming expressed as communication with MToM capability. We present three different approaches to MToM: (1) constructing models of human inference with well-validated psychological theories and empirical measurements; (2) modeling human as a copy of the AI; and (3) incorporating well-documented domain knowledge about human behavior into the above two approaches. We offer a formal language for machine communication and MToM, where each term has a clear mechanistic interpretation. We exemplify the overarching formalism and the specific approaches in two concrete example scenarios. Related work that demonstrates these approaches is highlighted along the way. The formalism, examples, and empirical support provide a holistic picture of the inner loop of human-machine teaming as a foundational building block of collective human-machine intelligence.
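The paper's own formal language is not reproduced here, but the first approach it lists (modeling human inference with validated psychological theories) is often cashed out as Bayesian inference over a noisily rational partner. The sketch below is a generic illustration of that idea under those assumptions; the goal set, value function, and rationality parameter `beta` are invented for the example and are not the authors' formalism.

```python
import math

def softmax_likelihood(action, goal, value_fn, beta=2.0):
    """P(action | goal): a noisily rational partner prefers higher-value
    actions for its goal (a common psychological assumption, not
    necessarily the formalism used in the paper)."""
    actions = ["left", "right", "wait"]
    exps = {a: math.exp(beta * value_fn(a, goal)) for a in actions}
    return exps[action] / sum(exps.values())

def update_belief(prior, observed_action, value_fn):
    """Bayesian update over the partner's possible goals."""
    posterior = {g: p * softmax_likelihood(observed_action, g, value_fn)
                 for g, p in prior.items()}
    z = sum(posterior.values())
    return {g: p / z for g, p in posterior.items()}

# Toy domain: two possible goals; value_fn encodes how useful an action is.
value = lambda a, g: 1.0 if (a, g) in {("left", "A"), ("right", "B")} else 0.0
belief = {"A": 0.5, "B": 0.5}
belief = update_belief(belief, "left", value)
print(belief)   # belief shifts toward goal "A"
```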

Citations: 0
The Role of Adaptation in Collective Human-AI Teaming.
IF 2.9 | CAS Tier 2 (Psychology) | Q1 PSYCHOLOGY, EXPERIMENTAL | Pub Date: 2025-04-01 | Epub Date: 2022-11-14 | DOI: 10.1111/tops.12633
Michelle Zhao, Reid Simmons, Henny Admoni

This paper explores a framework for defining artificial intelligence (AI) that adapts to individuals within a group, and discusses the technical challenges for collaborative AI systems that must work with different human partners. Collaborative AI is not one-size-fits-all, and thus AI systems must tune their output based on each human partner's needs and abilities. For example, when communicating with a partner, an AI should consider how prepared their partner is to receive and correctly interpret the information they are receiving. Forgoing such individual considerations may adversely impact the partner's mental state and proficiency. On the other hand, successfully adapting to each person's (or team member's) behavior and abilities can yield performance benefits for the human-AI team. Under this framework, an AI teammate adapts to human partners by first learning components of the human's decision-making process and then updating its own behaviors to positively influence the ongoing collaboration. This paper explains the role of this AI adaptation formalism in dyadic human-AI interactions and examines its application through a case study in a simulated navigation domain.
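To make the adapt-then-influence loop concrete, here is a minimal sketch of one way such an AI teammate could work: estimate a single component of the partner's decision-making online (how often they accept suggestions) and adjust its own level of intervention accordingly. The class, thresholds, and update rule are hypothetical illustrations, not the framework defined in the paper.

```python
class AdaptiveTeammate:
    """Minimal illustration of the adapt-then-influence loop: estimate one
    component of the partner's decision process online (here, how often they
    accept suggestions), then tune the AI's own behavior. All names and the
    specific update rule are hypothetical, not from the paper."""

    def __init__(self, lr=0.1):
        self.accept_rate = 0.5   # running estimate of partner compliance
        self.lr = lr

    def observe(self, suggestion_accepted: bool) -> None:
        # Exponentially weighted estimate of the partner's acceptance rate.
        target = 1.0 if suggestion_accepted else 0.0
        self.accept_rate += self.lr * (target - self.accept_rate)

    def choose_support_level(self) -> str:
        # Update own behavior: intervene less when suggestions are ignored,
        # more when the partner reliably follows them.
        if self.accept_rate > 0.7:
            return "proactive suggestions"
        if self.accept_rate > 0.3:
            return "suggest only at decision points"
        return "observe and answer questions only"

ai = AdaptiveTeammate()
for accepted in [True, True, False, True]:
    ai.observe(accepted)
print(ai.accept_rate, ai.choose_support_level())
```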

Citations: 0
Fostering Collective Intelligence in Human-AI Collaboration: Laying the Groundwork for COHUMAIN.
IF 2.9 | CAS Tier 2 (Psychology) | Q1 PSYCHOLOGY, EXPERIMENTAL | Pub Date: 2025-04-01 | Epub Date: 2023-06-29 | DOI: 10.1111/tops.12679
Pranav Gupta, Thuy Ngoc Nguyen, Cleotilde Gonzalez, Anita Williams Woolley

Artificial Intelligence (AI) powered machines are increasingly mediating our work and many of our managerial, economic, and cultural interactions. While technology enhances individual capability in many ways, how do we know that the sociotechnical system as a whole, consisting of a complex web of hundreds of human-machine interactions, is exhibiting collective intelligence? Research on human-machine interactions has been conducted within different disciplinary silos, resulting in social science models that underestimate technology and vice versa. Bringing together these different perspectives and methods at this juncture is critical. To truly advance our understanding of this important and quickly evolving area, we need vehicles to help research connect across disciplinary boundaries. This paper advocates for establishing an interdisciplinary research domain, Collective Human-Machine Intelligence (COHUMAIN). It outlines a research agenda for a holistic approach to designing and developing the dynamics of sociotechnical systems. In illustrating the kind of approach we envision in this domain, we describe recent work on a sociocognitive architecture, the transactive systems model of collective intelligence, that articulates the critical processes underlying the emergence and maintenance of collective intelligence and extend it to human-AI systems. We connect this with synergistic work on a compatible cognitive architecture, instance-based learning theory, and apply it to the design of AI agents that collaborate with humans. We present this work as a call to researchers working on related questions to not only engage with our proposal but also develop their own sociocognitive architectures and unlock the real potential of human-machine intelligence.

Citations: 0
Shifting Between Models of Mind: New Insights Into How Human Minds Give Rise to Experiences of Spiritual Presence and Alternative Realities.
IF 2.9 | CAS Tier 2 (Psychology) | Q1 PSYCHOLOGY, EXPERIMENTAL | Pub Date: 2025-04-01 | Epub Date: 2025-03-09 | DOI: 10.1111/tops.70002
Kara Weisman, Tanya Marie Luhrmann

Phenomenal experiences of immaterial spiritual beings (hearing the voice of God, seeing the spirit of an ancestor) are a valuable and largely untapped resource for the field of cognitive science. Such experiences, we argue, are experiences of the mind, tied to mental models and cognitive-epistemic attitudes about the mind, and thus provide a striking example of how, with the right combination of mental models and cognitive-epistemic attitudes, one's own thoughts and inner sensations can be experienced as coming from somewhere or someone else. In this paper, we present results from a large-scale study of U.S. adults (N = 1779) that provides new support for our theory that spiritual experiences are facilitated by a dynamic interaction between mental models and cognitive-epistemic attitudes: A person is more likely to hear God speak if they have the epistemic flexibility and cultural support to shift, temporarily, away from a mundane model of mind into a more "porous" way of thinking and being. This, in turn, lays the foundation for a meditation on how mental models and cognitive-epistemic attitudes might also interact to facilitate other phenomena of interest to cognitive science, such as fiction writing and scientific discovery.

Citations: 0
Human Performance in Competitive and Collaborative Human-Machine Teams.
IF 2.9 | CAS Tier 2 (Psychology) | Q1 PSYCHOLOGY, EXPERIMENTAL | Pub Date: 2025-04-01 | Epub Date: 2023-07-13 | DOI: 10.1111/tops.12683
Murray S Bennett, Laiton Hedley, Jonathon Love, Joseph W Houpt, Scott D Brown, Ami Eidels

In the modern world, many important tasks have become too complex for a single unaided individual to manage. Teams conduct some safety-critical tasks to improve task performance and minimize the risk of error. These teams have traditionally consisted of human operators, yet, nowadays, artificial intelligence and machine systems are incorporated into team environments to improve performance and capacity. We used a computerized task modeled after a classic arcade game to investigate the performance of human-machine and human-human teams. We manipulated the group conditions between team members; sometimes, they were instructed to collaborate, compete, or work separately. We evaluated players' performance in the main task (gameplay) and, in post hoc analyses, participant behavioral patterns to inform group strategies. We compared game performance between team types (human-human vs. human-machine) and group conditions (competitive, collaborative, independent). Adapting workload capacity analysis to human-machine teams, we found performance under both team types and all group conditions suffered a performance efficiency cost. However, we observed a reduced cost in collaborative over competitive teams within human-human pairings, but this effect was diminished when playing with a machine partner. The implications of workload capacity analysis as a powerful tool for human-machine team performance measurement are discussed.
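Workload capacity analysis is typically based on Townsend and Nozawa's capacity coefficient, which compares a team's (or redundant-condition) response-time hazard to the sum of the members' solo hazards; values below 1 indicate the kind of efficiency cost reported above. The sketch below estimates that coefficient from raw response times, assuming the standard OR-task form; whether the authors use exactly this variant for their game task is an assumption, and the data are simulated.

```python
import numpy as np

def cumulative_hazard(rts, t):
    """H(t) = -ln S(t), estimated from the empirical survivor function S(t)."""
    rts = np.asarray(rts, dtype=float)
    survivor = np.mean(rts > t)
    return -np.log(survivor) if survivor > 0 else np.inf

def capacity_coefficient(rt_team, rt_solo_a, rt_solo_b, t):
    """Townsend & Nozawa's OR capacity coefficient:
    C(t) = H_team(t) / (H_a(t) + H_b(t)).
    C(t) < 1 indicates a workload-efficiency cost relative to the prediction
    from two members working independently and in parallel."""
    return cumulative_hazard(rt_team, t) / (
        cumulative_hazard(rt_solo_a, t) + cumulative_hazard(rt_solo_b, t))

# Toy data: the team responds faster than either member alone, but still
# below the unlimited-capacity independent-parallel prediction, so C(t) < 1.
rng = np.random.default_rng(0)
team = rng.exponential(0.9, 500)
solo_a = rng.exponential(1.0, 500)
solo_b = rng.exponential(1.0, 500)
print(capacity_coefficient(team, solo_a, solo_b, t=0.8))
```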

Citations: 0
Introduction to topiCS Volume 17, Issue 2.
IF 2.9 | CAS Tier 2 (Psychology) | Q1 PSYCHOLOGY, EXPERIMENTAL | Pub Date: 2025-04-01 | Epub Date: 2025-03-24 | DOI: 10.1111/tops.70006
Andrea Bender
Citations: 0
Self-beliefs, Transactive Memory Systems, and Collective Identification in Teams: Articulating the Socio-Cognitive Underpinnings of COHUMAIN.
IF 2.9 | CAS Tier 2 (Psychology) | Q1 PSYCHOLOGY, EXPERIMENTAL | Pub Date: 2025-04-01 | Epub Date: 2023-07-04 | DOI: 10.1111/tops.12681
Ishani Aggarwal, Gabriela Cuconato, Nüfer Yasin Ateş, Nicoleta Meslec

Socio-cognitive theory conceptualizes individual contributors as both enactors of cognitive processes and targets of a social context's determinative influences. The present research investigates how contributors' metacognition or self-beliefs, combine with others' views of themselves to inform collective team states related to learning about other agents (i.e., transactive memory systems) and forming social attachments with other agents (i.e., collective team identification), both important teamwork states that have implications for team collective intelligence. We test the predictions in a longitudinal study with 78 teams. Additionally, we provide interview data from industry experts in human-artificial intelligence teams. Our findings contribute to an emerging socio-cognitive architecture for COllective HUman-MAchine INtelligence (i.e., COHUMAIN) by articulating its underpinnings in individual and collective cognition and metacognition. Our resulting model has implications for the critical inputs necessary to design and enable a higher level of integration of human and machine teammates.

Citations: 0
Do We Collaborate With What We Design?
IF 2.9 | CAS Tier 2 (Psychology) | Q1 PSYCHOLOGY, EXPERIMENTAL | Pub Date: 2025-04-01 | Epub Date: 2023-08-15 | DOI: 10.1111/tops.12682
Katie D Evans, Scott A Robbins, Joanna J Bryson

The use of terms like "collaboration" and "co-workers" to describe interactions between human beings and certain artificial intelligence (AI) systems has gained significant traction in recent years. Yet, it remains an open question whether such anthropomorphic metaphors provide either a fertile or even a purely innocuous lens through which to conceptualize designed commercial products. Rather, a respect for human dignity and the principle of transparency may require us to draw a sharp distinction between real and faux peers. At the heart of the concept of collaboration lies the assumption that the collaborating parties are (or behave as if they are) of similar status: two agents capable of comparable forms of intentional action, moral agency, or moral responsibility. In application to current AI systems, this not only seems to fail ontologically but also from a socio-political perspective. AI in the workplace is primarily an extension of capital, not of labor, and the AI "co-workers" of most individuals will likely be owned and operated by their employer. In this paper, we critically assess both the accuracy and desirability of using the term "collaboration" to describe interactions between humans and AI systems. We begin by proposing an alternative ontology of human-machine interaction, one which features not two equivalently autonomous agents, but rather one machine that exists in a relationship of heteronomy to one or more human agents. In this sense, while the machine may have a significant degree of independence concerning the means by which it achieves its ends, the ends themselves are always chosen by at least one human agent, whose interests may differ from those of the individuals interacting with the machine. We finally consider the motivations and risks inherent to the continued use of the term "collaboration," exploring its strained relation to the concept of transparency, and consequences for the future of work.

Citations: 0
Establishing Human Observer Criterion in Evaluating Artificial Social Intelligence Agents in a Search and Rescue Task.
IF 2.9 | CAS Tier 2 (Psychology) | Q1 PSYCHOLOGY, EXPERIMENTAL | Pub Date: 2025-04-01 | Epub Date: 2023-04-13 | DOI: 10.1111/tops.12648
Lixiao Huang, Jared Freeman, Nancy J Cooke, Myke C Cohen, Xiaoyun Yin, Jeska Clark, Matt Wood, Verica Buchanan, Christopher Corral, Federico Scholcover, Anagha Mudigonda, Lovein Thomas, Aaron Teo, John Colonna-Romano

Artificial social intelligence (ASI) agents have great potential to aid the success of individuals, human-human teams, and human-artificial intelligence teams. To develop helpful ASI agents, we created an urban search and rescue task environment in Minecraft to evaluate ASI agents' ability to infer participants' knowledge training conditions and predict participants' next victim type to be rescued. We evaluated ASI agents' capabilities in three ways: (a) comparison to ground truth (the actual knowledge training condition and participant actions); (b) comparison among different ASI agents; and (c) comparison to a human observer criterion, whose accuracy served as a reference point. The human observers and the ASI agents used video data and timestamped event messages from the testbed, respectively, to make inferences about the same participants and topic (knowledge training condition) and the same instances of participant actions (rescue of victims). Overall, ASI agents performed better than human observers in inferring knowledge training conditions and predicting actions. Refining the human criterion can guide the design and evaluation of ASI agents for complex task environments and team composition.
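The three-way evaluation described here boils down to scoring predictions against ground truth and benchmarking each agent against the human-observer criterion. The sketch below illustrates that comparison on toy labels; the agent names, label set, and accuracy metric are placeholders, and the paper may use a different scoring scheme.

```python
from typing import Dict, List

def accuracy(predictions: List[str], ground_truth: List[str]) -> float:
    """Proportion of trials where a predicted label matches the ground truth."""
    hits = sum(p == g for p, g in zip(predictions, ground_truth))
    return hits / len(ground_truth)

def evaluate_against_criterion(agent_preds: Dict[str, List[str]],
                               human_preds: List[str],
                               ground_truth: List[str]) -> Dict[str, float]:
    """Score each ASI agent against ground truth and express the result
    relative to the human-observer criterion (the reference accuracy)."""
    criterion = accuracy(human_preds, ground_truth)
    return {name: accuracy(preds, ground_truth) - criterion
            for name, preds in agent_preds.items()}

# Toy example: which victim type each participant rescues next.
truth  = ["critical", "regular", "critical", "regular"]
humans = ["critical", "critical", "critical", "regular"]               # 0.75 accuracy
agents = {"agent_A": ["critical", "regular", "critical", "regular"],   # 1.00
          "agent_B": ["regular", "regular", "critical", "critical"]}   # 0.50
print(evaluate_against_criterion(agents, humans, truth))
# {'agent_A': 0.25, 'agent_B': -0.25}  -> above / below the human criterion
```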

Citations: 0