Improving Common Ground in Human-Machine Teaming: Dimensions, Gaps, and Priorities

Robert Wray, James R. Kirk, J. Folsom-Kovarik
Journal: Artificial Intelligence and Social Computing
DOI: 10.54941/ahfe1001463

Abstract

“Common ground” is the knowledge, facts, beliefs, etc. that are shared between participants in some joint activity. Much of human conversation concerns “grounding,” or ensuring that some assertion is actually shared between participants. Even in highly trained tasks, such as teammates executing a military mission, each participant devotes attention to contributing new assertions, making adjustments based on the statements of others, offering repairs to resolve potential discrepancies in the common ground, and so forth. In conversational interactions between humans and machines (or “agents”), this activity of building and maintaining common ground is typically one-sided and fixed. It is one-sided because the human must do almost all the work of creating substantive common ground in the interaction. It is fixed because the agent does not adapt its understanding to what the human knows, prefers, and expects; instead, the human must adapt to the agent. These limitations create burdensome cognitive demand, produce frustration and distrust in automation, and make the notion of an agent “teammate” seem an ambition far beyond today's state of the art. We seek to enable agents to partner more fully in building and maintaining common ground and to adapt their understanding of a joint activity. While “common ground” is often called out as a gap in human-machine teaming, there is no extant, detailed analysis of the components of common ground, nor a mapping of these components to specific classes of functions (what specific agent capabilities are required to achieve common ground?) and deficits (what kinds of errors may arise when those functions are insufficient for a particular component of the common ground?).
In this paper, we provide such an analysis, focusing on the requirements for human-machine teaming in a military context, where interactions are task-oriented and generally well-trained. Drawing on the literature on human communication, we identify the components of information included in common ground along three main axes: the temporal dimension of common ground, personal common ground, and communal common ground. The analysis further subdivides these distinctions, differentiating between aspects of common ground such as personal history between participants, norms and the expectations based on those norms, and the extent to which actions taken by participants in a human-machine interaction are “public” events. Within each dimension, we also provide examples of specific issues that may arise from a lack of common ground along that dimension. The analysis thus defines, at a more granular level than existing analyses, how specific categories of deficits in shared knowledge or differences in processing manifest as misalignment in shared understanding. The paper both identifies specific challenges and prioritizes them according to acuteness of need: not all of the gaps require immediate attention to improve human-machine interaction, and the solution to one issue may sometimes depend on solutions to others. As a consequence, this analysis facilitates greater understanding of how to attack misalignment in both the nearer and longer terms.
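The three-axis decomposition described above (temporal, personal, and communal common ground, each with components and characteristic deficits) can be sketched as a simple data structure. The following is an illustrative Python sketch only; the component and deficit entries are hypothetical examples chosen to match the kinds of distinctions the abstract names, not the paper's actual enumeration.

```python
from dataclasses import dataclass, field

@dataclass
class Dimension:
    """One axis of common ground, with example components and deficits."""
    name: str
    components: list[str] = field(default_factory=list)
    example_deficits: list[str] = field(default_factory=list)

# Hypothetical taxonomy following the three axes named in the abstract.
COMMON_GROUND = [
    Dimension(
        name="temporal",
        components=["prior discourse", "recent joint actions"],
        example_deficits=["agent forgets an earlier correction by the human"],
    ),
    Dimension(
        name="personal",
        components=["personal history between participants", "individual preferences"],
        example_deficits=["agent ignores a known operator preference"],
    ),
    Dimension(
        name="communal",
        components=["norms", "expectations based on norms", "public events"],
        example_deficits=["agent treats a publicly observed action as unknown"],
    ),
]

def deficits_for(axis: str) -> list[str]:
    """Return example deficits for a named axis (empty list if absent)."""
    for dim in COMMON_GROUND:
        if dim.name == axis:
            return dim.example_deficits
    return []
```

A mapping like this is one way the paper's function-and-deficit analysis could be operationalized: each dimension indexes both the agent capabilities needed to maintain that slice of common ground and the error classes expected when those capabilities fall short.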