
Latest Publications: ACM Transactions on Human-Robot Interaction

Understanding Human Dynamic Sampling Objectives to Enable Robot-assisted Scientific Decision Making
Q2 Computer Science Pub Date: 2023-09-13 DOI: 10.1145/3623383
Shipeng Liu, Cristina G. Wilson, Bhaskar Krishnamachari, Feifei Qian
Truly collaborative scientific field data collection between human scientists and autonomous robot systems requires a shared understanding of the search objectives and the tradeoffs faced when making decisions. Understanding how scientists make such decisions, and how they adapt their data collection strategies when presented with new information in situ, is therefore critical to developing intelligent robots that aid human experts. In this study, we examined the dynamic data collection decisions of 108 expert geoscience researchers using a simulated field scenario. Human data collection behaviors suggested two distinct objectives: an information-based objective to maximize information coverage, and a discrepancy-based objective to maximize hypothesis verification. We developed a highly simplified quantitative decision model that allows the robot to predict potential human data collection locations based on the two observed objectives. Predictions from the simple model revealed a transition from the information-based to the discrepancy-based objective as the level of information increased. These findings will allow robotic teammates to connect experts’ dynamic science objectives with the adaptation of their sampling behaviors and, in the long term, enable the development of more cognitively compatible robotic field assistants.
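The two objectives described in the abstract can be illustrated with a toy scoring rule. The weighting scheme and all numbers below are invented for illustration and are not the authors' model; the sketch only shows how a location score could shift from an information-coverage term to a hypothesis-discrepancy term as information accumulates:

```python
import numpy as np

def score_locations(info_gain, discrepancy, info_level, alpha=0.5):
    """Blend an information-based objective with a discrepancy-based one.
    The weight w moves toward the discrepancy term as the overall level of
    information grows (a hypothetical stand-in for the transition the
    abstract reports, not the paper's actual decision model)."""
    w = min(1.0, info_level * alpha)
    return (1 - w) * info_gain + w * discrepancy

# Three candidate sampling locations with made-up per-location scores:
info_gain = np.array([0.9, 0.4, 0.2])    # how much coverage each adds
discrepancy = np.array([0.1, 0.5, 0.8])  # model-vs-observation mismatch

early = score_locations(info_gain, discrepancy, info_level=0.2)
late = score_locations(info_gain, discrepancy, info_level=1.9)
print(int(np.argmax(early)))  # → 0: early on, the high-coverage site wins
print(int(np.argmax(late)))   # → 2: later, the high-discrepancy site wins
```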
Citations: 0
Forging Productive Human-Robot Partnerships Through Task Training
IF 5.1 Q2 Computer Science Pub Date: 2023-08-31 DOI: 10.1145/3611657
Maia Stiber, Yuxiang Gao, R. Taylor, Chien-Ming Huang
Productive human-robot partnerships are vital to the successful integration of assistive robots into everyday life. While prior research has explored techniques to facilitate collaboration during human-robot interaction, the work described here aims to forge productive partnerships prior to human-robot interaction, drawing on the way team-building activities help establish effective human teams. Through a 2 (group membership: ingroup and outgroup) × 3 (robot error: main task errors, side task errors, and no errors) online study (N = 62), we demonstrate that 1) a non-social pre-task exercise can help form ingroup relationships; 2) an ingroup robot is perceived as a better, more committed teammate than an outgroup robot (despite the two behaving identically); and 3) participants are more tolerant of negative outcomes when working with an ingroup robot. We discuss how pre-task exercises may serve as an active task-failure mitigation strategy.
Citations: 1
Augmented Reality Visualization of Autonomous Mobile Robot Change Detection in Uninstrumented Environments
IF 5.1 Q2 Computer Science Pub Date: 2023-08-21 DOI: 10.1145/3611654
Christopher M. Reardon, J. Gregory, Kerstin S Haring, Benjamin Dossett, Ori Miller, A. Inyang
Creating information-transparency solutions that enable humans to understand robot perception is a challenging requirement if autonomous and artificially intelligent robots are to impact a multitude of domains. By taking advantage of comprehensive, high-volume data from robot teammates’ advanced perception and reasoning capabilities, humans will be able to make better decisions, with significant impacts ranging from safety to functionality. We present a solution to this challenge by coupling augmented reality (AR) with an intelligent mobile robot that autonomously detects novel changes in an environment. We show that the human teammate can understand and make decisions based on information shared via AR by the robot. Sharing of robot-perceived information is enabled by the robot’s online calculation of the human’s relative position, making the system robust to environments without external instrumentation such as GPS. Our robotic system performs change detection by comparing current metric sensor readings against a previous reading to identify differences. We experimentally explore the design of change-detection visualizations and the aggregation of information, the impact of instruction on communication understanding, the effects of visualization and alignment error, and the relationship between situated 3D visualization in AR and human movement in the operational environment on shared situational awareness in human-robot teams. We demonstrate this novel capability and assess the effectiveness of human-robot teaming in crowdsourced data-driven studies, as well as an in-person study where participants are equipped with a commercial off-the-shelf AR headset and teamed with a small ground robot that maneuvers through the environment. The mobile robot scans for changes, which are visualized via AR to the participant. The effectiveness of this communication is evaluated through accuracy and subjective assessment metrics to provide insight into interpretation and experience.
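The change-detection step the abstract describes — comparing a current metric reading against a previous one — can be sketched in a few lines. Grid size, threshold, and values are assumptions for illustration, not details from the paper:

```python
import numpy as np

def detect_changes(prev_scan, curr_scan, threshold=0.3):
    """Return the indices of cells whose readings differ by more than a
    threshold between two scans (a minimal sketch; the paper's system
    operates on real metric sensor data)."""
    return np.argwhere(np.abs(curr_scan - prev_scan) > threshold)

prev_scan = np.zeros((4, 4))   # earlier map of the environment
curr_scan = np.zeros((4, 4))
curr_scan[2, 3] = 1.0          # a new object appears in one cell

print(detect_changes(prev_scan, curr_scan))  # → [[2 3]]
```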
Citations: 0
Is Someone There Or Is That The TV? Detecting Social Presence Using Sound
IF 5.1 Q2 Computer Science Pub Date: 2023-08-18 DOI: 10.1145/3611658
Nicholas C Georgiou, Rebecca Ramnauth, Emmanuel Adéníran, Michael Lee, Lila Selin, B. Scassellati
Social robots in the home will need to solve audio identification problems to better interact with their users. This paper focuses on distinguishing between a) natural conversation that includes at least one co-located user and b) media playing from electronic sources, such as television shows, that does not require a social response. This classification can help social robots detect a user’s social presence using sound. Social robots that can solve this problem can apply this information to decisions such as determining when and how to appropriately engage human users. We compiled a dataset from a variety of acoustic environments containing either natural or media audio, including audio that we recorded in our own homes. Using this dataset, we performed an experimental evaluation of a range of traditional machine learning classifiers, and assessed the classifiers’ abilities to generalize to new recordings, acoustic conditions, and environments. We conclude that a C-Support Vector Classification (SVC) algorithm outperformed the other classifiers. Finally, we present a classification pipeline that in-home robots can utilize, and discuss the timing and size of the trained classifiers, as well as privacy and ethics considerations.
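As a rough sketch of the kind of classifier the abstract reports performing best, the following trains scikit-learn's `SVC` on synthetic feature vectors standing in for the two audio classes. The feature values, dimensions, and pipeline details are assumptions; the paper's dataset and features are not reproduced here:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-ins for acoustic feature vectors (a real system might use
# MFCCs): class 1 = live conversation, class 0 = media playback.
rng = np.random.default_rng(0)
X = np.vstack([
    rng.normal(loc=1.0, scale=0.5, size=(50, 8)),   # "conversation" clips
    rng.normal(loc=-1.0, scale=0.5, size=(50, 8)),  # "media" clips
])
y = np.array([1] * 50 + [0] * 50)

clf = make_pipeline(StandardScaler(), SVC(C=1.0, kernel="rbf"))
clf.fit(X, y)

# Classify a fresh vector drawn from the "conversation" distribution:
pred = clf.predict(rng.normal(loc=1.0, scale=0.5, size=(1, 8)))[0]
print(pred)
```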
Citations: 0
Sounding Robots: Design and Evaluation of Auditory Displays for Unintentional Human-Robot Interaction
IF 5.1 Q2 Computer Science Pub Date: 2023-08-17 DOI: 10.1145/3611655
Bastian Orthmann, Iolanda Leite, R. Bresin, Ilaria Torre
Non-verbal communication is important in HRI, particularly when humans and robots do not need to actively engage in a task together but rather co-exist in a shared space. Robots might still need to communicate states such as urgency or availability, and where they intend to go, to avoid collisions and disruptions. Sounds could be used to communicate such states and intentions in an intuitive and non-disruptive way. Here, we propose a multi-layer classification system for displaying various robot information simultaneously via sound. We first conceptualise which robot features could be displayed (robot size, speed, availability for interaction, urgency, and directionality); we then map them to a set of audio parameters. The designed sounds were then evaluated in 5 online studies, where people listened to the sounds and were asked to identify the associated robot features. The sounds were generally understood as intended, especially when participants evaluated one feature at a time, and partially when they evaluated two features simultaneously. The results of these evaluations suggest that sounds can be successfully used to communicate robot states and intended actions implicitly and intuitively.
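The feature-to-audio mapping step can be illustrated with a trivial linear rule. The parameter ranges and directions below are invented for illustration and are not the mappings the authors designed or evaluated:

```python
def feature_to_audio(size, speed, urgency):
    """Map robot features (each normalised to [0, 1]) to audio parameters.
    A hypothetical linear mapping, not the paper's multi-layer design."""
    return {
        "pitch_hz": 880 - 440 * size,     # larger robot → lower pitch
        "tempo_bpm": 60 + 120 * urgency,  # more urgent → faster pulses
        "volume_db": -20 + 10 * speed,    # faster robot → louder sound
    }

params = feature_to_audio(size=0.5, speed=0.8, urgency=1.0)
print(params["tempo_bpm"])  # → 180.0
```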
Citations: 0
Data-Driven Communicative Behaviour Generation: A Survey
IF 5.1 Q2 Computer Science Pub Date: 2023-08-16 DOI: 10.1145/3609235
Nurziya Oralbayeva, A. Aly, A. Sandygulova, Tony Belpaeme
The development of data-driven behaviour-generating systems has recently become the focus of considerable attention in the fields of human-agent interaction (HAI) and human-robot interaction (HRI). Although rule-based approaches were dominant for years, these proved inflexible and expensive to develop. The difficulty of developing production rules, together with the need for manual configuration to generate artificial behaviours, limits how complex and diverse rule-based behaviours can be. In contrast, actual human-human interaction data collected using tracking and recording devices makes human-like multimodal co-speech behaviour generation possible using machine learning and, in recent years, deep learning in particular. This survey provides an overview of the state of the art in deep-learning-based co-speech behaviour generation models and offers an outlook for future research in this area.
Citations: 0
New Design Potentials of Non-mimetic Sonification in Human-Robot Interaction
IF 5.1 Q2 Computer Science Pub Date: 2023-08-01 DOI: 10.1145/3611646
Elias Naphausen, Andreas Muxel, J. Willmann
With the increasing use and complexity of robotic devices, the requirements for the design of human-robot interfaces are rapidly changing and call for new means of interaction and information transfer. In that scope, the discussed project – being developed by the Hybrid Things Lab at the University of Applied Sciences Augsburg and the Design Research Lab at Bauhaus-Universität Weimar – takes a first step in characterizing a novel field of research, exploring the design potentials of non-mimetic sonification in the context of human-robot interaction (HRI). Featuring an industrial 7-axis manipulator and collecting multiple streams of information during manipulation (for instance, the position of the end-effector, joint positions, and forces), these data sets are used to create a novel augmented audible presence, allowing new forms of interaction. As such, this paper considers (1) research parameters for non-mimetic sonification (such as pitch, volume, and timbre); (2) a comprehensive empirical pursuit, including setup, exploration, and validation; and (3) the overall implications of integrating these findings into a unifying human-robot interaction process. The relation between machinic and auditory dimensionality is of particular concern.
Citations: 0
Stochastic-Skill-Level-Based Shared Control for Human Training in Urban Air Mobility Scenario
IF 5.1 Q2 Computer Science Pub Date: 2023-06-06 DOI: 10.1145/3603194
Sooyung Byeon, Joonwon Choi, Yutong Zhang, Inseok Hwang
This paper proposes a novel stochastic-skill-level-based shared control framework to help human novices emulate human experts in complex dynamic control tasks. The framework aims to infer the stochastic skill levels (SSLs) of human novices and provide personalized assistance based on the inferred SSLs. An SSL can be assessed as a stochastic variable denoting the probability that the novice will behave similarly to experts. We propose a data-driven method that characterizes novice demonstrations as a novice model and expert demonstrations as an expert model. Our SSL inference approach then uses the novice and expert models to assess the SSL of novices in complex dynamic control tasks. The shared control scheme dynamically adjusts the level of assistance based on the inferred SSL, preventing the frustration or tedium that poorly imposed assistance can cause during human training. The proposed framework is demonstrated in a human subject experiment in a training scenario for a remotely piloted urban air mobility (UAM) vehicle. The results show that the framework can assess the SSL and tailor the assistance for an individual in real time. The framework is compared to practice-only training (no assistance) and a baseline shared control approach to test human learning rates in the designed training scenario. A subjective survey is also examined to monitor the user experience of the proposed framework.
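The idea of scaling assistance with inferred skill can be shown with a one-line arbitration rule. This is a generic shared-control blend over assumed scalar commands, not the framework the paper proposes:

```python
def blend_control(human_cmd, assist_cmd, skill_prob):
    """Mix the human's command with an assistive command in proportion to
    the inferred probability (skill_prob in [0, 1]) that the novice behaves
    like an expert: higher skill means less imposed assistance."""
    return skill_prob * human_cmd + (1.0 - skill_prob) * assist_cmd

# Hypothetical scalar commands on a single control axis:
print(blend_control(human_cmd=1.0, assist_cmd=0.0, skill_prob=0.25))  # → 0.25
```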
Citations: 0
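The abstract above describes inferring a stochastic skill level (SSL) as the probability that a novice's behavior matches an expert's, then scaling assistance down as that probability grows. A minimal sketch of that idea follows; the paper's actual novice/expert models and arbitration law are not given in the abstract, so the uniform-prior posterior and the linear blending rule here (and the names `infer_ssl`, `shared_control`) are illustrative assumptions only.

```python
import numpy as np

def infer_ssl(novice_loglik, expert_loglik):
    """Posterior probability that an observed action came from the expert
    demonstration model rather than the novice model, assuming a uniform
    prior over {novice, expert}. Stands in for the paper's SSL inference,
    whose exact form the abstract does not specify."""
    p_novice = np.exp(novice_loglik)
    p_expert = np.exp(expert_loglik)
    return p_expert / (p_expert + p_novice)

def shared_control(u_novice, u_assist, ssl):
    """Blend the novice's control input with an assistive input; the
    assistance weight shrinks as the inferred skill level grows, so a
    skilled trainee is left mostly in command."""
    alpha = 1.0 - ssl  # assistance level, high for low-skill trainees
    return alpha * u_assist + (1.0 - alpha) * u_novice
```

For example, if the expert model explains an observed action four times better than the novice model, the inferred SSL is about 0.8 and an assistive command contributes only about 20% of the blended input.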
Introduction to the Special Issue on “Designing the Robot Body: Critical Perspectives on Affective Embodied Interaction”
IF 5.1 Q2 Computer Science Pub Date : 2023-05-17 DOI: 10.1145/3594713
M. Paterson, G. Hoffman, C. Zheng
Citations: 0
Affective Corners as a Problematic for Design Interactions
IF 5.1 Q2 Computer Science Pub Date : 2023-05-15 DOI: 10.1145/3596452
Katherine M. Harrison, Ericka Johnson
Domestic robots are already commonplace in many homes, while humanoid companion robots like Pepper are increasingly becoming part of different kinds of care work. Drawing on fieldwork at a robotics lab, as well as our personal encounters with domestic robots, we use here the metaphor of “hard-to-reach corners” to explore the socio-technical limitations of companion robots and our differing abilities to respond to these limitations. This paper presents “hard-to-reach-corners” as a problematic for design interaction, offering them as an opportunity for thinking about context and intersectional aspects of adaptation.
Citations: 2