
Latest Publications in ACM Transactions on Human-Robot Interaction

Field Trial of a Queue-Managing Security Guard Robot
Pub Date : 2024-07-25 DOI: 10.1145/3680292
Sachi Edirisinghe, S. Satake, Yuyi Liu, Takayuki Kanda
We developed a security guard robot that is specifically designed to manage queues of people and conducted a field trial at an actual public event to assess its effectiveness. However, the acceptance of robot instructions or admonishments poses challenges in real-world applications. Our primary objective was to achieve an effective and socially acceptable queue-management solution. To accomplish this, we took inspiration from human security guards whose role has already been well-received in society. Our robot, whose design embodied the image of a professional security guard, focused on three key aspects: duties, professional behavior, and appearance. To ensure its competence, we interviewed professional security guards to deepen our understanding of the responsibilities associated with queue management. Based on their insights, we incorporated features of ushering, admonishing, announcing, and question answering into the robot’s functionality. We also prioritized the modeling of professional ushering behavior. During a 10-day field trial at a children’s amusement event, we interviewed both the visitors who interacted with the robot and the event staff. The results revealed that visitors generally complied with its ushering and admonishments, indicating a positive reception. Both visitors and event staff expressed an overall favorable impression of the robot and its queue-management services. These findings suggest that our proposed security guard robot shows great promise as a solution for effective crowd handling in public spaces.
{"title":"Field Trial of a Queue-Managing Security Guard Robot","authors":"Sachi Edirisinghe, S. Satake, Yuyi Liu, Takayuki Kanda","doi":"10.1145/3680292","DOIUrl":"https://doi.org/10.1145/3680292","url":null,"abstract":"We developed a security guard robot that is specifically designed to manage queues of people and conducted a field trial at an actual public event to assess its effectiveness. However, the acceptance of robot instructions or admonishments poses challenges in real-world applications. Our primary objective was to achieve an effective and socially acceptable queue-management solution. To accomplish this, we took inspiration from human security guards whose role has already been well-received in society. Our robot, whose design embodied the image of a professional security guard, focused on three key aspects: duties, professional behavior, and appearance. To ensure its competence, we interviewed professional security guards to deepen our understanding of the responsibilities associated with queue management. Based on their insights, we incorporated features of ushering, admonishing, announcing, and question answering into the robot’s functionality. We also prioritized the modeling of professional ushering behavior. During a 10-day field trial at a children’s amusement event, we interviewed both the visitors who interacted with the robot and the event staff. The results revealed that visitors generally complied with its ushering and admonishments, indicating a positive reception. Both visitors and event staff expressed an overall favorable impression of the robot and its queue-management services. These findings suggest that our proposed security guard robot shows great promise as a solution for effective crowd handling in public spaces.","PeriodicalId":504644,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141802654","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Introduction to the Special Issue on Artificial Intelligence for Human-Robot Interaction (AI-HRI)
Pub Date : 2024-07-20 DOI: 10.1145/3672535
Jivko Sinapov, Zhao Han, Shelly Bagchi, Muneeb Ahmad, Matteo Leonetti, Ross Mead, Reuth Mirsky, Emmanuel Senft
{"title":"Introduction to the Special Issue on Artificial Intelligence for Human-Robot Interaction (AI-HRI)","authors":"Jivko Sinapov, Zhao Han, Shelly Bagchi, Muneeb Ahmad, Matteo Leonetti, Ross Mead, Reuth Mirsky, Emmanuel Senft","doi":"10.1145/3672535","DOIUrl":"https://doi.org/10.1145/3672535","url":null,"abstract":"","PeriodicalId":504644,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-07-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141818832","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Understanding the Interaction between Delivery Robots and Other Road and Sidewalk Users: A Study of User-generated Online Videos
Pub Date : 2024-07-17 DOI: 10.1145/3677615
Xinyan Yu, Marius Hoggenmüller, Tram Thi Minh Tran, Yiyuan Wang, M. Tomitsch
The deployment of autonomous delivery robots in urban environments presents unique challenges in navigating complex traffic conditions and interacting with diverse road and sidewalk users. Effective communication between robots and road and sidewalk users is crucial to address these challenges. This study investigates real-world encounter scenarios where delivery robots and road and sidewalk users interact, seeking to understand the essential role of communication in ensuring seamless encounters. Following an online ethnography approach, we collected 117 user-generated videos from TikTok and their associated 2067 comments. Our systematic analysis revealed several design opportunities to augment communication between delivery robots and road and sidewalk users, which include facilitating multi-party path negotiation, managing unexpected robot behaviour via transparency information, and expressing robot limitations to request human assistance. Moreover, the triangulation of video and comments analysis provides a set of design considerations to realise these opportunities. The findings contribute to understanding the operational context of delivery robots and offer insights for designing interactions with road and sidewalk users, facilitating their integration into urban spaces.
{"title":"Understanding the Interaction between Delivery Robots and Other Road and Sidewalk Users: A Study of User-generated Online Videos","authors":"Xinyan Yu, Marius Hoggenmüller, Tram Thi Minh Tran, Yiyuan Wang, M. Tomitsch","doi":"10.1145/3677615","DOIUrl":"https://doi.org/10.1145/3677615","url":null,"abstract":"The deployment of autonomous delivery robots in urban environments presents unique challenges in navigating complex traffic conditions and interacting with diverse road and sidewalk users. Effective communication between robots and road and sidewalk users is crucial to address these challenges. This study investigates real-world encounter scenarios where delivery robots and road and sidewalk users interact, seeking to understand the essential role of communication in ensuring seamless encounters. Following an online ethnography approach, we collected 117 user-generated videos from TikTok and their associated 2067 comments. Our systematic analysis revealed several design opportunities to augment communication between delivery robots and road and sidewalk users, which include facilitating multi-party path negotiation, managing unexpected robot behaviour via transparency information, and expressing robot limitations to request human assistance. Moreover, the triangulation of video and comments analysis provides a set of design considerations to realise these opportunities. The findings contribute to understanding the operational context of delivery robots and offer insights for designing interactions with road and sidewalk users, facilitating their integration into urban spaces.","PeriodicalId":504644,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141830218","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Enacting Human-Robot Encounters with Theater Professionals on a Mixed Reality Stage
Pub Date : 2024-07-17 DOI: 10.1145/3678186
Marco C. Rozendaal, J. Vroon, M. Bleeker
In this paper, we report on methodological insights gained from a workshop in which we collaborated with theater professionals to enact situated encounters between humans and robots on a mixed reality stage combining VR with real-life interaction. We deployed the skills of theater professionals to investigate the behaviors of humans encountering robots to speculate about the kind of interactions that may result from encountering robots in supermarket settings. The mixed reality stage made it possible to adapt the robot’s morphology quickly, as well as its movement and perceptual capacities, to investigate how this together co-determines possibilities for interaction. This setup allowed us to follow the interactions simultaneously from different perspectives, including the robot’s, which provided the basis for a collective phenomenological analysis of the interactions. Our work contributes to approaches to HRI that do not work towards identifying communicative behaviors that can be universally applied but instead work towards insights that can be used to develop HRI that is emergent, and situation- and robot-specific. Furthermore, it supports a more-than-human-design approach that takes the fundamental differences between humans and robots as a starting point for the creative development of new kinds of communication and interaction.
{"title":"Enacting Human-Robot Encounters with Theater Professionals on a Mixed Reality Stage","authors":"Marco C. Rozendaal, J. Vroon, M. Bleeker","doi":"10.1145/3678186","DOIUrl":"https://doi.org/10.1145/3678186","url":null,"abstract":"In this paper, we report on methodological insights gained from a workshop in which we collaborated with theater professionals to enact situated encounters between humans and robots on a mixed reality stage combining VR with real-life interaction. We deployed the skills of theater professionals to investigate the behaviors of humans encountering robots to speculate about the kind of interactions that may result from encountering robots in supermarket settings. The mixed reality stage made it possible to adapt the robot’s morphology quickly, as well as its movement and perceptual capacities, to investigate how this together co-determines possibilities for interaction. This setup allowed us to follow the interactions simultaneously from different perspectives, including the robot’s, which provided the basis for a collective phenomenological analysis of the interactions. Our work contributes to approaches to HRI that do not work towards identifying communicative behaviors that can be universally applied but instead work towards insights that can be used to develop HRI that is emergent, and situation- and robot-specific. Furthermore, it supports a more-than-human-design approach that takes the fundamental differences between humans and robots as a starting point for the creative development of new kinds of communication and interaction.","PeriodicalId":504644,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141828773","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Longitudinal Study of Mobile Telepresence Robots in Older Adults’ Homes: Uses, Social Connection, and Comfort with Technology
Pub Date : 2024-07-11 DOI: 10.1145/3674956
Jennifer Rheman, Rune P. Baggett, Martin Simecek, Marlena R. Fraune, Katherine M. Tsui
Mobile telepresence robots can help reduce loneliness by facilitating people to visit each other and have more social presence than visiting via video or audio calls. However, using new technology can be challenging for many older adults. In this paper, we examine how older adults use and want to use mobile telepresence robots, how these robots affect their social connection, and how they can be improved for older adults’ use. We placed a mobile telepresence robot in the home of older adult primary participants (N = 7; age 60+) for 7 months and facilitated monthly activities between them and a secondary participant (N = 8; age 18+) of their choice. Participants used the robots as they liked between monthly activities. We collected diary entries and monthly interviews from primary participants and a final interview from secondary participants. Results indicate that older adults found many creative uses for the robots, including conversations, board games, and hide ‘n’ seek. Several participants felt more socially connected with others and a few had improved their comfort with technology because of their use of the robot. They also suggested design recommendations and updates for the robots related to size, mobility, and more, which can help practitioners improve robots for older adults’ use.
{"title":"Longitudinal Study of Mobile Telepresence Robots in Older Adults’ Homes: Uses, Social Connection, and Comfort with Technology","authors":"Jennifer Rheman, Rune P. Baggett, Martin Simecek, Marlena R. Fraune, Katherine M. Tsui","doi":"10.1145/3674956","DOIUrl":"https://doi.org/10.1145/3674956","url":null,"abstract":"\u0000 Mobile telepresence robots can help reduce loneliness by facilitating people to visit each other and have more social presence than visiting via video or audio calls. However, using new technology can be challenging for many older adults. In this paper, we examine how older adults use and want to use mobile telepresence robots, how these robots affect their social connection, and how they can be improved for older adults’ use. We placed a mobile telepresence robot in the home of older adult primary participants (\u0000 N\u0000 = 7; age 60+) for 7 months and facilitated monthly activities between them and a secondary participant (\u0000 N\u0000 = 8; age 18+) of their choice. Participants used the robots as they liked between monthly activities. We collected diary entries and monthly interviews from primary participants and a final interview from secondary participants. Results indicate that older adults found many creative uses for the robots, including conversations, board games, and hide ‘n’ seek. Several participants felt more socially connected with others and a few had improved their comfort with technology because of their use of the robot. They also suggested design recommendations and updates for the robots related to size, mobility, and more, which can help practitioners improve robots for older adults’ use.\u0000","PeriodicalId":504644,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141834929","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Learning Autonomous Viewpoint Adjustment from Human Demonstrations for Telemanipulation
Pub Date : 2024-04-24 DOI: 10.1145/3660348
Ruixing Jia, Lei Yang, Ying Cao, Calvin Kalun Or, Wenping Wang, Jia Pan
Teleoperation systems find many applications from earlier search-and-rescue to more recent daily tasks. It is widely acknowledged that using external sensors can decouple the view of the remote scene from the motion of the robot arm during manipulation, facilitating the control task. However, this design requires the coordination of multiple operators or may exhaust a single operator as s/he needs to control both the manipulator arm and the external sensors. To address this challenge, our work introduces a viewpoint prediction model, the first data-driven approach that autonomously adjusts the viewpoint of a dynamic camera to assist in telemanipulation tasks. This model is parameterized by a deep neural network and trained on a set of human demonstrations. We propose a contrastive learning scheme that leverages viewpoints in a camera trajectory as contrastive data for network training. We demonstrated the effectiveness of the proposed viewpoint prediction model by integrating it into a real-world robotic system for telemanipulation. User studies reveal that our model outperforms several camera control methods in terms of control experience and reduces the perceived task load compared to manual camera control. As an assistive module of a telemanipulation system, our method significantly reduces task completion time for users who choose to adopt its recommendation.
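The abstract describes a contrastive learning scheme in which viewpoints along a camera trajectory serve as contrastive data for training the viewpoint prediction network, but it does not give the architecture or loss. The snippet below is therefore only a minimal sketch of one common way such an objective could be set up: an InfoNCE-style loss in which the human-demonstrated viewpoint is the positive sample and other viewpoints from the same trajectory are negatives. The module names (StateEncoder, ViewEncoder), dimensions, and temperature are assumptions, not the authors' implementation.

```python
# Minimal sketch (not the paper's implementation): InfoNCE objective where the
# human-demonstrated viewpoint is the positive and other trajectory viewpoints
# are negatives. All names, sizes, and hyperparameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class StateEncoder(nn.Module):
    """Embeds the manipulation/scene state (e.g., end-effector and target poses)."""
    def __init__(self, state_dim: int, embed_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, 256), nn.ReLU(), nn.Linear(256, embed_dim))

    def forward(self, state):
        return F.normalize(self.net(state), dim=-1)

class ViewEncoder(nn.Module):
    """Embeds a candidate camera viewpoint (e.g., a 6-DoF camera pose plus focal point)."""
    def __init__(self, view_dim: int, embed_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(view_dim, 256), nn.ReLU(), nn.Linear(256, embed_dim))

    def forward(self, view):
        return F.normalize(self.net(view), dim=-1)

def info_nce_loss(state_emb, demo_view_emb, neg_view_embs, temperature=0.1):
    """state_emb: (B, D); demo_view_emb: (B, D); neg_view_embs: (B, K, D)."""
    pos = (state_emb * demo_view_emb).sum(dim=-1, keepdim=True)   # (B, 1) similarity to demo view
    neg = torch.einsum("bd,bkd->bk", state_emb, neg_view_embs)    # (B, K) similarity to negatives
    logits = torch.cat([pos, neg], dim=1) / temperature
    labels = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, labels)                         # positive sits at index 0

# Toy usage: 8 states, each with one demonstrated viewpoint and 16 negatives from the trajectory.
state_enc, view_enc = StateEncoder(state_dim=12), ViewEncoder(view_dim=7)
state, demo_view, neg_views = torch.randn(8, 12), torch.randn(8, 7), torch.randn(8, 16, 7)
loss = info_nce_loss(state_enc(state), view_enc(demo_view), view_enc(neg_views))
loss.backward()
```

At inference time, a scheme like this would score candidate viewpoints against the current state embedding and move the dynamic camera toward the highest-scoring one; how the actual system selects and tracks viewpoints is not specified in the abstract.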
{"title":"Learning Autonomous Viewpoint Adjustment from Human Demonstrations for Telemanipulation","authors":"Ruixing Jia, Lei Yang, Ying Cao, Calvin Kalun Or, Wenping Wang, Jia Pan","doi":"10.1145/3660348","DOIUrl":"https://doi.org/10.1145/3660348","url":null,"abstract":"Teleoperation systems find many applications from earlier search-and-rescue to more recent daily tasks. It is widely acknowledged that using external sensors can decouple the view of the remote scene from the motion of the robot arm during manipulation, facilitating the control task. However, this design requires the coordination of multiple operators or may exhaust a single operator as s/he needs to control both the manipulator arm and the external sensors. To address this challenge, our work introduces a viewpoint prediction model, the first data-driven approach that autonomously adjusts the viewpoint of a dynamic camera to assist in telemanipulation tasks. This model is parameterized by a deep neural network and trained on a set of human demonstrations. We propose a contrastive learning scheme that leverages viewpoints in a camera trajectory as contrastive data for network training. We demonstrated the effectiveness of the proposed viewpoint prediction model by integrating it into a real-world robotic system for telemanipulation. User studies reveal that our model outperforms several camera control methods in terms of control experience and reduces the perceived task load compared to manual camera control. As an assistive module of a telemanipulation system, our method significantly reduces task completion time for users who choose to adopt its recommendation.","PeriodicalId":504644,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140662965","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
What is Proactive Human-Robot Interaction? - A review of a progressive field and its definitions
Pub Date : 2024-04-23 DOI: 10.1145/3650117
Marike K. van den Broek, T. Moeslund
During the last 15 years, an increasing number of works have investigated proactive robotic behavior in relation to Human-Robot Interaction (HRI). These works engage with a variety of research topics and technical challenges. In this paper, a review of the related literature identified through a structured block search is performed. Variations in the corpus are investigated, and a definition of Proactive HRI is provided. Furthermore, a taxonomy is proposed based on the corpus and exemplified through specific works. Finally, a selection of noteworthy observations is discussed.
{"title":"What is Proactive Human-Robot Interaction? - A review of a progressive field and its definitions","authors":"Marike K. van den Broek, T. Moeslund","doi":"10.1145/3650117","DOIUrl":"https://doi.org/10.1145/3650117","url":null,"abstract":"During the last 15 years, an increasing amount of works have investigated proactive robotic behavior in relation to Human-Robot Interaction (HRI). The works engage with a variety of research topics and technical challenges. In this paper a review of the related literature identified through a structured block search is performed. Variations in the corpus are investigated, and a definition of Proactive HRI is provided. Furthermore, a taxonomy is proposed based on the corpus and exemplified through specific works. Finally, a selection of noteworthy observations is discussed.","PeriodicalId":504644,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140670483","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Performance-Aware Trust Modeling Within a Human-Multi-Robot Collaboration Setting
Pub Date : 2024-04-22 DOI: 10.1145/3660648
Md Khurram Monir Rabby, M. Khan, Steven Xiaochun Jiang, A. Karimoddini
In this study, a novel time-driven mathematical model for trust is developed that accounts for human and multi-robot performance within a Human-Robot Collaboration (HRC) framework. For this purpose, a model is developed to quantify human performance considering the effects of physical and cognitive constraints and factors such as muscle fatigue and recovery, muscle isometric force, human (cognitive and physical) workload, workloads due to the robots’ mistakes, and task complexity. The performance of the multi-robot team in the HRC setting is modeled based upon the rate of task assignment and completion as well as the mistake probabilities of the individual robots. Human trust in HRC settings with single and multiple robots is modeled over different operation regions, namely the unpredictable, predictable, dependable, and faithful regions. The relative performance difference between the human operator and the robots is used to analyze the effect on the human operator’s trust in the robots’ operation. The developed model is simulated for a manufacturing workspace scenario considering different task complexities and involving multiple robots completing shared tasks. The simulation results indicate that, for a constant multi-robot performance in operation, the human operator’s trust in the robots’ operation improves whenever the comparative performance of the robots improves with respect to the human operator’s performance. The impact of hypothetical robot learning capabilities on human trust in the same HRC setting is also analyzed. The results confirm that a hypothetical learning capability allows robots to reduce human workloads, which improves human performance. The simulation result analysis confirms that the human operator’s trust in the multi-robot operation increases faster with the improvement of the multi-robot performance when the robots have a hypothetical learning capability. An empirical study was conducted involving a human operator and two collaborator robots with two different performance levels in a software-based HRC setting. The experimental results closely followed the pattern of the developed mathematical models when capturing human trust and performance in human-multi-robot collaboration.
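The abstract does not reproduce the model's equations, so the snippet below is only a generic illustration, not the authors' formulation: a discrete-time trust update driven by the relative performance difference between the multi-robot team and the human operator, mapped afterwards to the four qualitative regions named above. The gain, the region thresholds, and the simulated performance curves are placeholder assumptions.

```python
# Illustrative sketch only: a time-driven trust update in which trust grows when the
# robots outperform the human operator and decays otherwise, clamped to [0, 1].
# The gain and region thresholds are placeholders, not values from the paper.
import numpy as np

def update_trust(trust: float, robot_perf: float, human_perf: float, gain: float = 0.05) -> float:
    """One time step: trust changes with the relative performance difference."""
    return float(np.clip(trust + gain * (robot_perf - human_perf), 0.0, 1.0))

def trust_region(trust: float, bounds=(0.25, 0.5, 0.75)) -> str:
    """Map a trust value to the qualitative regions named in the abstract (placeholder thresholds)."""
    labels = ["unpredictable", "predictable", "dependable", "faithful"]
    return labels[int(np.searchsorted(bounds, trust))]

# Example: robot performance improves over time (a learning-like effect) while human
# performance stays constant, so trust rises and crosses into higher regions.
trust = 0.3
for t in range(50):
    robot_perf = min(1.0, 0.4 + 0.02 * t)   # hypothetical improvement over time
    trust = update_trust(trust, robot_perf, human_perf=0.6)
print(round(trust, 2), trust_region(trust))
```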
{"title":"Performance-Aware Trust Modeling Within a Human-Multi-Robot Collaboration Setting","authors":"Md Khurram Monir Rabby, M. Khan, Steven Xiaochun Jiang, A. Karimoddini","doi":"10.1145/3660648","DOIUrl":"https://doi.org/10.1145/3660648","url":null,"abstract":"In this study, a novel time-driven mathematical model for trust is developed considering human-multi-robot performance for a Human-robot Collaboration (HRC) framework. For this purpose, a model is developed to quantify human performance considering the effects of physical and cognitive constraints and factors such as muscle fatigue and recovery, muscle isometric force, human (cognitive and physical) workload and workloads due to the robots’ mistakes, and task complexity. The performance of multi-robot in the HRC setting is modeled based upon the rate of task assignment and completion as well as the mistake probabilities of the individual robots. The human trust in HRC setting with single and multiple robots are modeled over different operation regions, namely unpredictable region, predictable region, dependable region, and faithful region. The relative performance difference between the human operator and the robot is used to analyze the effect on the human operator’s trust in robots’ operation. The developed model is simulated for a manufacturing workspace scenario considering different task complexities and involving multiple robots to complete shared tasks. The simulation results indicate that for a constant multi-robot performance in operation, the human operator’s trust in robots’ operation improves whenever the comparative performance of the robots improves with respect to the human operator performance. The impact of robot hypothetical learning capabilities on human trust in the same HRC setting is also analyzed. The results confirm that a hypothetical learning capability allows robots to reduce human workloads, which improves human performance. The simulation result analysis confirms that the human operator’s trust in the multi-robot operation increases faster with the improvement of the multi-robot performance when the robots have a hypothetical learning capability. An empirical study was conducted involving a human operator and two collaborator robots with two different performance levels in a software-based HRC setting. The experimental results closely followed the pattern of the developed mathematical models when capturing human trust and performance in terms of human-multi-robot collaboration.","PeriodicalId":504644,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-04-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140673394","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Multi-Dimensional Evaluation of an Augmented Reality Head-Mounted Display User Interface for Controlling Legged Manipulators
Pub Date : 2024-04-22 DOI: 10.1145/3660649
Rodrigo Chacón Quesada, Y. Demiris
Controlling assistive robots can be challenging for some users, especially those lacking relevant experience. Augmented Reality (AR) User Interfaces (UIs) have the potential to facilitate this task. Although extensive research regarding legged manipulators exists, comparatively little of it addresses their UIs. Most existing UIs leverage traditional control interfaces such as joysticks, Hand-held (HH) controllers, and 2D UIs. These interfaces not only risk being unintuitive, thus discouraging interaction with the robot partner, but also draw the operator’s focus away from the task and towards the UI. This shift in attention raises additional safety concerns, particularly in potentially hazardous environments where legged manipulators are frequently deployed. Moreover, traditional interfaces limit the operators’ availability to use their hands for other tasks. Towards overcoming these limitations, in this article, we provide a user study comparing an AR Head Mounted Display (HMD) UI we developed for controlling a legged manipulator against off-the-shelf control methods for such robots. This user study involved 27 participants and 135 trials, from which we gathered over 405 completed questionnaires. These trials involved multiple navigation and manipulation tasks with varying difficulty levels using a Boston Dynamics (BD) Spot®, a 7 DoF Kinova® robot arm, and a Robotiq® 2F-85 gripper that we integrated into a legged manipulator. We made the comparison between UIs across multiple dimensions relevant to a successful human-robot interaction. These dimensions include cognitive workload, technology acceptance, fluency, system usability, immersion and trust. Our study employed a factorial experimental design with participants undergoing five different conditions, generating longitudinal data. Due to potential unknown distributions and outliers in such data, using parametric methods for its analysis is questionable, and while non-parametric alternatives exist, they may lead to reduced statistical power. Therefore, to analyse the data that resulted from our experiment, we chose Bayesian data analysis as an effective alternative to address these limitations. Our results show that AR UIs can outpace HH-based control methods and reduce the cognitive requirements when designers include hands-free interactions and cognitive offloading principles into the UI. Furthermore, the use of the AR UI together with our cognitive offloading feature resulted in higher usability scores and significantly higher fluency and Technology Acceptance Model (TAM) scores. Regarding immersion, our results revealed that the response values for the Augmented Reality Immersion (ARI) questionnaire associated with the AR UI are significantly higher than those associated with the HH UI, regardless of the main interaction method with the former, i.e., hand gestures or cognitive offloading. Derived from the participants’ qualitative answers, we believe this is due to a combination of factors, most notably the free use of both hands when wearing the HMD and the ability to see the real environment without shifting attention to the UI. Regarding trust, our results showed no significant differences in trust scores across the UI options. However, during the manipulation stage of the user study, in which participants could choose their preferred UI, they consistently reported higher trust than in the navigation category. Moreover, the proportion of participants who chose the AR UI for this manipulation stage changed dramatically once the cognitive offloading feature was included. Trust therefore appears to have mediated the use or non-use of the UIs along a dimension different from those considered in our study, i.e., delegation and reliance. Overall, we find that an AR HMD UI for controlling legged manipulators improves human-robot interaction across multiple relevant dimensions, underscoring the critical role of UI design in the effective and trustworthy use of robotic systems.
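The abstract motivates Bayesian data analysis for longitudinal questionnaire data with possible outliers but does not specify the models used. The snippet below is therefore only a hedged sketch of one standard robust approach, a Student-t likelihood on paired score differences in the spirit of Kruschke's BEST model, written with PyMC; the priors, variable names, and simulated data are assumptions, not the authors' choices.

```python
# Hedged sketch (not the paper's analysis): robust Bayesian comparison of two UI
# conditions using a Student-t likelihood on paired differences, which tolerates
# outliers better than a normal likelihood. Priors and data are assumptions.
import numpy as np
import pymc as pm
import arviz as az

rng = np.random.default_rng(0)
# Hypothetical paired differences: per-participant AR-UI score minus HH-UI score.
diff = rng.normal(0.8, 1.0, size=27)

with pm.Model():
    mu = pm.Normal("mu", mu=0.0, sigma=2.0)               # mean paired difference
    sigma = pm.HalfNormal("sigma", sigma=2.0)             # scale of the differences
    nu = pm.Exponential("nu_minus_one", 1 / 29.0) + 1     # heavy-tailed degrees of freedom
    pm.StudentT("obs", nu=nu, mu=mu, sigma=sigma, observed=diff)
    idata = pm.sample(2000, tune=1000, chains=4, random_seed=0)

print(az.summary(idata, var_names=["mu", "sigma"]))
# Posterior probability that the AR UI condition scores higher than the HH UI condition.
print("P(mu > 0) =", float((idata.posterior["mu"] > 0).mean()))
```

Reporting the full posterior (rather than a single p-value) is one reason such models are attractive for small, longitudinal HRI samples, though the paper's actual model structure may differ.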
{"title":"Multi-Dimensional Evaluation of an Augmented Reality Head-Mounted Display User Interface for Controlling Legged Manipulators","authors":"Rodrigo Chacón Quesada, Y. Demiris","doi":"10.1145/3660649","DOIUrl":"https://doi.org/10.1145/3660649","url":null,"abstract":"\u0000 Controlling assistive robots can be challenging for some users, especially those lacking relevant experience. Augmented Reality (AR) User Interfaces (UIs) have the potential to facilitate this task. Although extensive research regarding legged manipulators exists, comparatively little is on their UIs. Most existing UIs leverage traditional control interfaces such as joysticks, Hand-held (HH) controllers, and 2D UIs. These interfaces not only risk being unintuitive, thus discouraging interaction with the robot partner, but also draw the operator’s focus away from the task and towards the UI. This shift in attention raises additional safety concerns, particularly in potentially hazardous environments where legged manipulators are frequently deployed. Moreover, traditional interfaces limit the operators’ availability to use their hands for other tasks. Towards overcoming these limitations, in this article, we provide a user study comparing an AR Head Mounted Display (HMD) UI we developed for controlling a legged manipulator against off-the-shelf control methods for such robots. This user study involved 27 participants and 135 trials, from which we gathered over 405 completed questionnaires. These trials involved multiple navigation and manipulation tasks with varying difficulty levels using a Boston Dynamics (BD) Spot\u0000 ®\u0000 , a 7 DoF Kinova\u0000 ®\u0000 robot arm, and a Robotiq\u0000 ®\u0000 2F-85 gripper that we integrated into a legged manipulator. We made the comparison between UIs across multiple dimensions relevant to a successful human-robot interaction. These dimensions include cognitive workload, technology acceptance, fluency, system usability, immersion and trust. Our study employed a factorial experimental design with participants undergoing five different conditions, generating longitudinal data. Due to potential unknown distributions and outliers in such data, using parametric methods for its analysis is questionable, and while non-parametric alternatives exist, they may lead to reduced statistical power. Therefore, to analyse the data that resulted from our experiment, we chose Bayesian data analysis as an effective alternative to address these limitations. Our results show that AR UIs can outpace HH-based control methods and reduce the cognitive requirements when designers include hands-free interactions and cognitive offloading principles into the UI. Furthermore, the use of the AR UI together with our cognitive offloading feature resulted in higher usability scores and significantly higher fluency and Technology Acceptance Model (TAM) scores. Regarding immersion, our results revealed that the response values for the Augmented Reality Immersion (ARI) questionnaire associated with the AR UI are significantly higher than those associated with the HH UI, regardless of the main interaction method with the former, i.e., hand gestures or cognitive offloading. 
Derived from the participants’ qualitative answers, we believe this is due to a combination of facto","PeriodicalId":504644,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-04-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140674546","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Designing Socially Assistive Robots
Pub Date : 2024-04-11 DOI: 10.1145/3657646
Ela Liberman-Pincu, Oliver Korn, Jonas Grund, Elmer D. Van Grondelle, T. Oron-Gilad
Socially assistive robots (SARs) are becoming more prevalent in everyday life, emphasizing the need to make them socially acceptable and aligned with users' expectations. Robots' appearance impacts users' behaviors and attitudes towards them. Therefore, product designers choose visual qualities to give the robot a character and to imply its functionality and personality. In this work, we sought to investigate the effect of cultural differences on Israeli and German designers' perceptions of SARs' roles and appearance in four different contexts: a service robot for an assisted living/retirement residence facility, a medical assistant robot for a hospital environment, a COVID-19 officer robot, and a personal assistant robot for domestic use. The key insight is that although Israeli and German designers share similar perceptions of visual qualities for most of the robotics roles, we found differences in the perception of the COVID-19 officer robot's role and, by that, its most suitable visual design. This work indicates that context and culture play a role in users' perceptions and expectations; therefore, they should be taken into account when designing new SARs for diverse contexts.
{"title":"Designing Socially Assistive Robots","authors":"Ela Liberman-Pincu, Oliver Korn, Jonas Grund, Elmer D. Van Grondelle, T. Oron-Gilad","doi":"10.1145/3657646","DOIUrl":"https://doi.org/10.1145/3657646","url":null,"abstract":"Socially assistive robots (SARs) are becoming more prevalent in everyday life, emphasizing the need to make them socially acceptable and aligned with users' expectations. Robots' appearance impacts users' behaviors and attitudes towards them. Therefore, product designers choose visual qualities to give the robot a character and to imply its functionality and personality. In this work, we sought to investigate the effect of cultural differences on Israeli and German designers' perceptions of SARs' roles and appearance in four different contexts: a service robot for an assisted living/retirement residence facility, a medical assistant robot for a hospital environment, a COVID-19 officer robot, and a personal assistant robot for domestic use. The key insight is that although Israeli and German designers share similar perceptions of visual qualities for most of the robotics roles, we found differences in the perception of the COVID-19 officer robot's role and, by that, its most suitable visual design. This work indicates that context and culture play a role in users' perceptions and expectations; therefore, they should be taken into account when designing new SARs for diverse contexts.","PeriodicalId":504644,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140714718","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0