
Latest Publications from ACM Transactions on Human-Robot Interaction

Robots for Learning 7 (R4L): A Look from Stakeholders' Perspective
IF 5.1 Q2 ROBOTICS Pub Date: 2023-03-13 DOI: 10.1145/3568294.3579958
D. Tozadore, Jauwairia Nasir, Sarah Gillet, Rianne van den Berghe, Arzu Guneysu, W. Johal
This year's conference theme, "HRI for all," not only raises the importance of reflecting on how to promote inclusion for every type of user, but also calls for careful consideration of the different layers of people potentially impacted by such systems. In educational setups, for instance, the users to be considered first and foremost are the learners. However, teachers, school directors, therapists, and parents also form a secondary layer of users in this ecosystem. The 7th edition of R4L focuses on the issues that HRI experiments in educational environments may cause for stakeholders and on how we can better bring stakeholders' points of view into the loop. This goal is to be achieved in a practical and dynamic way by means of: (i) lightning talks from the participants; (ii) two discussion panels with special guests: one with active researchers from academia and industry about their experience and views on the inclusion of stakeholders, and another with teachers, school directors, and parents who are or were involved in HRI experiments and will share their viewpoints; (iii) semi-structured group discussions and hands-on activities with participants and panellists to evaluate and propose guidelines for good practice on promoting the inclusion of stakeholders, especially teachers, in educational HRI activities. By gathering the viewpoints of experimenters and stakeholders and analysing them in the same workshop, we expect to identify current gaps, propose practical solutions to bridge them, and capitalise on existing synergies through the collective intelligence of the two communities.
Citations: 0
Language Models for Human-Robot Interaction
IF 5.1 Q2 ROBOTICS Pub Date: 2023-03-13 DOI: 10.1145/3568294.3580040
E. Billing, Julia Rosén, M. Lamb
Recent advances in large-scale language models have significantly changed the landscape of automatic dialogue systems and chatbots. We believe that these models also have great potential to change the way we interact with robots. Here, we present the first integration of the OpenAI GPT-3 language model with the Aldebaran Pepper and Nao robots. The present work transforms the text-based API of GPT-3 into an open verbal dialogue with the robots. The system will be presented live during the HRI2023 conference, and the source code of the integration is shared in the hope that it will serve the community in designing and evaluating new dialogue systems for robots.
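The released integration itself is not reproduced here, but a minimal sketch of the general loop the abstract describes (send the user's utterance to the GPT-3 completion endpoint, then speak the reply through the robot) might look as follows. It assumes the legacy `openai` Python completions library and a Python 3 build of the NAOqi `qi` SDK; the robot address, engine name, prompt format, and the typed-input stand-in for speech recognition are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (not the authors' released code): GPT-3 text completions
# driving a spoken dialogue on a NAOqi robot (Pepper/Nao).
# Assumptions: legacy `openai` completions API, Python 3 `qi` SDK,
# placeholder robot IP and engine name, typed input instead of real STT.
import openai
import qi

openai.api_key = "YOUR_API_KEY"              # placeholder; normally read from the environment
session = qi.Session()
session.connect("tcp://192.168.1.10:9559")   # placeholder robot address
tts = session.service("ALTextToSpeech")

history = "The following is a conversation between a helpful robot and a human.\n"

while True:
    user_utterance = input("Human: ")        # stand-in for a speech-to-text front end
    if not user_utterance:
        break
    history += f"Human: {user_utterance}\nRobot:"
    completion = openai.Completion.create(
        engine="text-davinci-003",           # placeholder GPT-3 completion engine
        prompt=history,
        max_tokens=100,
        temperature=0.7,
        stop=["Human:"],
    )
    reply = completion.choices[0].text.strip()
    history += f" {reply}\n"
    tts.say(reply)                           # robot speaks the model's reply
```

In the actual system, the typed input would be replaced by a speech-recognition front end and the prompt and turn handling would be richer; the point of the sketch is only the text-API-to-spoken-dialogue plumbing.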
Citations: 7
Robot Theory of Mind with Reverse Psychology
IF 5.1 Q2 ROBOTICS Pub Date: 2023-03-13 DOI: 10.1145/3568294.3580144
Chuang Yu, Baris Serhan, M. Romeo, A. Cangelosi
Theory of mind (ToM) corresponds to the human ability to infer other people's desires, beliefs, and intentions. Acquiring ToM skills is crucial for natural interaction between robots and humans. A core component of ToM is the ability to attribute false beliefs. In this paper, a collaborative robot tries to assist a human partner who plays a trust-based card game against another human. The robot infers its partner's trust in the robot's decision system via reinforcement learning. Robot ToM here refers to the ability to implicitly anticipate the human collaborator's strategy and inject that prediction into the robot's optimal decision model for better team performance. In our experiments, the robot learns when its human partner does not trust it and adjusts the recommendations in its optimal policy accordingly to keep team performance effective. Interestingly, the optimal robot policy attempts to use reverse psychology on its human collaborator when trust is low. This finding will provide guidance for the study of trustworthy robot decision models with a human partner in the loop.
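The abstract does not spell out the decision model or reward structure, so the sketch below is illustrative only: a one-step (bandit-style) Q-learning agent whose state includes a discrete trust estimate, paired with an invented compliance model in which a distrusting partner tends to do the opposite of whatever is recommended. Under these assumptions the learned policy recommends the good option at high trust and the bad option at low trust, i.e. the reverse-psychology behaviour the abstract reports.

```python
# Illustrative only: the environment, trust levels, compliance model and
# rewards are invented for this sketch and are not taken from the paper.
import random
from collections import defaultdict

ACTIONS = ["recommend_good", "recommend_bad"]   # what the robot suggests to its partner
TRUST_LEVELS = ["low", "high"]

def human_follows(trust):
    # Assumed compliance model: a trusting partner usually follows the advice,
    # a distrusting partner usually does the opposite.
    return random.random() < (0.9 if trust == "high" else 0.2)

def team_reward(trust, action):
    follows = human_follows(trust)
    picks_good = (action == "recommend_good") == follows
    return 1.0 if picks_good else -1.0

Q = defaultdict(float)
alpha, epsilon, episodes = 0.1, 0.1, 20000

for _ in range(episodes):
    trust = random.choice(TRUST_LEVELS)          # in the paper, trust is inferred by the robot
    if random.random() < epsilon:                # epsilon-greedy exploration
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q[(trust, a)])
    r = team_reward(trust, action)
    Q[(trust, action)] += alpha * (r - Q[(trust, action)])   # one-step value update

for trust in TRUST_LEVELS:
    print(trust, max(ACTIONS, key=lambda a: Q[(trust, a)]))
# Expected: high -> recommend_good, low -> recommend_bad (reverse psychology).
```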
Citations: 1
L2 Vocabulary Learning Through Lexical Inferencing Stories With a Social Robot
IF 5.1 Q2 ROBOTICS Pub Date: 2023-03-13 DOI: 10.1145/3568294.3580140
Hoi Ki Tang, Matthijs H. J. Smakman, M. De Haas, Rianne van den Berghe
Vocabulary is a crucial part of second language (L2) learning. Children learn new vocabulary by forming mental-lexicon relations with their existing knowledge. This is called lexical inferencing: using the available clues and knowledge to guess the meaning of an unknown word. This study explored the potential of second-language vocabulary acquisition through lexical inferencing in child-robot interaction. A storytelling robot read a book in Dutch to Dutch kindergartners (N = 36, aged 4-6 years) in which a few key words were translated into French (L2), with the robot either providing additional word-explanation cues or not. The results showed that the children successfully learned the key words from the reading session with the storytelling robot, but that the robot's additional word-explanation cues had no significant effect. Overall, lexical inferencing seems promising as a new and different way to teach kindergartners a second language.
Citations: 0
Robots in Real Life: Putting HRI to Work
IF 5.1 Q2 ROBOTICS Pub Date: 2023-03-13 DOI: 10.1145/3568162.3578810
A. Thomaz
This talk will focus on the unique challenges in deploying a mobile manipulation robot into an environment where the robot works closely with people on a daily basis. Diligent Robotics' first product, Moxi, is a mobile manipulation service robot at work in hospitals today, assisting nurses and other front-line staff with materials-management tasks. The talk will dive into the computational complexity of developing a mobile manipulator with social intelligence. Dr. Thomaz will focus on how human-robot interaction theories and algorithms translate into the real world, and on the impact on functionality and perception of robots that perform delivery tasks in a busy human environment. The talk will include many examples and data from the field, with commentary and discussion around both the expected and unexpected hard problems in building robots that operate 24/7 as reliable teammates. BIO: Andrea Thomaz is the CEO and Co-Founder of Diligent Robotics. Her accolades include being named a Kavli Fellow by the National Academy of Sciences, recognition from the US President's Council of Advisors on Science and Technology (PCAST), a place on the MIT Technology Review TR35 list, and a featured TEDx keynote on social robotics. Dr. Thomaz has received numerous research grants, including the NSF CAREER award and the Office of Naval Research Young Investigator Award. Andrea has published in the areas of Artificial Intelligence, Robotics, and Human-Robot Interaction. Her research aims to computationally model mechanisms of human social learning and interaction, in order to build social robots and other machines that everyday people can teach intuitively. She earned her Ph.D. from MIT and her B.S. in Electrical and Computer Engineering from UT Austin, and was a Robotics Professor at UT Austin and Georgia Tech (where she directed the Socially Intelligent Machines Lab). Andrea co-founded Diligent Robotics in 2018 to pursue her vision of creating socially intelligent robot assistants that collaborate with humans by doing their chores, so humans can have more time for the work they care most about.
Citations: 0
Development of a Wearable Robot that Moves on the Arm to Support the Daily Life of the User
IF 5.1 Q2 ROBOTICS Pub Date: 2023-03-13 DOI: 10.1145/3568294.3579983
Koji Kimura, F. Tanaka
Wearable robots can maintain physical contact with the user and interact with them to assist in daily life. However, since most wearable robots operate at a single point on the user's body, the user must be constantly aware of their presence. This imposes a physical and mental burden on the user and prevents them from wearing the robot daily. One solution to this problem is for the robot to move around the user's body: when the user is not interacting with the robot, it can move to an unobtrusive position and attract less of the user's attention. This research aims to reduce this burden by developing an arm-movement mechanism for wearable robots and a self-localization method for autonomous movement, and to support the user's daily life through supportive interactions.
Citations: 0
Internet of Robotic Cat Toys to Deepen Bond and Elevate Mood
IF 5.1 Q2 ROBOTICS Pub Date: 2023-03-13 DOI: 10.1145/3568294.3580183
I. X. Han, Sarah Witzman
Pets provide important mental support for human beings. Recent advancements in robotics and HRI have led to research and commercial products providing smart solutions to enrich indoor pets' lives. However, most of these products focus on satisfying pets' basic needs, such as feeding and litter cleaning, rather than their mental well-being. In this paper, we present the internet of robotic cat toys, where a group of robotic agents connects to play with our furry friends. Through three iterations, we demonstrate an affordable and flexible design of clip-on robotic agents to transform a static household into an interactive wonderland for pets.
Citations: 0
Human Gesture Recognition with a Flow-based Model for Human Robot Interaction
IF 5.1 Q2 ROBOTICS Pub Date: 2023-03-13 DOI: 10.1145/3568294.3580145
Lanmiao Liu, Chuang Yu, Siyang Song, Zhidong Su, A. Tapus
Human skeleton-based gesture classification plays a dominant role in social robotics. Learning the variety of human skeleton-based gestures can help the robot interact continuously and appropriately in natural human-robot interaction (HRI). In this paper, we propose a flow-based model to classify human gesture actions from skeletal data. Instead of inferring new human skeleton actions from noisy data with a retrained model, our end-to-end model can expand the diversity of gesture-recognition labels from noisy data without retraining. The model first focuses on detecting five human gesture actions (come on, right up, left up, hug, and noise-random action). The accuracy of our online human gesture recognition system matches that of the offline one, and both attain 100% accuracy on the first four actions. Our method also infers new human gesture actions more efficiently without retraining, achieving about 90% accuracy on the noise-random action. The gesture recognition system has been applied to the robot's reaction to human gestures, which is promising for facilitating natural human-robot interaction.
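The authors' architecture is not reproduced here; the sketch below only illustrates the general idea behind flow-based gesture classification: fit one small normalizing flow per gesture class and label a sample with the class whose flow assigns it the highest log-likelihood. It assumes PyTorch and pre-extracted, fixed-length skeleton feature vectors; the feature dimensionality, coupling-layer sizes, and toy training data are placeholders.

```python
# Illustrative sketch (not the authors' model): class-conditional normalizing
# flows over skeleton feature vectors, classification by maximum log-likelihood.
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """RealNVP-style coupling: transform half of the features conditioned on the other half."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.half = dim // 2
        self.rest = dim - self.half
        self.net = nn.Sequential(nn.Linear(self.half, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 2 * self.rest))

    def forward(self, x):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        s, t = self.net(x1).chunk(2, dim=-1)
        s = torch.tanh(s)                          # bounded log-scales for stability
        z2 = x2 * torch.exp(s) + t
        return torch.cat([x1, z2], dim=-1), s.sum(-1)

class Flow(nn.Module):
    """Stack of coupling layers with fixed feature permutations; exposes log p(x)."""
    def __init__(self, dim, n_layers=4, hidden=64):
        super().__init__()
        self.couplings = nn.ModuleList(AffineCoupling(dim, hidden) for _ in range(n_layers))
        self.perms = [torch.randperm(dim) for _ in range(n_layers)]  # so every dim gets transformed
        self.base = torch.distributions.Normal(0.0, 1.0)

    def log_prob(self, x):
        z, log_det = x, torch.zeros(x.shape[0])
        for perm, coupling in zip(self.perms, self.couplings):
            z, ld = coupling(z[:, perm])
            log_det = log_det + ld
        return self.base.log_prob(z).sum(-1) + log_det

def train_class_flows(features_by_class, dim, epochs=100, lr=1e-3):
    """Fit one flow per gesture class by maximum likelihood on that class's features."""
    flows = {}
    for label, feats in features_by_class.items():
        flow = Flow(dim)
        opt = torch.optim.Adam(flow.parameters(), lr=lr)
        for _ in range(epochs):
            opt.zero_grad()
            (-flow.log_prob(feats).mean()).backward()
            opt.step()
        flows[label] = flow
    return flows

def classify(flows, x):
    """Label one feature vector by the class whose flow gives the highest log-likelihood."""
    scores = {label: flow.log_prob(x.unsqueeze(0)).item() for label, flow in flows.items()}
    return max(scores, key=scores.get)

if __name__ == "__main__":
    dim = 30   # placeholder: e.g. flattened joint coordinates summarising one gesture
    gestures = ["come_on", "right_up", "left_up", "hug", "noise_random"]
    # Toy stand-in data: shifted Gaussian clouds instead of real skeleton features.
    data = {g: torch.randn(64, dim) + 2.0 * i for i, g in enumerate(gestures)}
    flows = train_class_flows(data, dim)
    print(classify(flows, data["hug"][0]))
```

Because classification here is just a likelihood comparison, a flow trained on a new gesture class can be added to the dictionary without retraining the existing flows, which is one way to read the abstract's claim about expanding labels without retraining.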
Citations: 0
Sawarimōto
IF 5.1 Q2 ROBOTICS Pub Date: 2023-03-13 DOI: 10.1145/3568294.3580131
Aidan Edward Fox-Tierney, Kurima Sakai, Masahiro Shiomi, Takashi Minato, Hiroshi Ishiguro
Although robot-to-human touch experiments have been performed, they have all used direct tele-operation with a remote controller, pre-programmed hand motions, or tracked the human with wearable trackers. This report introduces a project that aims to visually track and touch a person's face with a humanoid android, using a single RGB-D camera for 3D pose estimation. There are three major components: 3D pose estimation, a touch sensor for the android's hand, and a controller that combines the pose and sensor information to direct the android's actions. The pose estimation is working and has been released as open source. A touch-sensor glove has been built, and we have begun work on an under-skin version. Finally, we have tested android face-touch control. These tests revealed many hurdles that still need to be overcome, but also showed how convincing the experience already is and the potential of this technology to elicit strong emotional responses.
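The released pose-estimation code is not reproduced here; the sketch below shows the standard computation a single-RGB-D-camera setup relies on: detect a 2D landmark in the colour image and lift it to a 3D point using the aligned depth image and the pinhole intrinsics. MediaPipe as the landmark source and the intrinsic values are assumptions made for illustration.

```python
# Illustrative sketch: 2D landmark + aligned depth -> 3D point via pinhole
# back-projection. MediaPipe and the intrinsics below are placeholder assumptions.
import cv2
import mediapipe as mp
import numpy as np

# Placeholder intrinsics of the RGB-D camera's colour stream.
FX, FY, CX, CY = 615.0, 615.0, 320.0, 240.0

def backproject(u, v, z):
    """Pinhole back-projection: pixel (u, v) at depth z (metres) -> 3D point (X, Y, Z)."""
    return np.array([(u - CX) * z / FX, (v - CY) * z / FY, z])

def nose_tip_3d(color_bgr, depth_metres, pose):
    """Return the 3D position of the nose landmark, or None if it cannot be recovered."""
    results = pose.process(cv2.cvtColor(color_bgr, cv2.COLOR_BGR2RGB))
    if results.pose_landmarks is None:
        return None
    h, w = depth_metres.shape[:2]
    lm = results.pose_landmarks.landmark[mp.solutions.pose.PoseLandmark.NOSE]
    u, v = int(lm.x * w), int(lm.y * h)          # normalised -> pixel coordinates
    if not (0 <= u < w and 0 <= v < h):
        return None
    z = float(depth_metres[v, u])                # assumes depth aligned to colour, in metres
    return backproject(u, v, z) if z > 0 else None

# Usage: frames would come from the RGB-D camera's SDK.
pose = mp.solutions.pose.Pose(static_image_mode=False)
# point = nose_tip_3d(color_frame, depth_frame_metres, pose)
```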
Citations: 0
Can a Robot's Hand Bias Human Attention?
IF 5.1 Q2 ROBOTICS Pub Date: 2023-03-13 DOI: 10.1145/3568294.3580074
Giulia Scorza Azzarà, Joshua Zonca, F. Rea, Joo-Hyun Song, A. Sciutti
Previous studies have revealed that humans prioritize attention to the space near their hands (the so-called near-hand effect). This effect may also occur towards a human partner's hand, but only after sharing a physical joint action. Hence, in human dyads, interaction leads to a shared body representation that may influence basic attentional mechanisms. Our project investigates whether a collaborative interaction with a robot might similarly influence attention. To this aim, we designed an experiment to assess whether the mere presence of a robot with an anthropomorphic hand could bias the human partner's attention. We replicated a classical psychological paradigm for measuring this attentional bias (i.e., the near-hand effect), adding a robotic condition. Preliminary results showed the near-hand effect when participants performed the task with their own hand near the screen, leading to shorter reaction times on the same side as the hand. In contrast, we found no effect for the robot's hand in the absence of prior collaborative interaction with the robot, in line with studies involving human partners.
Citations: 0