
Latest publications: 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI)

Possibility, It's a Mystery: How Keepon's Video Brought Me Here
Pub Date : 2019-03-11 DOI: 10.1109/HRI.2019.8673066
Jangwon Lee
Nearly ten years after “Keepon” appeared at HRI, a question arises: what about the music used in the video “Keepon goes Seoul-searching?” The song “superfantastic,” with the lyric “possibility, it's a mystery,” was written in 2005 by peppertones, a Korean duo celebrating their fifteenth anniversary in 2019. Superfantastic is a song of hope, inspired by the concerns and worries they had while starting a career in popular music, and it conveys a message to “keep on dreaming,” as “your biggest dreams, they might come to reality.” This talk shares life stories of uncertainty: how the band started, what unexpected outcomes it has witnessed, which decisions led its members to this point, and what issues they are currently facing. As one member of the band is also involved in computer music research focusing on mobile music interaction, the talk also covers current research topics and what it is like to live as a multidisciplinary person spanning popular music, television, and computer music research.
Pages: 304-304
Citations: 0
Telesuit: An Immersive User-Centric Telepresence Control Suit
Pub Date : 2019-03-11 DOI: 10.1109/HRI.2019.8673228
I. Cardenas, Kelsey A. Vitullo, Michelle Park, Jong-Hoon Kim, Margarita Benitez, Chanjuan Chen, Linda Ohrn-McDaniels
Telepresence takes place when a user is afforded the experience of being in a remote environment or virtual world through the use of immersive technologies. Such technologies encompass a humanoid robot and a control apparatus that tracks the operator's movements while providing sufficient sensory feedback. This paper considers the control mechanisms that afford telepresence, the requirements for continuous or extended telepresence control, and the health implications of engaging in complex, time-constrained tasks. We present Telesuit, a full-body telepresence control system for operating a humanoid telepresence robot. The suit is part of a broader system that considers the constraints of controlling a dexterous bimanual robotic torso and the need for modular hardware and software that allows for high-fidelity immersiveness. It incorporates a health-monitoring system that collects information such as respiratory effort, galvanic skin response, and heart rate. The platform leverages this information to adjust the telepresence experience and apply control modalities for autonomy. Furthermore, the design of the Telesuit garment considers both functionality and aesthetics.
Pages: 654-655
Citations: 6
Human-Robot-Collaboration (HRC): Social Robots as Teaching Assistants for Training Activities in Small Groups
Pub Date : 2019-03-11 DOI: 10.1109/HRI.2019.8673103
Rinat B. Rosenberg-Kima, Yaacov Koren, Maya Yachini, Goren Gordon
Can we find real value for educational social robots in the very near future? We argue that the answer is yes. Specifically, in a classroom we observed, we identified a common gap: the instructor divided the class into small groups to work on a learning activity and could not address all of their questions simultaneously. The purpose of this study was to examine whether social robots can assist in this scenario. In particular, we were interested in whether a physical robot serves this purpose better than other technologies such as tablets. Benefits and drawbacks of the robot facilitator are discussed.
Pages: 522-523
Citations: 21
Explanation-Based Reward Coaching to Improve Human Performance via Reinforcement Learning
Pub Date : 2019-03-11 DOI: 10.1109/HRI.2019.8673104
Aaquib Tabrez, Shivendra Agrawal, Bradley Hayes
For robots to effectively collaborate with humans, it is critical to establish a shared mental model amongst teammates. In the case of incongruous models, catastrophic failures may occur unless mitigating steps are taken. To identify and remedy these potential issues, we propose a novel mechanism for enabling an autonomous system to detect model disparity between itself and a human collaborator, infer the source of the disagreement within the model, evaluate potential consequences of this error, and finally, provide human-interpretable feedback to encourage model correction. This process effectively enables a robot to provide a human with a policy update based on perceived model disparity, reducing the likelihood of costly or dangerous failures during joint task execution. This paper makes two contributions at the intersection of explainable AI (xAI) and human-robot collaboration: 1) The Reward Augmentation and Repair through Explanation (RARE) framework for estimating task understanding and 2) A human subjects study illustrating the effectiveness of reward augmentation-based policy repair in a complex collaborative task.
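The disparity-detection step described in this abstract can be illustrated with a toy sketch. This is purely an assumption-laden illustration, not the RARE implementation: the robot compares the action a human took against the optimal action under the true reward, and if they differ, searches for the single reward entry whose absence from the human's model best explains the choice; that entry becomes the candidate content of the corrective feedback.

```python
# Toy sketch of reward-model disparity diagnosis (NOT the RARE framework
# itself; action names and reward values are invented for illustration).

def best_action(reward):
    """Optimal action under a reward model mapping action -> value."""
    return max(reward, key=reward.get)

def diagnose(true_reward, observed_action):
    """Return None if the observed behavior is consistent with the true
    reward, else the reward entry the human most plausibly is missing."""
    if observed_action == best_action(true_reward):
        return None
    # Hypothesize the human is unaware of one reward term at a time and
    # check whether that missing term would make their choice optimal.
    for missing in true_reward:
        hypo = {a: (0.0 if a == missing else r) for a, r in true_reward.items()}
        if best_action(hypo) == observed_action:
            return missing
    return None
```

Under this sketch, a human who fetches water when delivering medication is worth more is diagnosed as unaware of the medication reward, which is exactly the kind of human-interpretable feedback the abstract describes.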
Pages: 249-257
Citations: 47
Human-Centered, Ergonomic Wearable Device with Computer Vision Augmented Intelligence for VR Multimodal Human-Smart Home Object Interaction
Pub Date : 2019-03-11 DOI: 10.1109/HRI.2019.8673156
Ker-Jiun Wang, C. Zheng, Zhihong Mao
In the future, human-robot interaction should be enabled by a compact, human-centered, and ergonomic wearable device that can merge human and machine seamlessly by constantly identifying each other's intentions. In this paper, we showcase an ergonomic, lightweight wearable device that can identify a person's eye and facial gestures from physiological signal measurements. Since human intentions are usually coupled with eye movements and facial expressions, proper design of interactions using these gestures lets people interact with robots or smart home objects naturally. Combined with computer vision object recognition algorithms, this allows people to use very simple and straightforward communication strategies to operate a telepresence robot and control smart home objects remotely, entirely hands-free. People can wear a VR head-mounted display, see through the robot's eyes (a remote camera attached to the robot), and interact with smart home devices intuitively through simple facial gestures or blinks. As an assistive tool, it is tremendously beneficial for people with motor impairments. People without disabilities can likewise free their hands for other tasks while operating smart home devices through multimodal control strategies.
Pages: 767-768
Citations: 15
Balanced Information Gathering and Goal-Oriented Actions in Shared Autonomy
Pub Date : 2019-03-11 DOI: 10.1109/HRI.2019.8673192
Connor Brooks, D. Szafir
Robotic teleoperation can be a complex task due to factors such as high degree-of-freedom manipulators, operator inexperience, and limited operator situational awareness. To reduce teleoperation complexity, researchers have developed the shared autonomy control paradigm that involves joint control of a robot by a human user and an autonomous control system. We introduce the concept of active learning into shared autonomy by developing a method for systems to leverage information gathering: minimizing the system's uncertainty about user goals by moving to information-rich states to observe user input. We create a framework for balancing information gathering actions, which help the system gain information about user goals, with goal-oriented actions, which move the robot towards the goal the system has inferred from the user. We conduct an evaluation within the context of users who are multitasking that compares pure teleoperation with two forms of shared autonomy: our balanced system and a traditional goal-oriented system. Our results show significant improvements for both shared autonomy systems over pure teleoperation in terms of belief convergence about the user's goal and task completion speed and reveal trade-offs across shared autonomy strategies that may inform future investigations in this space.
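The balancing idea in this abstract — trading off actions that reduce uncertainty about the user's goal against actions that make progress toward the inferred goal — can be sketched as a weighted score over a Bayesian belief. Everything concrete below (the goal set, the observation model, the weight `w`) is an assumption for illustration, not the authors' system.

```python
import math

# Minimal sketch of balancing information gathering with goal-oriented
# progress in shared autonomy. The belief maps goal -> probability; the
# expected observation likelihoods are treated as point estimates.

def entropy(belief):
    """Shannon entropy (nats) of a belief over goals."""
    return -sum(p * math.log(p) for p in belief.values() if p > 0)

def update_belief(belief, obs_likelihoods):
    """Bayes update: P(g | o) proportional to P(o | g) * P(g)."""
    post = {g: belief[g] * obs_likelihoods[g] for g in belief}
    z = sum(post.values())
    return {g: p / z for g, p in post.items()}

def score_action(belief, expected_likelihoods, progress, w=0.5):
    """Weighted sum of expected entropy reduction (information gathering)
    and progress toward the currently inferred goal (goal-oriented)."""
    info_gain = entropy(belief) - entropy(update_belief(belief, expected_likelihoods))
    return w * info_gain + (1 - w) * progress
```

With a uniform belief, an action that yields a discriminative observation scores on its information gain alone, while an uninformative action scores on its progress term; the weight `w` sets where the system sits between the two strategies the paper compares.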
Pages: 85-94
Citations: 16
Design of a Human Multi-Robot Interaction Medium of Cognitive Perception
Pub Date : 2019-03-11 DOI: 10.1109/HRI.2019.8673188
Wonse Jo, J. Park, Sangjun Lee, Ahreum Lee, B. Min
We present a new multi-robot system as a means of creating a visual communication cue that adds dynamic illustration to static figures or diagrams, enhancing the power of delivery and improving an audience's attention. The proposed idea is that when a presenter or speaker writes something such as a shape or letter on a whiteboard table, multiple mobile robots trace the shape or letter while dynamically expressing it. The dynamic movement of the multi-robot team further stimulates the audience's cognitive perception of the handwriting, positively affecting comprehension of the content. To do this, we apply image processing algorithms to extract feature points from a handwritten shape or letter, while a task allocation algorithm deploys the robots on the feature points to highlight the shape or letter. We present preliminary experimental results that verify the proposed system with various characters, such as letters of the English alphabet.
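The two-stage pipeline the abstract names — feature-point extraction followed by task allocation — can be sketched in miniature. This is a hedged illustration, not the paper's method: the stroke is given directly as a polyline rather than extracted from an image, and the allocation is a simple greedy nearest-assignment rather than whatever allocator the authors use.

```python
import math

# Sketch of the pipeline: (1) resample a handwritten stroke (here a
# polyline) into n evenly spaced feature points, (2) greedily assign
# each feature point the nearest still-unassigned robot.

def sample_feature_points(stroke, n):
    """Resample a polyline (list of (x, y) vertices) into n points
    evenly spaced by arc length. Assumes n >= 2."""
    seg = [math.dist(a, b) for a, b in zip(stroke, stroke[1:])]
    total = sum(seg)
    targets = [total * i / (n - 1) for i in range(n)]
    points, d_acc, j = [], 0.0, 0
    for t in targets:
        # advance to the segment containing arc-length position t
        while j < len(seg) - 1 and d_acc + seg[j] < t:
            d_acc += seg[j]
            j += 1
        frac = 0.0 if seg[j] == 0 else (t - d_acc) / seg[j]
        (x0, y0), (x1, y1) = stroke[j], stroke[j + 1]
        points.append((x0 + frac * (x1 - x0), y0 + frac * (y1 - y0)))
    return points

def allocate(robots, points):
    """Greedy task allocation: map robot index -> feature-point index,
    picking the nearest free robot for each point in order."""
    free = list(range(len(robots)))
    assignment = {}
    for pi, p in enumerate(points):
        ri = min(free, key=lambda r: math.dist(robots[r], p))
        free.remove(ri)
        assignment[ri] = pi
    return assignment
```

A production allocator would typically solve the assignment optimally (e.g., the Hungarian algorithm) to minimize total travel distance; the greedy version keeps the sketch self-contained.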
Pages: 652-653
Citations: 4
Improving Human-Robot Interaction Through Explainable Reinforcement Learning
Pub Date : 2019-03-11 DOI: 10.1109/HRI.2019.8673198
Aaquib Tabrez, Bradley Hayes
Gathering the most informative data from humans without overloading them remains an active research area in AI, and is closely coupled with the problems of determining how and when information should be communicated to others [12]. Current decision support systems (DSS) are still overly simple and static, and cannot adapt to changing environments we expect to deploy in modern systems [3], [4], [9], [11]. They are intrinsically limited in their ability to explain rationale versus merely listing their future behaviors, limiting a human's understanding of the system [2], [7]. Most probabilistic assessments of a task are conveyed after the task/skill is attempted rather than before [10], [14], [16]. This limits failure recovery and danger avoidance mechanisms. Existing work on predicting failures relies on sensors to accurately detect explicitly annotated and learned failure modes [13]. As such, important non-obvious pieces of information for assessing appropriate trust and/or course-of-action (COA) evaluation in collaborative scenarios can go overlooked, while irrelevant information may instead be provided that increases clutter and mental workload. Understanding how AI models arrive at specific decisions is a key principle of trust [8]. Therefore, it is critically important to develop new strategies for anticipating, communicating, and explaining justifications and rationale for AI driven behaviors via contextually appropriate semantics.
Pages: 751-753
Citations: 34
Lifespan Design of Conversational Agent with Growth and Regression Metaphor for the Natural Supervision on Robot Intelligence
Pub Date : 2019-03-01 DOI: 10.1109/HRI.2019.8673212
Chanmi Park, Jung Yeon Lee, Hyoung Woo Baek, Hae-Sung Lee, Jeehang Lee, Jinwoo Kim
Humans' direct supervision of a robot's erroneous behavior is crucial to enhancing robot intelligence toward ‘flawless’ human-robot interaction. Motivating humans to engage more actively for this purpose is, however, difficult. To alleviate such strain, this research proposes a novel approach: a growth-and-regression metaphoric interaction design inspired by the communicative, intellectual, and social competence aspects of human developmental stages. We implemented the interaction design principle in a conversational agent combined with a set of synthetic sensors. Within this context, we aim to show that the agent successfully encourages online labeling activity in response to the faulty behavior of robots as a supervision process. A field study will be conducted to evaluate the efficacy of our proposal by measuring the annotation performance of real-time activity events in the wild. We expect to provide a more effective and practical means to supervise robots through a real-time data labeling process for long-term use in human-robot interaction.
{"title":"Lifespan Design of Conversational Agent with Growth and Regression Metaphor for the Natural Supervision on Robot Intelligence","authors":"Chanmi Park, Jung Yeon Lee, Hyoung Woo Baek, Hae-Sung Lee, Jeehang Lee, Jinwoo Kim","doi":"10.1109/HRI.2019.8673212","DOIUrl":"https://doi.org/10.1109/HRI.2019.8673212","url":null,"abstract":"Human's direct supervision on robot's erroneous behavior is crucial to enhance a robot intelligence for a ‘flawless’ human-robot interaction. Motivating humans to engage more actively for this purpose is however difficult. To alleviate such strain, this research proposes a novel approach, a growth and regression metaphoric interaction design inspired from human's communicative, intellectual, social competence aspect of developmental stages. We implemented the interaction design principle unto a conversational agent combined with a set of synthetic sensors. Within this context, we aim to show that the agent successfully encourages the online labeling activity in response to the faulty behavior of robots as a supervision process. The field study is going to be conducted to evaluate the efficacy of our proposal by measuring the annotation performance of real-time activity events in the wild. 
We expect to provide a more effective and practical means to supervise robot by real-time data labeling process for long-term usage in the human-robot interaction.","PeriodicalId":6600,"journal":{"name":"2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI)","volume":"29 1","pages":"646-647"},"PeriodicalIF":0.0,"publicationDate":"2019-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73886597","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
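The supervision process this abstract describes — a conversational agent soliciting human labels online when the robot's behavior seems faulty — can be sketched minimally as follows. This is an illustrative sketch only; all names (`supervise`, `LabelStore`, the confidence threshold) are hypothetical and not taken from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class LabelStore:
    """Collects human-supplied labels for suspected robot errors."""
    labels: list = field(default_factory=list)

    def add(self, event_id: str, label: str) -> None:
        self.labels.append((event_id, label))

def supervise(events, ask_human, store: LabelStore, threshold: float = 0.5):
    """For each sensed activity event, if the robot's confidence in its
    own behavior is low, have the conversational agent prompt the human
    for a corrective label and record it online."""
    for event_id, confidence in events:
        if confidence < threshold:
            # In the paper's setting this would be the agent's spoken prompt.
            label = ask_human(event_id)
            store.add(event_id, label)
    return store.labels

# Example: two low-confidence events get labeled, one is skipped.
events = [("e1", 0.2), ("e2", 0.9), ("e3", 0.4)]
labels = supervise(events, lambda e: f"label-for-{e}", LabelStore())
print(labels)  # [('e1', 'label-for-e1'), ('e3', 'label-for-e3')]
```

The design point is that labeling happens at interaction time (online), rather than in a separate offline annotation pass.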
Citations: 0
Exploring Collaborative Interactions Between Robots and Blind People
Pub Date : 2019-03-01 DOI: 10.1109/HRI.2019.8673312
Filipa Correia, Raquel Oliveira, Mayara Bonani, André Rodrigues, Tiago Guerreiro, Ana Paiva
Our goal is to disseminate an exploratory investigation that examined how physical presence and collaboration can be important factors in developing assistive robots that go beyond information-giving technologies. In particular, this video presents the setting and procedures of a user study that explored different types of collaborative interactions between robots and blind people.
2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), p. 365.
Citations: 1