
Latest Publications: 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI)

An Antisocial Social Robot: Using Negative Affect to Reinforce Cooperation in Human-Robot Interactions
Pub Date : 2019-03-11 DOI: 10.1109/HRI.2019.8673264
Hideki Garcia Goo, Jaime Alvarez Perez, Virginia Contreras
Inspired by prior work with robots that physically display positive emotion (e.g., [1]), we were interested to see how people might interact with a robot capable of communicating cues of negative affect such as anger. Based in particular on [2], we have prototyped an anti-social, zoomorphic robot equipped with a spike mechanism to nonverbally communicate anger. The robot's embodiment involves a simple dome-like morphology with a ring of inflatable spikes wrapped around its circumference. Ultrasonic sensors engage the robot's antisocial cuing (e.g., “spiking” when a person comes too close). To evaluate people's perceptions of the robot and the impact of the spike mechanism on their behavior, we plan to deploy the robot in social settings where it would be inappropriate for a person to approach (e.g., in front of a door with a “do not disturb” sign). We expect that exploration of robot antisociality, in addition to prosociality, will help inform the design of more socially complex human-robot interactions.
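The proximity-triggered cueing the abstract describes can be sketched as follows. This is a hypothetical illustration, not the authors' code; the distance threshold and the hysteresis band are assumptions:

```python
# Hypothetical sketch (not from the paper): an ultrasonic distance reading
# drives the robot's "spike" actuation when a person comes too close.
PERSONAL_SPACE_CM = 50  # assumed threshold; the paper does not report one

def spike_state(distance_cm: float, inflated: bool) -> bool:
    """Return whether the spikes should be inflated, with simple
    hysteresis so the robot does not oscillate at the boundary."""
    if distance_cm < PERSONAL_SPACE_CM:
        return True                       # intruder: inflate spikes
    if distance_cm > PERSONAL_SPACE_CM * 1.5:
        return False                      # intruder gone: deflate
    return inflated                       # in the dead band: keep state

# A person approaching and then retreating:
state = False
trace = []
for reading in [120, 80, 45, 40, 60, 90]:
    state = spike_state(reading, state)
    trace.append(state)
# trace == [False, False, True, True, True, False]
```

The hysteresis band keeps the antisocial cue stable while a person lingers near the boundary, which matters for a display meant to be read as a deliberate emotional signal rather than sensor noise.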
Pages: 763-764
Citations: 2
Infrasound for HRI: A Robot Using Low-Frequency Vibrations to Impact How People Perceive its Actions
Pub Date : 2019-03-11 DOI: 10.1109/HRI.2019.8673172
Raquel Thiessen, Daniel J. Rea, Diljot S. Garcha, Cheng Cheng, J. Young
We investigate robots using infrasound, low-frequency vibrational energy at or near the human hearing threshold, as an interaction tool for working with people. Research in psychology suggests that the presence of infrasound can impact a person's emotional state and mood, even when the person is not acutely aware of the infrasound. Although often not noticed, infrasound is commonly present in many situations including factories, airports, or near motor vehicles. Further, a robot itself can produce infrasound. Thus, we examine if infrasound may impact how people interpret a robot's social communication: if the presence of infrasound makes a robot seem more or less happy, energetic, etc., as a result of impacting a person's mood. We present the results from a series of experiments that investigate how people rate a social robot's emotionally-charged gestures, and how varied levels and sources of infrasound impact these ratings. Our results show that infrasound does have a psychological effect on the person's perception of the robot's behaviors, supporting this as a technique that a robot can use as part of its interaction design toolkit. We further provide a comparison of infrasound generation methods.
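The paper compares infrasound generation methods; as a minimal sketch of the simplest one, the following synthesizes a sine tone near the ~20 Hz hearing threshold as raw samples (the frequency, duration, and amplitude values are assumptions for illustration, not the study's parameters):

```python
import math

# Hypothetical sketch (not the authors' code): synthesize an infrasound
# sine tone as floating-point samples in [-amplitude, amplitude], suitable
# for playback through a driver capable of very low frequencies.
def infrasound_tone(freq_hz=18.0, seconds=2.0, sample_rate=44100, amplitude=0.8):
    n = int(seconds * sample_rate)
    return [amplitude * math.sin(2 * math.pi * freq_hz * t / sample_rate)
            for t in range(n)]

samples = infrasound_tone()   # 2 s of an 18 Hz tone at 44.1 kHz
```

At 18 Hz each cycle spans 2,450 samples, so even short clips need a high sample count; loudspeaker-based generation of this kind is one of the source types whose perceptual effects the experiments compare.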
Pages: 11-18
Citations: 13
Human-Robot-Collaboration (HRC): Social Robots as Teaching Assistants for Training Activities in Small Groups
Pub Date : 2019-03-11 DOI: 10.1109/HRI.2019.8673103
Rinat B. Rosenberg-Kima, Yaacov Koren, Maya Yachini, Goren Gordon
Can we find real value for educational social robots in the very near future? We argue that the answer is yes. Specifically, in a classroom we observed, we identified a common gap: the instructor divided the class into small groups to work on a learning activity and could not address all their questions simultaneously. The purpose of this study was to examine whether social robots can assist in this scenario. In particular, we were interested to find whether a physical robot serves this purpose better than other technologies such as tablets. Benefits and drawbacks of the robot facilitator are discussed.
Pages: 522-523
Citations: 21
Explanation-Based Reward Coaching to Improve Human Performance via Reinforcement Learning
Pub Date : 2019-03-11 DOI: 10.1109/HRI.2019.8673104
Aaquib Tabrez, Shivendra Agrawal, Bradley Hayes
For robots to effectively collaborate with humans, it is critical to establish a shared mental model amongst teammates. In the case of incongruous models, catastrophic failures may occur unless mitigating steps are taken. To identify and remedy these potential issues, we propose a novel mechanism for enabling an autonomous system to detect model disparity between itself and a human collaborator, infer the source of the disagreement within the model, evaluate potential consequences of this error, and finally, provide human-interpretable feedback to encourage model correction. This process effectively enables a robot to provide a human with a policy update based on perceived model disparity, reducing the likelihood of costly or dangerous failures during joint task execution. This paper makes two contributions at the intersection of explainable AI (xAI) and human-robot collaboration: 1) The Reward Augmentation and Repair through Explanation (RARE) framework for estimating task understanding and 2) A human subjects study illustrating the effectiveness of reward augmentation-based policy repair in a complex collaborative task.
Pages: 249-257
Citations: 47
Human-Centered, Ergonomic Wearable Device with Computer Vision Augmented Intelligence for VR Multimodal Human-Smart Home Object Interaction
Pub Date : 2019-03-11 DOI: 10.1109/HRI.2019.8673156
Ker-Jiun Wang, C. Zheng, Zhihong Mao
In the future, human-robot interaction should be enabled by a compact, human-centered, and ergonomic wearable device that can merge human and machine seamlessly by constantly identifying each other's intentions. In this paper, we showcase an ergonomic, lightweight wearable device that can identify a person's eye/facial gestures from physiological signal measurements. Since human intentions are usually coupled with eye movements and facial expressions, proper design of interactions using these gestures lets people interact naturally with robots or smart home objects. Combined with computer vision object-recognition algorithms, this allows people to use very simple and straightforward communication strategies to operate a telepresence robot and control smart home objects remotely, totally hands-free. People can wear a VR head-mounted display, see through the robot's eyes (a remote camera attached to the robot), and interact with smart home devices intuitively through simple facial gestures or blinks of the eyes. As an assistive tool, this is tremendously beneficial for people with motor impairments. People without disabilities can also free their hands for other tasks while operating smart home devices, as one of several multimodal control strategies.
Pages: 767-768
Citations: 15
Balanced Information Gathering and Goal-Oriented Actions in Shared Autonomy
Pub Date : 2019-03-11 DOI: 10.1109/HRI.2019.8673192
Connor Brooks, D. Szafir
Robotic teleoperation can be a complex task due to factors such as high degree-of-freedom manipulators, operator inexperience, and limited operator situational awareness. To reduce teleoperation complexity, researchers have developed the shared autonomy control paradigm that involves joint control of a robot by a human user and an autonomous control system. We introduce the concept of active learning into shared autonomy by developing a method for systems to leverage information gathering: minimizing the system's uncertainty about user goals by moving to information-rich states to observe user input. We create a framework for balancing information gathering actions, which help the system gain information about user goals, with goal-oriented actions, which move the robot towards the goal the system has inferred from the user. We conduct an evaluation within the context of users who are multitasking that compares pure teleoperation with two forms of shared autonomy: our balanced system and a traditional goal-oriented system. Our results show significant improvements for both shared autonomy systems over pure teleoperation in terms of belief convergence about the user's goal and task completion speed and reveal trade-offs across shared autonomy strategies that may inform future investigations in this space.
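The core loop the abstract describes — reducing the system's uncertainty about the user's goal before committing to goal-oriented motion — can be sketched as a Bayesian belief update over candidate goals. This is an assumed formulation for illustration, not the authors' implementation; the goal names, likelihood values, and entropy threshold are all hypothetical:

```python
import math

# Hypothetical sketch: maintain a belief over candidate goals from observed
# user inputs, and switch from information-gathering to goal-oriented
# action once the belief entropy falls below a threshold.
def update_belief(belief, likelihoods):
    """Bayes update: posterior[g] ∝ belief[g] * P(observation | goal g)."""
    post = {g: belief[g] * likelihoods[g] for g in belief}
    z = sum(post.values())
    return {g: p / z for g, p in post.items()}

def entropy(belief):
    """Shannon entropy in bits; 0 means the goal is certain."""
    return -sum(p * math.log2(p) for p in belief.values() if p > 0)

belief = {"goal_A": 0.5, "goal_B": 0.5}   # uniform prior over two goals
# One user input that fits goal_A better than goal_B:
belief = update_belief(belief, {"goal_A": 0.9, "goal_B": 0.2})
mode = "goal_oriented" if entropy(belief) < 0.5 else "info_gathering"
# After a single observation the entropy is still ~0.68 bits, so the
# system keeps gathering information rather than committing.
```

Information-gathering actions move the robot toward states where user inputs discriminate sharply between goals (large likelihood ratios), which is what drives the entropy down fastest.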
Pages: 85-94
Citations: 16
Design of a Human Multi-Robot Interaction Medium of Cognitive Perception
Pub Date : 2019-03-11 DOI: 10.1109/HRI.2019.8673188
Wonse Jo, J. Park, Sangjun Lee, Ahreum Lee, B. Min
We present a new multi-robot system as a means of creating a visual communication cue that can add dynamic illustration to static figures or diagrams to enhance the power of delivery and improve an audience's attention. The proposed idea is that when a presenter/speaker writes something such as a shape or letter on a whiteboard table, multiple mobile robots trace the shape or letter while dynamically expressing it. The dynamic movement of the multi-robot team further stimulates the audience's cognitive perception of the handwriting, positively affecting comprehension of the content. To do this, we apply image processing algorithms to extract feature points from a handwritten shape or letter, while a task allocation algorithm deploys the robots to the feature points to highlight the shape or letter. We present preliminary experimental results that verify the proposed system with various characters and letters, such as the English alphabet.
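The task-allocation step — deploying robots to the extracted feature points — can be sketched with a simple greedy nearest-neighbor assignment. This is an assumed approach for illustration, not the authors' algorithm, and the coordinates are made up:

```python
# Hypothetical sketch: greedily assign each mobile robot to its nearest
# unclaimed feature point of a handwritten stroke, so the group
# collectively outlines the shape.
def allocate(robots, points):
    """Map robot index -> feature-point index by greedy nearest match."""
    free = set(range(len(points)))
    assignment = {}
    for i, (rx, ry) in enumerate(robots):
        # Squared Euclidean distance to each still-unclaimed point.
        j = min(free, key=lambda k: (points[k][0] - rx) ** 2
                                    + (points[k][1] - ry) ** 2)
        assignment[i] = j
        free.remove(j)
    return assignment

robots = [(0, 0), (10, 0)]        # current robot positions
points = [(9, 1), (1, 1)]         # feature points extracted from the stroke
assignment = allocate(robots, points)
# assignment == {0: 1, 1: 0}: each robot takes the point nearest to it.
```

Greedy matching is order-dependent and can be globally suboptimal; an optimal alternative would solve the assignment problem (e.g., the Hungarian algorithm) over the full robot-to-point cost matrix.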
Pages: 652-653
Citations: 4
Improving Human-Robot Interaction Through Explainable Reinforcement Learning
Pub Date : 2019-03-11 DOI: 10.1109/HRI.2019.8673198
Aaquib Tabrez, Bradley Hayes
Gathering the most informative data from humans without overloading them remains an active research area in AI, and is closely coupled with the problems of determining how and when information should be communicated to others [12]. Current decision support systems (DSS) are still overly simple and static, and cannot adapt to changing environments we expect to deploy in modern systems [3], [4], [9], [11]. They are intrinsically limited in their ability to explain rationale versus merely listing their future behaviors, limiting a human's understanding of the system [2], [7]. Most probabilistic assessments of a task are conveyed after the task/skill is attempted rather than before [10], [14], [16]. This limits failure recovery and danger avoidance mechanisms. Existing work on predicting failures relies on sensors to accurately detect explicitly annotated and learned failure modes [13]. As such, important non-obvious pieces of information for assessing appropriate trust and/or course-of-action (COA) evaluation in collaborative scenarios can go overlooked, while irrelevant information may instead be provided that increases clutter and mental workload. Understanding how AI models arrive at specific decisions is a key principle of trust [8]. Therefore, it is critically important to develop new strategies for anticipating, communicating, and explaining justifications and rationale for AI driven behaviors via contextually appropriate semantics.
Pages: 751-753
Citations: 34
Lifespan Design of Conversational Agent with Growth and Regression Metaphor for the Natural Supervision on Robot Intelligence
Pub Date : 2019-03-01 DOI: 10.1109/HRI.2019.8673212
Chanmi Park, Jung Yeon Lee, Hyoung Woo Baek, Hae-Sung Lee, Jeehang Lee, Jinwoo Kim
Direct human supervision of a robot's erroneous behavior is crucial to enhancing robot intelligence toward "flawless" human-robot interaction. However, motivating humans to engage more actively for this purpose is difficult. To alleviate this strain, this research proposes a novel approach: a growth-and-regression metaphoric interaction design inspired by the communicative, intellectual, and social competencies that humans acquire across developmental stages. We implemented this interaction design principle in a conversational agent combined with a set of synthetic sensors. Within this context, we aim to show that the agent successfully encourages online labeling activity in response to the faulty behavior of robots as a supervision process. A field study will be conducted to evaluate the efficacy of our proposal by measuring the annotation performance of real-time activity events in the wild. We expect this to provide a more effective and practical means of supervising robots through real-time data labeling for long-term use in human-robot interaction.
Pages: 646-647
Citations: 0
Exploring Collaborative Interactions Between Robots and Blind People
Pub Date : 2019-03-01 DOI: 10.1109/HRI.2019.8673312
Filipa Correia, Raquel Oliveira, Mayara Bonani, André Rodrigues, Tiago Guerreiro, Ana Paiva
Our goal is to disseminate an exploratory investigation that examined how physical presence and collaboration can be important factors in the development of assistive robots that go beyond information-giving technologies. In particular, this video exhibits the setting and procedures of a user study that explored different types of collaborative interactions between robots and blind people.
Published in: 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), p. 365 (2019-03-01).
Citations: 1