
Latest publications from the 2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)

Joint action perception to enable fluent human-robot teamwork
T. Iqbal, Michael J. Gonzales, L. Riek
To be effective team members, it is important for robots to understand the high-level behaviors of collocated humans. This is a challenging perceptual task when both the robots and people are in motion. In this paper, we describe an event-based model for multiple robots to automatically measure synchronous joint action of a group while both the robots and co-present humans are moving. We validated our model through an experiment where two people marched both synchronously and asynchronously, while being followed by two mobile robots. Our results suggest that our model accurately identifies synchronous motion, which can enable more adept human-robot collaboration.
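The paper's event-based model measures whether a group's movements are synchronized. As a rough illustration of the idea (not the authors' actual formulation), one can reduce each agent's motion to a list of event timestamps, such as footfalls detected from tracking data, and score synchrony as the fraction of events that co-occur within a small tolerance window; the function name and the tolerance value below are illustrative assumptions:

```python
def synchrony_index(events_a, events_b, tol=0.15):
    """Fraction of agent A's event timestamps (in seconds) that are matched
    by an event of agent B within `tol` seconds. A symmetric variant would
    average the score in both directions."""
    if not events_a:
        return 0.0
    matched = sum(
        1 for t in events_a
        if any(abs(t - u) <= tol for u in events_b)
    )
    return matched / len(events_a)

# Synchronous marching: footfalls of the two people nearly aligned.
sync = synchrony_index([0.0, 0.5, 1.0, 1.5], [0.02, 0.51, 0.98, 1.52])
# Asynchronous marching: footfalls offset by a quarter period.
async_ = synchrony_index([0.0, 0.5, 1.0, 1.5], [0.25, 0.75, 1.25, 1.75])
print(sync, async_)  # 1.0 0.0
```

A thresholded version of such an index is one simple way a robot could decide, online, whether the humans it follows are currently moving in sync.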
DOI: 10.1109/ROMAN.2015.7333671 (published 2015-11-23)
Citations: 22
Interface design and usability analysis for a robotic telepresence platform
Sina Radmard, AJung Moon, E. Croft
With the rise in popularity of robot-mediated teleconference (telepresence) systems, there is an increased demand for user interfaces that simplify control of the systems' mobility. This is especially true if the display/camera is to be controlled by users while remotely collaborating with another person. In this work, we compare the efficacy of a conventional keyboard and a non-contact, gesture-based, Leap interface in controlling the display/camera of a 7-DoF (degrees of freedom) telepresence platform for remote collaboration. Twenty subjects participated in our usability study where performance, ease of use, and workload were compared between the interfaces. While Leap allowed smoother and more continuous control of the platform, our results indicate that the keyboard provided superior performance in terms of task completion time, ease of use, and workload. We discuss the implications of novel interface designs for telepresence applications.
DOI: 10.1109/ROMAN.2015.7333643 (published 2015-11-23)
Citations: 15
Perceived robot capability
Elizabeth Cha, A. Dragan, S. Srinivasa
Robotics research often focuses on increasing robot capability. If end users do not perceive these increases, however, user acceptance may not improve. In this work, we explore the idea of perceived capability and how it relates to true capability, differentiating between physical and social capabilities. We present a framework that outlines their potential relationships, along with two user studies, on robot speed and speech, exploring these relationships. Our studies identify two possible consequences of the disconnect between the true and perceived capability: (1) under-perception: true improvements in capability may not lead to perceived improvements and (2) over-perception: true improvements in capability may lead to additional perceived improvements that do not actually exist.
DOI: 10.1109/ROMAN.2015.7333656 (published 2015-11-23)
Citations: 60
The influence of head size in mobile remote presence (MRP) educational robots
G. Gweon, Donghee Hong, Sunghee Kwon, Jeonghye Han
In this paper, we examined how the presentation of a remote participant (in our context the remote teacher) in a mobile remote presence (MRP) system affects social interaction, such as closeness and engagement. Using ROBOSEM, a MRP robot, we explored the effect of the presentation of the remote teacher's head size shown on ROBOSEM's screen at three different levels: small, medium, and large. We hypothesized that a medium sized head of the remote teacher shown on the MRP system would be better than a small or large sized head in terms of closeness, engagement, and learning. Our preliminary study results suggest that the size of a remote teacher's head may have an impact on “students' perception of the remote teacher's closeness” and on “students' engagement”. However, we did not observe any difference in terms of “learning”.
DOI: 10.1109/ROMAN.2015.7333564 (published 2015-11-23)
Citations: 2
Talking-Ally: What is the future of robot's utterance generation?
Hitomi Matsushita, Yohei Kurata, P. R. D. De Silva, M. Okada
It remains an enormous challenge within the HRI community to make a significant contribution to the development of a robot's utterance generation mechanism. How does one actually go about contributing to, and predicting the future of, robot utterance generation? This motivates our proposed utterance generation approach, which utilizes both addressivity and hearership. The novel Talking-Ally platform is capable of producing an utterance (toward addressivity) by utilizing the state of the hearer's behaviors (eye-gaze information) to persuade the user (states of hearership) through dynamic interaction. Moreover, the robot can manipulate modality, turn-initial, and entrust behaviors to increase the liveliness of conversations, facilitated by shifting the direction of the conversation and maintaining the hearer's engagement. Our experiment focuses on evaluating how interactive users engage with the utterance generation approach (performance) and the persuasive power of the robot's communication within dynamic interactions.
DOI: 10.1109/ROMAN.2015.7333603 (published 2015-11-23)
Citations: 3
Robot watchfulness hinders learning performance
Jonathan S. Herberg, S. Feller, Ilker Yengin, Martin Saerbeck
Educational technological applications, such as computerized learning environments and robot tutors, are often programmed to provide social cues for the purposes of facilitating natural interaction and enhancing productive outcomes. However, there can be potential costs to social interactions that could run counter to such goals. Here, we present an experiment testing the impact of a watchful versus non-watchful robot tutor on children's language-learning effort and performance. Across two interaction sessions, children learned French and Latin rules from a robot tutor and filled in worksheets applying the rules to translate phrases. Results indicate better performance on the worksheets in the session in which the robot looked away from, as compared to the session it looked toward the child, as the child was filling in the worksheets. This was the case in particular for the more difficult worksheet items. These findings highlight the need for careful implementation of social robot behaviors to avoid counterproductive effects.
DOI: 10.1109/ROMAN.2015.7333620 (published 2015-11-23)
Citations: 23
Online speech-driven head motion generating system and evaluation on a tele-operated robot
Kurima Sakai, C. Ishi, T. Minato, H. Ishiguro
We developed a tele-operated robot system where the head motions of the robot are controlled by combining those of the operator with the ones which are automatically generated from the operator's voice. The head motion generation is based on dialogue act functions which are estimated from linguistic and prosodic information extracted from the speech signal. The proposed system was evaluated through an experiment where participants interact with a tele-operated robot. Subjective scores indicated the effectiveness of the proposed head motion generation system, even under limitations in the dialogue act estimation.
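The abstract describes generating head motions from dialogue-act functions estimated from linguistic and prosodic features of the operator's speech. As a minimal sketch of this kind of rule, one could map the pitch (F0) contour of an utterance-final segment to a head motion; the function name, the 10% thresholds, and the motion labels below are illustrative assumptions, not the paper's actual system:

```python
def head_motion_for_segment(f0_start, f0_end, is_utterance_final):
    """Pick a head motion from a short speech segment's pitch contour (Hz).
    A falling pitch at an utterance end is treated as an assertion -> nod;
    a rising pitch suggests a question -> tilt; otherwise hold still."""
    falling = f0_end < f0_start * 0.9   # more than a 10% pitch drop
    rising = f0_end > f0_start * 1.1    # more than a 10% pitch rise
    if is_utterance_final and falling:
        return "nod"
    if is_utterance_final and rising:
        return "tilt"
    return "hold"

print(head_motion_for_segment(220.0, 180.0, True))   # nod
print(head_motion_for_segment(200.0, 240.0, True))   # tilt
```

The actual system would blend such generated motions with the operator's measured head motions before sending commands to the robot's neck joints.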
DOI: 10.1109/ROMAN.2015.7333610 (published 2015-11-23)
Citations: 22
Inferring affective states from observation of a robot's simple movements
Genta Yoshioka, Takafumi Sakamoto, Yugo Takeuchi
This paper reports an analytic finding in which humans inferred the emotional states of a simple, flat robot that only moves autonomously on a floor in all directions based on Russell's circumplex model of affect that depends on human's spatial position. We observed the physical interaction between humans and a robot through an experiment where our participants seek a treasure in the given field, and the robot expresses its affective state by movements. This result will contribute to the basic design of HRI. The robot only showed its internal state using its simple movements.
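Russell's circumplex model places affective states on two axes, valence and arousal. As a hedged sketch of how simple movement features could be mapped onto these axes (the feature choices here, speed for arousal and approach/withdrawal for valence, are illustrative assumptions rather than the paper's exact mapping):

```python
def circumplex_quadrant(speed, approach, max_speed=1.0):
    """Map movement features to a Russell-circumplex quadrant label.
    speed: robot speed in m/s, clamped to [0, max_speed] -> arousal.
    approach: +1 if moving toward the person, -1 if away -> valence."""
    arousal = min(speed / max_speed, 1.0) * 2 - 1   # scale to [-1, 1]
    valence = approach
    if valence >= 0 and arousal >= 0:
        return "excited/happy"      # high arousal, positive valence
    if valence < 0 and arousal >= 0:
        return "angry/afraid"       # high arousal, negative valence
    if valence < 0:
        return "sad/bored"          # low arousal, negative valence
    return "calm/content"           # low arousal, positive valence

print(circumplex_quadrant(0.9, +1))  # excited/happy
print(circumplex_quadrant(0.1, -1))  # sad/bored
```

An observer study like the paper's effectively tests the inverse mapping: whether humans watching such movements recover the intended quadrant.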
DOI: 10.1109/ROMAN.2015.7333582 (published 2015-11-23)
Citations: 6
Visual pointing gestures for bi-directional human robot interaction in a pick-and-place task
C. P. Quintero, R. T. Fomena, Mona Gridseth, Martin Jägersand
This paper explores visual pointing gestures for two-way nonverbal communication when interacting with a robot arm. Such non-verbal instruction is common when humans communicate spatial directions and actions while collaboratively performing manipulation tasks. Using 3D RGB-D data, we compare human-human and human-robot interaction for solving a pick-and-place task. In the human-human interaction, we study both pointing and other types of gestures performed by humans in a collaborative task. For the human-robot interaction, we design a system that allows the user to interact with a 7-DOF robot arm using gestures for selecting, picking, and dropping objects at different locations. Bi-directional confirmation gestures allow the robot (or human) to verify that the right object is selected. We perform experiments where 8 human subjects collaborate with the robot to manipulate ordinary household objects on a tabletop. Without confirmation feedback, selection accuracy was 70-90% for both humans and the robot. With feedback through confirmation gestures, both humans and our vision-robotic system could perform the task accurately every time (100%). Finally, to illustrate our gesture interface in a real application, we let a human instruct our robot to make a pizza by selecting different ingredients.
DOI: 10.1109/ROMAN.2015.7333604 (published 2015-11-23)
Citations: 28
Investigating the effects of robot behavior and attitude towards technology on social human-robot interactions
V. Nitsch, Thomas Glassen
Many envision a future in which personal service robots share our homes and take part in our daily lives. These robots should possess a certain “social intelligence”, so that people are willing, if not eager, to interact with them. In this endeavor, applied psychologists and roboticists have conducted numerous studies to identify the factors that affect social interactions between humans and robots, both positively and negatively. In order to ascertain the extent to which the social human-robot interaction might be influenced by robot behavior and a person's attitude towards technology, an experiment was conducted using the UG paradigm, in which participants (N=48) interacted with a robot, which displayed either animated or apathetic behavior. The results suggest that although the interaction with a robot displaying animated behavior is overall rated more favorably, people may nevertheless act differently towards such robots, depending on their perceived technological competence and their enthusiasm for technology.
DOI: 10.1109/ROMAN.2015.7333560 (published 2015-11-23)
Citations: 20