
Latest publications from the 2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)

Joint action perception to enable fluent human-robot teamwork
T. Iqbal, Michael J. Gonzales, L. Riek
To be effective team members, robots must understand the high-level behaviors of collocated humans. This is a challenging perceptual task when both the robots and people are in motion. In this paper, we describe an event-based model for multiple robots to automatically measure synchronous joint action of a group while both the robots and co-present humans are moving. We validated our model through an experiment where two people marched both synchronously and asynchronously, while being followed by two mobile robots. Our results suggest that our model accurately identifies synchronous motion, which can enable more adept human-robot collaboration.
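The model is event-based: recurring motion events (such as footsteps) are detected for each agent, and synchrony is scored from how closely their timings align across agents. The paper does not provide code, so the sketch below is only a generic illustration of that idea; the threshold-crossing event detector and the tolerance-window matching score are assumptions, not the authors' actual measure.

```python
import numpy as np

def detect_events(signal, threshold):
    """Return indices where the motion signal crosses the threshold upward
    (a simple stand-in for step/gesture event detection)."""
    above = signal >= threshold
    return np.where(above[1:] & ~above[:-1])[0] + 1

def event_synchrony(events_a, events_b, tau):
    """Fraction of events in stream A matched by an event in stream B
    within +/- tau samples (a basic event-based synchrony score)."""
    if len(events_a) == 0:
        return 0.0
    matched = sum(np.any(np.abs(events_b - t) <= tau) for t in events_a)
    return matched / len(events_a)

# Two noisy periodic motion traces, e.g., vertical acceleration of two marchers.
t = np.linspace(0, 10, 1000)
a = np.sin(2 * np.pi * t) + 0.1 * np.random.randn(t.size)
b = np.sin(2 * np.pi * (t - 0.05)) + 0.1 * np.random.randn(t.size)  # small lag

score = event_synchrony(detect_events(a, 0.8), detect_events(b, 0.8), tau=15)
print(f"synchrony score: {score:.2f}")  # close to 1.0 for synchronous marching
```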
Citations: 22
Interface design and usability analysis for a robotic telepresence platform
Sina Radmard, AJung Moon, E. Croft
With the rise in popularity of robot-mediated teleconference (telepresence) systems, there is an increased demand for user interfaces that simplify control of the systems' mobility. This is especially true if the display/camera is to be controlled by users while they remotely collaborate with another person. In this work, we compare the efficacy of a conventional keyboard and a non-contact, gesture-based Leap interface in controlling the display/camera of a 7-DoF (degrees-of-freedom) telepresence platform for remote collaboration. Twenty subjects participated in our usability study, in which performance, ease of use, and workload were compared between the interfaces. While the Leap allowed smoother and more continuous control of the platform, our results indicate that the keyboard provided superior performance in terms of task completion time, ease of use, and workload. We discuss the implications of novel interface designs for telepresence applications.
Citations: 15
Perceived robot capability
Elizabeth Cha, A. Dragan, S. Srinivasa
Robotics research often focuses on increasing robot capability. If end users do not perceive these increases, however, user acceptance may not improve. In this work, we explore the idea of perceived capability and how it relates to true capability, differentiating between physical and social capabilities. We present a framework that outlines their potential relationships, along with two user studies, on robot speed and speech, that explore these relationships. Our studies identify two possible consequences of the disconnect between true and perceived capability: (1) under-perception: true improvements in capability may not lead to perceived improvements; and (2) over-perception: true improvements in capability may lead to additional perceived improvements that do not actually exist.
Citations: 60
Visual pointing gestures for bi-directional human robot interaction in a pick-and-place task
C. P. Quintero, R. T. Fomena, Mona Gridseth, Martin Jägersand
This paper explores visual pointing gestures for two-way nonverbal communication when interacting with a robot arm. Such non-verbal instruction is common when humans communicate spatial directions and actions while collaboratively performing manipulation tasks. Using 3D RGBD sensing, we compare human-human and human-robot interaction in solving a pick-and-place task. In the human-human interaction, we study both pointing and other types of gestures performed by humans in a collaborative task. For the human-robot interaction, we design a system that allows the user to interact with a 7-DOF robot arm using gestures for selecting, picking, and dropping objects at different locations. Bi-directional confirmation gestures allow the robot (or human) to verify that the right object is selected. We perform experiments in which 8 human subjects collaborate with the robot to manipulate ordinary household objects on a tabletop. Without confirmation feedback, selection accuracy was 70-90% for both humans and the robot. With feedback through confirmation gestures, both humans and our vision-robotic system could perform the task accurately every time (100%). Finally, to illustrate our gesture interface in a real application, we let a human instruct our robot to make a pizza by selecting different ingredients.
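A common way to implement this kind of pointing-based selection is to cast a ray through two tracked 3D skeleton joints and pick the object whose centroid lies closest to that ray. The sketch below illustrates the general technique on hypothetical elbow/hand joint positions and segmented object centroids from an RGB-D pipeline; it is not the authors' system.

```python
import numpy as np

def select_pointed_object(elbow, hand, object_centroids):
    """Pick the object whose centroid lies closest to the pointing ray
    defined by the elbow->hand direction (all inputs are 3D points)."""
    origin = np.asarray(hand, dtype=float)
    direction = origin - np.asarray(elbow, dtype=float)
    direction /= np.linalg.norm(direction)
    best_idx, best_dist = None, np.inf
    for i, c in enumerate(object_centroids):
        v = np.asarray(c, dtype=float) - origin
        along = np.dot(v, direction)
        if along <= 0:  # object is behind the hand; ignore it
            continue
        perp = np.linalg.norm(v - along * direction)  # distance to the ray
        if perp < best_dist:
            best_idx, best_dist = i, perp
    return best_idx, best_dist

# Hypothetical 3D points from an RGB-D skeleton and two tabletop objects.
elbow, hand = [0.0, 0.0, 0.0], [0.2, 0.0, 0.1]
objects = [[1.0, 0.05, 0.5], [1.0, 0.6, 0.5]]
idx, dist = select_pointed_object(elbow, hand, objects)
print(f"selected object {idx}, {dist:.2f} m off the ray")
```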
Citations: 28
Computational analysis of human-robot interactions through first-person vision: Personality and interaction experience
Oya Celiktutan, H. Gunes
In this paper, we analyse interactions with Nao, a small humanoid robot, from the viewpoint of human participants through an ego-centric camera placed on their forehead. We focus on human participants' and robot's personalities and their impact on the human-robot interactions. We automatically extract nonverbal cues (e.g., head movement) from first-person perspective and explore the relationship of nonverbal cues with participants' self-reported personality and their interaction experience. We generate two types of behaviours for the robot (i.e., extroverted vs. introverted) and examine how robot's personality and behaviour affect the findings. Significant correlations are obtained between the extroversion and agreeableness traits of the participants and the perceived enjoyment with the extroverted robot. Plausible relationships are also found between the measures of interaction experience and personality and the first-person vision features. We then use computational models to automatically predict the participants' personality traits from these features. Promising results are achieved for the traits of agreeableness, conscientiousness and extroversion.
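A typical pipeline for the prediction step regresses self-reported trait scores from per-participant cue statistics and evaluates with cross-validation. The sketch below illustrates this pattern on synthetic data with scikit-learn ridge regression; the feature count, trait scale, and data are assumptions for illustration, not the paper's actual feature set or model.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

# Hypothetical per-participant features from first-person video, e.g.,
# mean/std of head-motion magnitude, blur ratio, attention-shift rate.
rng = np.random.default_rng(0)
X = rng.normal(size=(33, 4))  # 33 participants, 4 nonverbal cues
y = X @ np.array([0.8, -0.3, 0.5, 0.0]) + rng.normal(scale=0.5, size=33)

# Leave-one-out cross-validated prediction of one trait score
# (e.g., self-reported extroversion), a common HRI evaluation setup.
model = Ridge(alpha=1.0)
pred = cross_val_predict(model, X, y, cv=len(y))
r = np.corrcoef(y, pred)[0, 1]
print(f"cross-validated correlation with self-reported trait: r = {r:.2f}")
```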
Citations: 31
Toward a better understanding of the communication cues involved in a human-robot object transfer
Mamoun Gharbi, Pierre-Vincent Paubel, A. Clodic, O. Carreras, R. Alami, J. Cellier
Handing over objects to humans (or taking objects from them) is a key capability for a service robot. Humans perform this action efficiently and naturally, and the purpose of studies on this topic is to bring human-robot handovers to an acceptable, efficient, and natural level. This paper deals with the cues that make a handover look as natural as possible; more precisely, we focus on where the robot should look while performing it. In this context we present a user study in which 33 volunteers judged video sequences showing either a human or a robot giving them an object. They were presented with sequences in which the agent (robot or human) exhibited different gaze behaviours and were asked to rate how natural each sequence felt. In addition to this subjective measure, the volunteers were equipped with an eye tracker, which gave us more accurate objective measures.
Citations: 25
Modeling dynamic scenes by one-shot 3D acquisition system for moving humanoid robot
R. Sagawa, Charles Malleson, M. Morisawa, K. Kaneko, F. Kanehiro, Y. Matsumoto, A. Hilton
For mobile robots, 3D acquisition is required to model the environment. Particularly for humanoid robots, a modeled environment is necessary to plan walking control. This environment can include both static objects, such as a ground surface with obstacles, and dynamic objects, such as a person moving around the robot. This paper proposes a system for a robot to obtain a sufficiently accurate shape of the environment for walking on a ground surface with obstacles, along with a method to detect dynamic objects in the modeled environment, which is necessary for the robot to react to sudden changes in the scene. The 3D acquisition is achieved by a projector-camera system mounted on the robot's head that uses a structured-light method to reconstruct the shapes of moving objects from a single frame. The acquired shapes are aligned and merged into a common coordinate system using the simultaneous localization and mapping method. Dynamic objects are detected as shapes that are inconsistent with the previous frames. Experiments were performed to evaluate the accuracy of the 3D acquisition and the robustness with regard to detecting dynamic objects when serving as the vision system of a humanoid robot.
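Once SLAM has aligned successive scans into the common map frame, "shapes inconsistent with the previous frames" can be found by flagging scan points that have no nearby counterpart in the accumulated static map. The sketch below shows that nearest-neighbour test on toy point clouds; the 5 cm threshold and the synthetic data are illustrative assumptions, not the paper's parameters.

```python
import numpy as np
from scipy.spatial import cKDTree

def split_dynamic_points(map_points, scan_points, max_dist=0.05):
    """Split an aligned scan: points with no static-map neighbour within
    max_dist metres are flagged as dynamic, the rest as static."""
    tree = cKDTree(map_points)
    dist, _ = tree.query(scan_points, k=1)
    dynamic_mask = dist > max_dist
    return scan_points[dynamic_mask], scan_points[~dynamic_mask]

# Toy example: a flat ground patch as the static map, plus a new scan in
# which one cluster of points (a person) was absent from earlier frames.
xy = np.random.rand(500, 2)
ground = np.column_stack([xy, np.zeros(500)])
person = np.random.rand(50, 3) * 0.2 + np.array([0.5, 0.5, 1.0])
scan = np.vstack([ground + np.random.randn(500, 3) * 0.005, person])

dynamic, static = split_dynamic_points(ground, scan)
print(f"{len(dynamic)} dynamic points, {len(static)} static points")
```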
Citations: 0
The influence of head size in mobile remote presence (MRP) educational robots
G. Gweon, Donghee Hong, Sunghee Kwon, Jeonghye Han
In this paper, we examined how the presentation of a remote participant (in our context, the remote teacher) in a mobile remote presence (MRP) system affects social interaction, such as closeness and engagement. Using ROBOSEM, an MRP robot, we explored the effect of the remote teacher's head size shown on ROBOSEM's screen at three different levels: small, medium, and large. We hypothesized that a medium-sized head shown on the MRP system would be better than a small or large one in terms of closeness, engagement, and learning. Our preliminary study results suggest that the size of a remote teacher's head may have an impact on "students' perception of the remote teacher's closeness" and on "students' engagement". However, we did not observe any difference in terms of "learning".
Citations: 2
Online speech-driven head motion generating system and evaluation on a tele-operated robot
Kurima Sakai, C. Ishi, T. Minato, H. Ishiguro
We developed a tele-operated robot system in which the robot's head motions are controlled by combining the operator's own head motions with motions automatically generated from the operator's voice. The head motion generation is based on dialogue act functions estimated from linguistic and prosodic information extracted from the speech signal. The proposed system was evaluated through an experiment in which participants interacted with a tele-operated robot. Subjective scores indicated the effectiveness of the proposed head motion generation system, even given the limitations of the dialogue act estimation.
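The paper's generator is driven by dialogue acts estimated from linguistic and prosodic cues. As a much simpler stand-in for the prosody-to-motion mapping, the sketch below converts short-time speech energy into per-frame head-pitch commands; the frame size, smoothing window, and nod range are illustrative assumptions, not the authors' rules.

```python
import numpy as np

def speech_to_nod_angles(wave, sr, frame_ms=50, max_nod_deg=15.0):
    """Map short-time speech energy to a head-pitch angle per frame:
    louder, emphasised speech drives a deeper nod. A toy stand-in for
    the paper's dialogue-act-based motion generation."""
    hop = int(sr * frame_ms / 1000)
    frames = wave[: len(wave) // hop * hop].reshape(-1, hop)
    energy = np.sqrt((frames ** 2).mean(axis=1))
    energy = energy / (energy.max() + 1e-9)           # normalise to [0, 1]
    smooth = np.convolve(energy, np.ones(5) / 5, mode="same")
    return -max_nod_deg * smooth                      # negative pitch = nod down

# 2 s of synthetic "speech": an amplitude-modulated 120 Hz tone at 16 kHz.
sr = 16000
t = np.linspace(0, 2, 2 * sr, endpoint=False)
wave = np.sin(2 * np.pi * 120 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 1.5 * t))

angles = speech_to_nod_angles(wave, sr)
print(f"{len(angles)} head-pitch commands, "
      f"range {angles.min():.1f} to {angles.max():.1f} deg")
```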
Citations: 22
The robot engine — Making the unity 3D game engine work for HRI
C. Bartneck, Marius Soucy, Kevin Fleuret, E. B. Sandoval
HRI is a multi-disciplinary research field, and integrating the range of expertise into a single project can be challenging. Enabling experts on human behavior to design fluent animations and behaviors for advanced robots is problematic, since the tools available for such robots are often at the prototype stage. We have built The Robot Engine (TRE), which uses the Unity 3D Game Engine to control robots. Unity 3D allows non-programmers to use a set of powerful animation and interaction design tools to visually program and animate robots. We review several animation techniques that are common in computer games and that could make the movements of robots more natural and convincing. We demonstrate the use of TRE with two different Arduino-based robot platforms and believe that it can easily be extended for use with other robots. We further believe that this unconventional integration of technologies has the potential to fully bring the expertise of interaction designers into the process of advanced human-robot interaction projects.
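TRE itself lives on the Unity/C# side; what ultimately reaches an Arduino-based robot is a stream of joint targets over a serial link. As a language-neutral illustration of that link (not TRE's actual wire protocol, which the paper does not specify), here is a hypothetical Python sketch using pyserial to stream keyframed angles at a fixed rate; the line-based "pan,tilt" format, port name, and baud rate are all assumptions.

```python
import time

import serial  # pyserial

def stream_animation(port, frames, fps=30):
    """Send a list of joint-angle tuples to an Arduino at a fixed rate.
    Hypothetical wire format: one comma-separated line of integer degrees
    per animation frame, e.g. "90,45\\n"."""
    with serial.Serial(port, 115200, timeout=1) as link:
        time.sleep(2)  # allow for the typical Arduino auto-reset on open
        for frame in frames:
            line = ",".join(str(int(a)) for a in frame) + "\n"
            link.write(line.encode("ascii"))
            time.sleep(1.0 / fps)

# A short nod keyframed as (pan, tilt) angles, played at 30 fps.
# "/dev/ttyUSB0" is an example port; adjust for the actual device.
nod = [(90, 90), (90, 80), (90, 70), (90, 80), (90, 90)]
stream_animation("/dev/ttyUSB0", nod)
```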
Citations: 37