Joint action perception to enable fluent human-robot teamwork
Pub Date: 2015-11-23 | DOI: 10.1109/ROMAN.2015.7333671
T. Iqbal, Michael J. Gonzales, L. Riek
To be effective team members, robots need to understand the high-level behaviors of collocated humans. This is a challenging perceptual task when both the robots and the people are in motion. In this paper, we describe an event-based model that lets multiple robots automatically measure the synchronous joint action of a group while both the robots and co-present humans are moving. We validated our model through an experiment in which two people marched both synchronously and asynchronously while being followed by two mobile robots. Our results suggest that the model accurately identifies synchronous motion, which can enable more adept human-robot collaboration.
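The abstract does not spell out how synchronous joint action is scored, so the sketch below illustrates only the general idea of an event-based synchrony measure (counting how often two agents' motion events co-occur within a short window). It is an assumption-laden illustration, not the authors' model; the event lists and tolerance value are hypothetical.

```python
# A minimal sketch of pairwise event synchrony, assuming each agent's motion
# has already been reduced to a list of discrete event timestamps (e.g. foot
# strikes detected from tracking data). Illustrative only; not the paper's model.
from typing import Sequence

def event_synchrony(events_a: Sequence[float],
                    events_b: Sequence[float],
                    tau: float = 0.15) -> float:
    """Fraction of events in A matched by an event in B within +/- tau seconds,
    averaged with the symmetric B-to-A fraction. Returns a value in [0, 1]."""
    def matched_fraction(src, ref):
        if not src:
            return 0.0
        hits = sum(1 for t in src if any(abs(t - r) <= tau for r in ref))
        return hits / len(src)
    return 0.5 * (matched_fraction(events_a, events_b) +
                  matched_fraction(events_b, events_a))

# Example: two marchers' foot-strike times; a score near 1 suggests synchronous marching.
print(event_synchrony([0.0, 0.5, 1.0, 1.5], [0.05, 0.52, 1.04, 1.56]))
```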
{"title":"Joint action perception to enable fluent human-robot teamwork","authors":"T. Iqbal, Michael J. Gonzales, L. Riek","doi":"10.1109/ROMAN.2015.7333671","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333671","url":null,"abstract":"To be effective team members, it is important for robots to understand the high-level behaviors of collocated humans. This is a challenging perceptual task when both the robots and people are in motion. In this paper, we describe an event-based model for multiple robots to automatically measure synchronous joint action of a group while both the robots and co-present humans are moving. We validated our model through an experiment where two people marched both synchronously and asynchronously, while being followed by two mobile robots. Our results suggest that our model accurately identifies synchronous motion, which can enable more adept human-robot collaboration.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":" 9","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113949559","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Interface design and usability analysis for a robotic telepresence platform
Pub Date: 2015-11-23 | DOI: 10.1109/ROMAN.2015.7333643
Sina Radmard, AJung Moon, E. Croft
With the rise in popularity of robot-mediated teleconference (telepresence) systems, there is an increased demand for user interfaces that simplify control of the systems' mobility. This is especially true when users must control the display/camera while remotely collaborating with another person. In this work, we compare the efficacy of a conventional keyboard and a non-contact, gesture-based Leap interface for controlling the display/camera of a 7-DoF (degrees of freedom) telepresence platform during remote collaboration. Twenty subjects participated in our usability study, in which performance, ease of use, and workload were compared between the interfaces. While the Leap interface allowed smoother and more continuous control of the platform, our results indicate that the keyboard provided superior performance in terms of task completion time, ease of use, and workload. We discuss the implications for novel interface designs for telepresence applications.
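As a rough illustration of why a gesture interface affords smoother, more continuous camera control than discrete key presses, here is a minimal sketch that maps a tracked hand displacement to a clamped velocity command with a dead zone. The sensor stream, gains, and thresholds are hypothetical; this is not the paper's control law and it does not use the Leap SDK.

```python
# A minimal sketch of mapping a tracked hand position to camera velocity commands
# with a dead zone. Hypothetical values; not the paper's actual interface.
import numpy as np

DEAD_ZONE_MM = 20.0      # ignore small hand jitter around the rest pose
GAIN = 0.005             # commanded speed per mm of hand displacement
MAX_SPEED = 0.2          # clamp the commanded speed

def hand_to_velocity(hand_pos_mm: np.ndarray, rest_pos_mm: np.ndarray) -> np.ndarray:
    """Map hand displacement from a rest pose to a 3-axis velocity command."""
    offset = hand_pos_mm - rest_pos_mm
    # zero out axes inside the dead zone so the camera holds still for small jitter
    offset = np.where(np.abs(offset) < DEAD_ZONE_MM, 0.0, offset)
    return np.clip(GAIN * offset, -MAX_SPEED, MAX_SPEED)

print(hand_to_velocity(np.array([60.0, 5.0, -40.0]), np.array([0.0, 0.0, 0.0])))
```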
{"title":"Interface design and usability analysis for a robotic telepresence platform","authors":"Sina Radmard, AJung Moon, E. Croft","doi":"10.1109/ROMAN.2015.7333643","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333643","url":null,"abstract":"With the rise in popularity of robot-mediated teleconference (telepresence) systems, there is an increased demand for user interfaces that simplify control of the systems' mobility. This is especially true if the display/camera is to be controlled by users while remotely collaborating with another person. In this work, we compare the efficacy of a conventional keyboard and a non-contact, gesture-based, Leap interface in controlling the display/camera of a 7-DoF (degrees of freedom) telepresence platform for remote collaboration. Twenty subjects participated in our usability study where performance, ease of use, and workload were compared between the interfaces. While Leap allowed smoother and more continuous control of the platform, our results indicate that the keyboard provided superior performance in terms of task completion time, ease of use, and workload. We discuss the implications of novel interface designs for telepresence applications.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"475 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116521344","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Perceived robot capability
Pub Date: 2015-11-23 | DOI: 10.1109/ROMAN.2015.7333656
Elizabeth Cha, A. Dragan, S. Srinivasa
Robotics research often focuses on increasing robot capability. If end users do not perceive these increases, however, user acceptance may not improve. In this work, we explore the idea of perceived capability and how it relates to true capability, differentiating between physical and social capabilities. We present a framework that outlines their potential relationships, along with two user studies (on robot speed and speech) that explore these relationships. Our studies identify two possible consequences of the disconnect between true and perceived capability: (1) under-perception: true improvements in capability may not lead to perceived improvements, and (2) over-perception: true improvements in capability may lead to additional perceived improvements that do not actually exist.
{"title":"Perceived robot capability","authors":"Elizabeth Cha, A. Dragan, S. Srinivasa","doi":"10.1109/ROMAN.2015.7333656","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333656","url":null,"abstract":"Robotics research often focuses on increasing robot capability. If end users do not perceive these increases, however, user acceptance may not improve. In this work, we explore the idea of perceived capability and how it relates to true capability, differentiating between physical and social capabilities. We present a framework that outlines their potential relationships, along with two user studies, on robot speed and speech, exploring these relationships. Our studies identify two possible consequences of the disconnect between the true and perceived capability: (1) under-perception: true improvements in capability may not lead to perceived improvements and (2) over-perception: true improvements in capability may lead to additional perceived improvements that do not actually exist.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134320584","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Visual pointing gestures for bi-directional human robot interaction in a pick-and-place task
Pub Date: 2015-11-23 | DOI: 10.1109/ROMAN.2015.7333604
C. P. Quintero, R. T. Fomena, Mona Gridseth, Martin Jägersand
This paper explores visual pointing gestures for two-way nonverbal communication when interacting with a robot arm. Such non-verbal instruction is common when humans communicate spatial directions and actions while collaboratively performing manipulation tasks. Using 3D RGBD sensing, we compare human-human and human-robot interaction for solving a pick-and-place task. In the human-human interaction, we study both pointing and other types of gestures performed by humans in a collaborative task. For the human-robot interaction, we design a system that allows the user to interact with a 7-DOF robot arm using gestures for selecting, picking, and dropping objects at different locations. Bi-directional confirmation gestures allow the robot (or human) to verify that the right object is selected. We perform experiments in which 8 human subjects collaborate with the robot to manipulate ordinary household objects on a tabletop. Without confirmation feedback, selection accuracy was 70-90% for both humans and the robot. With feedback through confirmation gestures, both humans and our vision-robotic system could perform the task accurately every time (100%). Finally, to illustrate our gesture interface in a real application, we let a human instruct our robot to make a pizza by selecting different ingredients.
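The selection step can be pictured as casting a pointing ray and intersecting it with the tabletop. The sketch below is one plausible way to do that from skeleton joints and known object centroids; the paper does not specify its pipeline at this level, so the geometry, joint choice, and thresholds are assumptions.

```python
# A minimal sketch of selecting a tabletop object from a pointing gesture:
# cast a ray from the elbow through the hand (both from a skeleton tracker),
# intersect it with the table plane, and pick the nearest object centroid.
# Illustrative only; not the paper's actual pipeline.
import numpy as np

def select_object(elbow, hand, table_z, objects, max_dist=0.15):
    """elbow/hand: 3D points (m); table_z: table height (m); objects: {name: centroid}."""
    elbow, hand = np.asarray(elbow, float), np.asarray(hand, float)
    direction = hand - elbow
    if abs(direction[2]) < 1e-6:        # ray parallel to the table: no intersection
        return None
    t = (table_z - elbow[2]) / direction[2]
    if t <= 0:                          # pointing away from the table
        return None
    hit = elbow + t * direction         # where the pointing ray meets the table plane
    best, best_d = None, max_dist
    for name, centroid in objects.items():
        d = np.linalg.norm(np.asarray(centroid, float)[:2] - hit[:2])
        if d < best_d:
            best, best_d = name, d
    return best

objects = {"cup": (0.40, 0.10, 0.75), "bowl": (0.55, -0.20, 0.75)}
print(select_object(elbow=(0.0, 0.0, 1.2), hand=(0.15, 0.04, 1.05),
                    table_z=0.75, objects=objects))   # -> "cup"
```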
{"title":"Visual pointing gestures for bi-directional human robot interaction in a pick-and-place task","authors":"C. P. Quintero, R. T. Fomena, Mona Gridseth, Martin Jägersand","doi":"10.1109/ROMAN.2015.7333604","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333604","url":null,"abstract":"This paper explores visual pointing gestures for two-way nonverbal communication for interacting with a robot arm. Such non-verbal instruction is common when humans communicate spatial directions and actions while collaboratively performing manipulation tasks. Using 3D RGBD we compare human-human and human-robot interaction for solving a pick-and-place task. In the human-human interaction we study both pointing and other types of gestures, performed by humans in a collaborative task. For the human-robot interaction we design a system that allows the user to interact with a 7DOF robot arm using gestures for selecting, picking and dropping objects at different locations. Bi-directional confirmation gestures allow the robot (or human) to verify that the right object is selected. We perform experiments where 8 human subjects collaborate with the robot to manipulate ordinary household objects on a tabletop. Without confirmation feedback selection accuracy was 70-90% for both humans and the robot. With feedback through confirmation gestures both humans and our vision-robotic system could perform the task accurately every time (100%). Finally to illustrate our gesture interface in a real application, we let a human instruct our robot to make a pizza by selecting different ingredients.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125408470","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Computational analysis of human-robot interactions through first-person vision: Personality and interaction experience
Pub Date: 2015-11-23 | DOI: 10.1109/ROMAN.2015.7333602
Oya Celiktutan, H. Gunes
In this paper, we analyse interactions with Nao, a small humanoid robot, from the viewpoint of human participants through an ego-centric camera placed on their forehead. We focus on the participants' and the robot's personalities and their impact on the human-robot interactions. We automatically extract nonverbal cues (e.g., head movement) from the first-person perspective and explore the relationship of these cues with participants' self-reported personality and their interaction experience. We generate two types of behaviours for the robot (extroverted vs. introverted) and examine how the robot's personality and behaviour affect the findings. Significant correlations are obtained between the extroversion and agreeableness traits of the participants and the perceived enjoyment with the extroverted robot. Plausible relationships are also found between the measures of interaction experience and personality and the first-person vision features. We then use computational models to automatically predict the participants' personality traits from these features. Promising results are achieved for the traits of agreeableness, conscientiousness and extroversion.
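The general recipe described here, i.e. extract simple head-movement statistics from first-person video and regress self-reported trait scores, can be sketched as follows. The features, the ridge regressor, and the random stand-in data are illustrative assumptions rather than the authors' pipeline.

```python
# A minimal sketch: summarize head movement into a small feature vector and
# regress a personality trait score. Not the authors' exact features or model.
import numpy as np
from sklearn.linear_model import Ridge

def head_motion_features(yaw_pitch_roll: np.ndarray) -> np.ndarray:
    """yaw_pitch_roll: (T, 3) head angles per frame -> a 9-dim feature vector."""
    vel = np.diff(yaw_pitch_roll, axis=0)                # frame-to-frame movement
    return np.concatenate([vel.mean(axis=0),             # average drift per axis
                           vel.std(axis=0),              # movement variability
                           np.abs(vel).sum(axis=0)])     # total amount of motion

rng = np.random.default_rng(0)
X = np.stack([head_motion_features(rng.normal(size=(300, 3))) for _ in range(18)])
y = rng.uniform(1, 5, size=18)                           # stand-in trait scores (1-5 scale)
model = Ridge(alpha=1.0).fit(X, y)
print(model.predict(X[:3]))
```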
{"title":"Computational analysis of human-robot interactions through first-person vision: Personality and interaction experience","authors":"Oya Celiktutan, H. Gunes","doi":"10.1109/ROMAN.2015.7333602","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333602","url":null,"abstract":"In this paper, we analyse interactions with Nao, a small humanoid robot, from the viewpoint of human participants through an ego-centric camera placed on their forehead. We focus on human participants' and robot's personalities and their impact on the human-robot interactions. We automatically extract nonverbal cues (e.g., head movement) from first-person perspective and explore the relationship of nonverbal cues with participants' self-reported personality and their interaction experience. We generate two types of behaviours for the robot (i.e., extroverted vs. introverted) and examine how robot's personality and behaviour affect the findings. Significant correlations are obtained between the extroversion and agreeable-ness traits of the participants and the perceived enjoyment with the extroverted robot. Plausible relationships are also found between the measures of interaction experience and personality and the first-person vision features. We then use computational models to automatically predict the participants' personality traits from these features. Promising results are achieved for the traits of agreeableness, conscientiousness and extroversion.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134185964","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Toward a better understanding of the communication cues involved in a human-robot object transfer
Pub Date: 2015-11-23 | DOI: 10.1109/ROMAN.2015.7333626
Mamoun Gharbi, Pierre-Vincent Paubel, A. Clodic, O. Carreras, R. Alami, J. Cellier
Handing over objects to humans (or taking objects from them) is a key capability for a service robot. Humans are efficient and natural when performing this action, and the purpose of studies on this topic is to bring human-robot handovers to an acceptable, efficient and natural level. This paper deals with the cues that make a handover look as natural as possible; more precisely, we focus on where the robot should look while performing it. In this context we present a user study, involving 33 volunteers, who judged video sequences in which either a human or a robot gives them an object. They were presented with sequences in which the agents (robot or human) exhibit different gaze behaviours, and were asked to rate how natural each sequence felt. In addition to this subjective measure, the volunteers were equipped with an eye tracker, which gave us more accurate objective measures.
{"title":"Toward a better understanding of the communication cues involved in a human-robot object transfer","authors":"Mamoun Gharbi, Pierre-Vincent Paubel, A. Clodic, O. Carreras, R. Alami, J. Cellier","doi":"10.1109/ROMAN.2015.7333626","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333626","url":null,"abstract":"Handing-over objects to humans (or taking objects from them) is a key capability for a service robot. Humans are efficient and natural while performing this action and the purpose of the studies on this topic is to bring human-robot handovers to an acceptable, efficient and natural level. This paper deals with the cues that allow to make a handover look as natural as possible, and more precisely we focus on where the robot should look while performing it. In this context we propose a user study, involving 33 volunteers, who judged video sequences where they see either a human or a robot giving them an object. They were presented with different sequences where the agents (robot or human) have different gaze behaviours, and were asked to give their feeling about the sequence naturalness. In addition to this subjective measure, the volunteers were equipped with an eye tracker which enabled us to have more accurate objective measures.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132705570","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Modeling dynamic scenes by one-shot 3D acquisition system for moving humanoid robot
Pub Date: 2015-11-23 | DOI: 10.1109/ROMAN.2015.7333628
R. Sagawa, Charles Malleson, M. Morisawa, K. Kaneko, F. Kanehiro, Y. Matsumoto, A. Hilton
For mobile robots, 3D acquisition is required to model the environment. For humanoid robots in particular, a modeled environment is necessary to plan walking control. This environment can include both static objects, such as a ground surface with obstacles, and dynamic objects, such as a person moving around the robot. This paper proposes a system for a robot to obtain a sufficiently accurate shape of the environment for walking on a ground surface with obstacles, together with a method to detect dynamic objects in the modeled environment, which is necessary for the robot to react to sudden changes in the scene. The 3D acquisition is achieved by a projector-camera system mounted on the robot head that uses a structured-light method to reconstruct the shapes of moving objects from a single frame. The acquired shapes are aligned and merged into a common coordinate system using simultaneous localization and mapping. Dynamic objects are detected as shapes that are inconsistent with the previous frames. Experiments were performed to evaluate the accuracy of the 3D acquisition and the robustness of dynamic object detection when the system serves as the vision system of a humanoid robot.
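One simple way to realize the dynamic-object step, i.e. flag newly acquired points that are inconsistent with the previously merged map, is a nearest-neighbour distance test against the accumulated static map. The sketch below illustrates that idea only; the threshold and data structures are assumptions, not the paper's implementation.

```python
# A minimal sketch: after registering a new frame into the map coordinate system,
# points far from everything previously merged are treated as dynamic-object
# candidates. Illustrative assumptions; not the paper's method.
import numpy as np
from scipy.spatial import cKDTree

def dynamic_mask(map_points: np.ndarray, frame_points: np.ndarray,
                 threshold: float = 0.05) -> np.ndarray:
    """Boolean mask over frame_points marking points with no map point
    within `threshold` metres, i.e. candidates for dynamic objects."""
    tree = cKDTree(map_points)            # static map built from previous frames
    dist, _ = tree.query(frame_points)    # nearest-map-point distance per new point
    return dist > threshold

static_map = np.random.rand(5000, 3)                  # stand-in for the merged ground/obstacle map
new_frame = np.vstack([np.random.rand(100, 3),        # points consistent with the map
                       np.random.rand(20, 3) + 5.0])  # a far-away cluster, e.g. a person
print(dynamic_mask(static_map, new_frame).sum(), "points flagged as dynamic")
```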
{"title":"Modeling dynamic scenes by one-shot 3D acquisition system for moving humanoid robot","authors":"R. Sagawa, Charles Malleson, M. Morisawa, K. Kaneko, F. Kanehiro, Y. Matsumoto, A. Hilton","doi":"10.1109/ROMAN.2015.7333628","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333628","url":null,"abstract":"For mobile robots, 3D acquisition is required to model the environment. Particularly for humanoid robots, a modeled environment is necessary to plan the walking control. This environment can include both static objects, such as a ground surface with obstacles, and dynamic objects, such as a person moving around the robot. This paper proposes a system for a robot to obtain a sufficiently accurate shape of the environment for walking on a ground surface with obstacles and a method to detect dynamic objects in the modeled environment, which is necessary for the robot to react to sudden changes in the scene. The 3D acquisition is achieved by a projector-camera system mounted on the robot head that uses a structured-light method to reconstruct the shapes of moving objects from a single frame. The acquired shapes are aligned and merged into a common coordinate system using the simultaneous localization and mapping method. Dynamic objects are detected as shapes that are inconsistent with the previous frames. Experiments were performed to evaluate the accuracy of the 3D acquisition and the robustness with regard to detecting dynamic objects when serving as the vision system of a humanoid robot.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133484644","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The influence of head size in mobile remote presence (MRP) educational robots
Pub Date: 2015-11-23 | DOI: 10.1109/ROMAN.2015.7333564
G. Gweon, Donghee Hong, Sunghee Kwon, Jeonghye Han
In this paper, we examined how the presentation of a remote participant (in our context, the remote teacher) in a mobile remote presence (MRP) system affects social interaction, such as closeness and engagement. Using ROBOSEM, an MRP robot, we explored the effect of the remote teacher's head size shown on ROBOSEM's screen at three levels: small, medium, and large. We hypothesized that a medium-sized head of the remote teacher shown on the MRP system would be better than a small or large one in terms of closeness, engagement, and learning. Our preliminary results suggest that the size of a remote teacher's head may have an impact on students' perception of the remote teacher's closeness and on students' engagement. However, we did not observe any difference in terms of learning.
{"title":"The influence of head size in mobile remote presence (MRP) educational robots","authors":"G. Gweon, Donghee Hong, Sunghee Kwon, Jeonghye Han","doi":"10.1109/ROMAN.2015.7333564","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333564","url":null,"abstract":"In this paper, we examined how the presentation of a remote participant (in our context the remote teacher) in a mobile remote presence (MRP) system affects social interaction, such as closeness and engagement. Using ROBOSEM, a MRP robot, we explored the effect of the presentation of the remote teacher's head size shown on ROBOSEM's screen at three different levels: small, medium, and large. We hypothesized that a medium sized head of the remote teacher shown on the MRP system would be better than a small or large sized head in terms of closeness, engagement, and learning. Our preliminary study results suggest that the size of a remote teacher's head may have an impact on “students' perception of the remote teacher's closeness” and on “students' engagement”. However, we did not observe any difference in terms of “learning”.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115088928","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Online speech-driven head motion generating system and evaluation on a tele-operated robot
Pub Date: 2015-11-23 | DOI: 10.1109/ROMAN.2015.7333610
Kurima Sakai, C. Ishi, T. Minato, H. Ishiguro
We developed a tele-operated robot system in which the head motions of the robot are controlled by combining the operator's own head motions with motions automatically generated from the operator's voice. The head motion generation is based on dialogue act functions, which are estimated from linguistic and prosodic information extracted from the speech signal. The proposed system was evaluated through an experiment in which participants interacted with a tele-operated robot. Subjective scores indicated the effectiveness of the proposed head motion generation, even given the limitations of the dialogue act estimation.
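A heavily simplified version of speech-driven head motion is to trigger a nod whenever the operator's voice shows a short energy burst. The paper additionally uses estimated dialogue acts and prosody, so the energy-only trigger below is an assumption made purely for illustration.

```python
# A minimal sketch: detect short-term energy onsets in the operator's voice and
# emit a nod command at each one. Not the paper's dialogue-act-based generator.
import numpy as np

def nod_times(signal: np.ndarray, sr: int, frame: float = 0.05, k: float = 2.0):
    """Return times (s) where short-term energy rises above k * median energy."""
    hop = int(sr * frame)
    energy = np.array([np.sum(signal[i:i + hop] ** 2)
                       for i in range(0, len(signal) - hop, hop)])
    threshold = k * np.median(energy)
    rising = (energy[1:] > threshold) & (energy[:-1] <= threshold)  # onsets only
    return (np.where(rising)[0] + 1) * frame

sr = 16000
t = np.linspace(0, 2.0, 2 * sr, endpoint=False)
speech = 0.02 * np.random.randn(t.size)                              # background noise
speech[int(0.5 * sr):int(0.7 * sr)] += np.sin(2 * np.pi * 150 * t[:int(0.2 * sr)])  # a voiced burst
print(nod_times(speech, sr))   # -> roughly [0.5], where a nod would be commanded
```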
{"title":"Online speech-driven head motion generating system and evaluation on a tele-operated robot","authors":"Kurima Sakai, C. Ishi, T. Minato, H. Ishiguro","doi":"10.1109/ROMAN.2015.7333610","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333610","url":null,"abstract":"We developed a tele-operated robot system where the head motions of the robot are controlled by combining those of the operator with the ones which are automatically generated from the operator's voice. The head motion generation is based on dialogue act functions which are estimated from linguistic and prosodic information extracted from the speech signal. The proposed system was evaluated through an experiment where participants interact with a tele-operated robot. Subjective scores indicated the effectiveness of the proposed head motion generation system, even under limitations in the dialogue act estimation.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123136425","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The robot engine — Making the Unity 3D game engine work for HRI
Pub Date: 2015-11-23 | DOI: 10.1109/ROMAN.2015.7333561
C. Bartneck, Marius Soucy, Kevin Fleuret, E. B. Sandoval
HRI is a multi-disciplinary research field, and integrating the range of expertise into a single project can be challenging. Enabling experts on human behavior to design fluent animations and behaviors for advanced robots is problematic, since the tools available for such robots are often at the prototype stage. We have built The Robot Engine (TRE), based on the Unity 3D game engine, to control robots from Unity 3D. Unity 3D allows non-programmers to use a set of powerful animation and interaction design tools to visually program and animate robots. We review several animation techniques that are common in computer games and that could make the movements of robots more natural and convincing. We demonstrate the use of TRE with two different Arduino-based robot platforms and believe that it can easily be extended for use with other robots. We further believe that this unconventional integration of technologies has the potential to fully bring the expertise of interaction designers into advanced human-robot interaction projects.
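TRE itself lives inside Unity 3D (and would be written against Unity's C# scripting API), so the sketch below only illustrates the underlying bridging idea in Python: stream animation keyframes as servo angles to an Arduino-based robot over a serial link. The port name, baud rate, and line protocol are hypothetical.

```python
# A conceptual sketch (not part of TRE): stream keyframed servo angles to an
# Arduino-based robot over serial, pacing frames to the animation frame rate.
import time
import serial  # pyserial

def play_animation(port: str, keyframes, fps: float = 30.0):
    """keyframes: list of lists of servo angles in degrees, one list per frame."""
    with serial.Serial(port, 115200, timeout=1) as link:
        for frame in keyframes:
            # hypothetical line protocol: comma-separated angles, newline-terminated
            link.write((",".join(f"{a:.1f}" for a in frame) + "\n").encode())
            time.sleep(1.0 / fps)  # pace frames so the robot follows the animation timing

# A two-servo "nod" animation: ease down and back up over a handful of frames.
nod = [[90, 90], [90, 80], [90, 70], [90, 80], [90, 90]]
# play_animation("/dev/ttyUSB0", nod)   # uncomment with a real robot attached
```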
{"title":"The robot engine — Making the unity 3D game engine work for HRI","authors":"C. Bartneck, Marius Soucy, Kevin Fleuret, E. B. Sandoval","doi":"10.1109/ROMAN.2015.7333561","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333561","url":null,"abstract":"HRI is a multi-disciplinary research field and integrating the range of expertise into a single project can be challenging. Enabling experts on human behavior to design fluent animations and behaviors for advanced robots is problematic, since the tools available for such robots are often in their prototype stage. We have built The Robot Engine (TRE) based on the Unity 3D Game Engine to control robots with Unity 3D. Unity 3D allows non-programmers to use a set of powerful animation and interaction design tools to visually program and animate robots. We review several animation techniques that are common in computer games and that could make the movements of robots more natural and convincing. We demonstrate the use of TRE with two different Arduino based robot platforms and believe that it can easily be extended for use with other robots. We further believe that this unconventional integration of technologies has the potential to fully bring the expertise of interaction designers into the process of advanced human-robot interaction projects.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128414468","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}