Joint action perception to enable fluent human-robot teamwork
Pub Date: 2015-11-23 | DOI: 10.1109/ROMAN.2015.7333671
T. Iqbal, Michael J. Gonzales, L. Riek
To be effective team members, it is important for robots to understand the high-level behaviors of collocated humans. This is a challenging perceptual task when both the robots and people are in motion. In this paper, we describe an event-based model for multiple robots to automatically measure synchronous joint action of a group while both the robots and co-present humans are moving. We validated our model through an experiment where two people marched both synchronously and asynchronously, while being followed by two mobile robots. Our results suggest that our model accurately identifies synchronous motion, which can enable more adept human-robot collaboration.
{"title":"Joint action perception to enable fluent human-robot teamwork","authors":"T. Iqbal, Michael J. Gonzales, L. Riek","doi":"10.1109/ROMAN.2015.7333671","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333671","url":null,"abstract":"To be effective team members, it is important for robots to understand the high-level behaviors of collocated humans. This is a challenging perceptual task when both the robots and people are in motion. In this paper, we describe an event-based model for multiple robots to automatically measure synchronous joint action of a group while both the robots and co-present humans are moving. We validated our model through an experiment where two people marched both synchronously and asynchronously, while being followed by two mobile robots. Our results suggest that our model accurately identifies synchronous motion, which can enable more adept human-robot collaboration.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":" 9","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113949559","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Interface design and usability analysis for a robotic telepresence platform
Pub Date: 2015-11-23 | DOI: 10.1109/ROMAN.2015.7333643
Sina Radmard, AJung Moon, E. Croft
With the rise in popularity of robot-mediated teleconference (telepresence) systems, there is an increased demand for user interfaces that simplify control of the systems' mobility. This is especially true if the display/camera is to be controlled by users while remotely collaborating with another person. In this work, we compare the efficacy of a conventional keyboard and a non-contact, gesture-based Leap interface in controlling the display/camera of a 7-DoF (degrees of freedom) telepresence platform for remote collaboration. Twenty subjects participated in our usability study, in which performance, ease of use, and workload were compared between the interfaces. While the Leap interface allowed smoother and more continuous control of the platform, our results indicate that the keyboard provided superior performance in terms of task completion time, ease of use, and workload. We discuss the implications of novel interface designs for telepresence applications.
{"title":"Interface design and usability analysis for a robotic telepresence platform","authors":"Sina Radmard, AJung Moon, E. Croft","doi":"10.1109/ROMAN.2015.7333643","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333643","url":null,"abstract":"With the rise in popularity of robot-mediated teleconference (telepresence) systems, there is an increased demand for user interfaces that simplify control of the systems' mobility. This is especially true if the display/camera is to be controlled by users while remotely collaborating with another person. In this work, we compare the efficacy of a conventional keyboard and a non-contact, gesture-based, Leap interface in controlling the display/camera of a 7-DoF (degrees of freedom) telepresence platform for remote collaboration. Twenty subjects participated in our usability study where performance, ease of use, and workload were compared between the interfaces. While Leap allowed smoother and more continuous control of the platform, our results indicate that the keyboard provided superior performance in terms of task completion time, ease of use, and workload. We discuss the implications of novel interface designs for telepresence applications.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"475 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116521344","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Perceived robot capability
Pub Date: 2015-11-23 | DOI: 10.1109/ROMAN.2015.7333656
Elizabeth Cha, A. Dragan, S. Srinivasa
Robotics research often focuses on increasing robot capability. If end users do not perceive these increases, however, user acceptance may not improve. In this work, we explore the idea of perceived capability and how it relates to true capability, differentiating between physical and social capabilities. We present a framework that outlines their potential relationships, along with two user studies on robot speed and speech that explore these relationships. Our studies identify two possible consequences of the disconnect between true and perceived capability: (1) under-perception: true improvements in capability may not lead to perceived improvements, and (2) over-perception: true improvements in capability may lead to additional perceived improvements that do not actually exist.
{"title":"Perceived robot capability","authors":"Elizabeth Cha, A. Dragan, S. Srinivasa","doi":"10.1109/ROMAN.2015.7333656","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333656","url":null,"abstract":"Robotics research often focuses on increasing robot capability. If end users do not perceive these increases, however, user acceptance may not improve. In this work, we explore the idea of perceived capability and how it relates to true capability, differentiating between physical and social capabilities. We present a framework that outlines their potential relationships, along with two user studies, on robot speed and speech, exploring these relationships. Our studies identify two possible consequences of the disconnect between the true and perceived capability: (1) under-perception: true improvements in capability may not lead to perceived improvements and (2) over-perception: true improvements in capability may lead to additional perceived improvements that do not actually exist.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134320584","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The influence of head size in mobile remote presence (MRP) educational robots
Pub Date: 2015-11-23 | DOI: 10.1109/ROMAN.2015.7333564
G. Gweon, Donghee Hong, Sunghee Kwon, Jeonghye Han
In this paper, we examined how the presentation of a remote participant (in our context, the remote teacher) in a mobile remote presence (MRP) system affects social interaction, such as closeness and engagement. Using ROBOSEM, an MRP robot, we explored the effect of the remote teacher's head size, shown on ROBOSEM's screen, at three different levels: small, medium, and large. We hypothesized that a medium-sized head of the remote teacher shown on the MRP system would be better than a small or large one in terms of closeness, engagement, and learning. Our preliminary study results suggest that the size of a remote teacher's head may have an impact on “students' perception of the remote teacher's closeness” and on “students' engagement”. However, we did not observe any difference in terms of “learning”.
{"title":"The influence of head size in mobile remote presence (MRP) educational robots","authors":"G. Gweon, Donghee Hong, Sunghee Kwon, Jeonghye Han","doi":"10.1109/ROMAN.2015.7333564","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333564","url":null,"abstract":"In this paper, we examined how the presentation of a remote participant (in our context the remote teacher) in a mobile remote presence (MRP) system affects social interaction, such as closeness and engagement. Using ROBOSEM, a MRP robot, we explored the effect of the presentation of the remote teacher's head size shown on ROBOSEM's screen at three different levels: small, medium, and large. We hypothesized that a medium sized head of the remote teacher shown on the MRP system would be better than a small or large sized head in terms of closeness, engagement, and learning. Our preliminary study results suggest that the size of a remote teacher's head may have an impact on “students' perception of the remote teacher's closeness” and on “students' engagement”. However, we did not observe any difference in terms of “learning”.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115088928","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Talking-Ally: What is the future of robot's utterance generation?
Pub Date: 2015-11-23 | DOI: 10.1109/ROMAN.2015.7333603
Hitomi Matsushita, Yohei Kurata, P. R. D. De Silva, M. Okada
It remains an enormous challenge within the HRI community to make a significant contribution to the development of a robot's utterance generation mechanism. How does one actually go about contributing to, and predicting the future of, robot utterance generation? This motivates us to propose an utterance generation approach for robots that draws on both addressivity and hearership. Our novel platform, Talking-Ally, produces utterances (toward addressivity) based on the state of the hearer's behaviors (eye-gaze information) in order to persuade the user (states of hearership) through dynamic interaction. Moreover, the robot can manipulate modality, turn-initial, and entrust behaviors to increase the liveliness of conversations, shifting the direction of the conversation and maintaining the hearer's engagement. Our experiment evaluates how interactive users engage with the utterance generation approach (performance) and the persuasive power of the robot's communication within dynamic interactions.
{"title":"Talking-Ally: What is the future of robot's utterance generation?","authors":"Hitomi Matsushita, Yohei Kurata, P. R. D. De Silva, M. Okada","doi":"10.1109/ROMAN.2015.7333603","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333603","url":null,"abstract":"It is still an enormous challenge within the HRI community to make a significant contribution to the development of a robot's utterance generation mechanism. How does one actually go about contributing and predicting the future of robot utterance generation? Since, our motivation to propose a robot's utterance generation approach by utilizing both addressivity and hearership. Novel platform of Talking-Ally is capable of producing an utterance (toward addressivity) by utilizing the state of the hearer's behaviors (eye-gaze information) to persuade the user (states of hearership) through dynamic interaction. Moreover, the robot has the potential to manipulate modality, turn-initial, and entrust behaviors to increase the liveliness of conversations, which are facilitated by shifting the direction of the conversation and maintaining the hearer's engagement in the conversation. Our experiment focuses on evaluating how interactive users engage with an utterance generation approach (performance) and the persuasive power of robot's communication within dynamic interactions.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114377969","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Robot watchfulness hinders learning performance
Pub Date: 2015-11-23 | DOI: 10.1109/ROMAN.2015.7333620
Jonathan S. Herberg, S. Feller, Ilker Yengin, Martin Saerbeck
Educational technological applications, such as computerized learning environments and robot tutors, are often programmed to provide social cues in order to facilitate natural interaction and enhance productive outcomes. However, social interactions can carry costs that run counter to these goals. Here, we present an experiment testing the impact of a watchful versus non-watchful robot tutor on children's language-learning effort and performance. Across two interaction sessions, children learned French and Latin rules from a robot tutor and filled in worksheets applying the rules to translate phrases. Results indicate better performance on the worksheets in the session in which the robot looked away from the child while the child was filling in the worksheets, as compared to the session in which it looked toward the child. This was the case in particular for the more difficult worksheet items. These findings highlight the need for careful implementation of social robot behaviors to avoid counterproductive effects.
{"title":"Robot watchfulness hinders learning performance","authors":"Jonathan S. Herberg, S. Feller, Ilker Yengin, Martin Saerbeck","doi":"10.1109/ROMAN.2015.7333620","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333620","url":null,"abstract":"Educational technological applications, such as computerized learning environments and robot tutors, are often programmed to provide social cues for the purposes of facilitating natural interaction and enhancing productive outcomes. However, there can be potential costs to social interactions that could run counter to such goals. Here, we present an experiment testing the impact of a watchful versus non-watchful robot tutor on children's language-learning effort and performance. Across two interaction sessions, children learned French and Latin rules from a robot tutor and filled in worksheets applying the rules to translate phrases. Results indicate better performance on the worksheets in the session in which the robot looked away from, as compared to the session it looked toward the child, as the child was filling in the worksheets. This was the case in particular for the more difficult worksheet items. These findings highlight the need for careful implementation of social robot behaviors to avoid counterproductive effects.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114507216","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Online speech-driven head motion generating system and evaluation on a tele-operated robot
Pub Date: 2015-11-23 | DOI: 10.1109/ROMAN.2015.7333610
Kurima Sakai, C. Ishi, T. Minato, H. Ishiguro
We developed a tele-operated robot system in which the robot's head motions are controlled by combining the operator's head motions with motions automatically generated from the operator's voice. The head motion generation is based on dialogue act functions estimated from linguistic and prosodic information extracted from the speech signal. The proposed system was evaluated through an experiment in which participants interacted with the tele-operated robot. Subjective scores indicated the effectiveness of the proposed head motion generation system, even under the limitations of the dialogue act estimation.
{"title":"Online speech-driven head motion generating system and evaluation on a tele-operated robot","authors":"Kurima Sakai, C. Ishi, T. Minato, H. Ishiguro","doi":"10.1109/ROMAN.2015.7333610","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333610","url":null,"abstract":"We developed a tele-operated robot system where the head motions of the robot are controlled by combining those of the operator with the ones which are automatically generated from the operator's voice. The head motion generation is based on dialogue act functions which are estimated from linguistic and prosodic information extracted from the speech signal. The proposed system was evaluated through an experiment where participants interact with a tele-operated robot. Subjective scores indicated the effectiveness of the proposed head motion generation system, even under limitations in the dialogue act estimation.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123136425","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Inferring affective states from observation of a robot's simple movements
Pub Date: 2015-11-23 | DOI: 10.1109/ROMAN.2015.7333582
Genta Yoshioka, Takafumi Sakamoto, Yugo Takeuchi
This paper reports an analytic finding that humans inferred the emotional states of a simple, flat robot that moves autonomously on a floor in all directions; the robot's affective states are based on Russell's circumplex model of affect and depend on the human's spatial position. We observed the physical interaction between humans and the robot through an experiment in which participants searched for a treasure in a given field while the robot expressed its affective state solely through its simple movements. This result will contribute to the basic design of HRI.
{"title":"Inferring affective states from observation of a robot's simple movements","authors":"Genta Yoshioka, Takafumi Sakamoto, Yugo Takeuchi","doi":"10.1109/ROMAN.2015.7333582","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333582","url":null,"abstract":"This paper reports an analytic finding in which humans inferred the emotional states of a simple, flat robot that only moves autonomously on a floor in all directions based on Russell's circumplex model of affect that depends on human's spatial position. We observed the physical interaction between humans and a robot through an experiment where our participants seek a treasure in the given field, and the robot expresses its affective state by movements. This result will contribute to the basic design of HRI. The robot only showed its internal state using its simple movements.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124966321","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Visual pointing gestures for bi-directional human robot interaction in a pick-and-place task
Pub Date: 2015-11-23 | DOI: 10.1109/ROMAN.2015.7333604
C. P. Quintero, R. T. Fomena, Mona Gridseth, Martin Jägersand
This paper explores visual pointing gestures for two-way nonverbal communication in interacting with a robot arm. Such non-verbal instruction is common when humans communicate spatial directions and actions while collaboratively performing manipulation tasks. Using 3D RGB-D sensing, we compare human-human and human-robot interaction for solving a pick-and-place task. In the human-human interaction, we study both pointing and other types of gestures performed by humans in a collaborative task. For the human-robot interaction, we design a system that allows the user to interact with a 7-DoF robot arm using gestures for selecting, picking, and dropping objects at different locations. Bi-directional confirmation gestures allow the robot (or human) to verify that the right object is selected. We perform experiments in which 8 human subjects collaborate with the robot to manipulate ordinary household objects on a tabletop. Without confirmation feedback, selection accuracy was 70-90% for both humans and the robot. With feedback through confirmation gestures, both humans and our vision-robotic system could perform the task accurately every time (100%). Finally, to illustrate our gesture interface in a real application, we let a human instruct our robot to make a pizza by selecting different ingredients.
{"title":"Visual pointing gestures for bi-directional human robot interaction in a pick-and-place task","authors":"C. P. Quintero, R. T. Fomena, Mona Gridseth, Martin Jägersand","doi":"10.1109/ROMAN.2015.7333604","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333604","url":null,"abstract":"This paper explores visual pointing gestures for two-way nonverbal communication for interacting with a robot arm. Such non-verbal instruction is common when humans communicate spatial directions and actions while collaboratively performing manipulation tasks. Using 3D RGBD we compare human-human and human-robot interaction for solving a pick-and-place task. In the human-human interaction we study both pointing and other types of gestures, performed by humans in a collaborative task. For the human-robot interaction we design a system that allows the user to interact with a 7DOF robot arm using gestures for selecting, picking and dropping objects at different locations. Bi-directional confirmation gestures allow the robot (or human) to verify that the right object is selected. We perform experiments where 8 human subjects collaborate with the robot to manipulate ordinary household objects on a tabletop. Without confirmation feedback selection accuracy was 70-90% for both humans and the robot. With feedback through confirmation gestures both humans and our vision-robotic system could perform the task accurately every time (100%). Finally to illustrate our gesture interface in a real application, we let a human instruct our robot to make a pizza by selecting different ingredients.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125408470","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Investigating the effects of robot behavior and attitude towards technology on social human-robot interactions
Pub Date: 2015-11-23 | DOI: 10.1109/ROMAN.2015.7333560
V. Nitsch, Thomas Glassen
Many envision a future in which personal service robots share our homes and take part in our daily lives. These robots should possess a certain “social intelligence”, so that people are willing, if not eager, to interact with them. In this endeavor, applied psychologists and roboticists have conducted numerous studies to identify the factors that affect social interactions between humans and robots, both positively and negatively. In order to ascertain the extent to which social human-robot interaction might be influenced by robot behavior and by a person's attitude towards technology, an experiment was conducted using the UG paradigm, in which participants (N=48) interacted with a robot that displayed either animated or apathetic behavior. The results suggest that although interaction with a robot displaying animated behavior is rated more favorably overall, people may nevertheless act differently towards such robots, depending on their perceived technological competence and their enthusiasm for technology.
{"title":"Investigating the effects of robot behavior and attitude towards technology on social human-robot interactions","authors":"V. Nitsch, Thomas Glassen","doi":"10.1109/ROMAN.2015.7333560","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333560","url":null,"abstract":"Many envision a future in which personal service robots share our homes and take part in our daily lives. These robots should possess a certain “social intelligence”, so that people are willing, if not eager, to interact with them. In this endeavor, applied psychologists and roboticists have conducted numerous studies to identify the factors that affect social interactions between humans and robots, both positively and negatively. In order to ascertain the extent to which the social human-robot interaction might be influenced by robot behavior and a person's attitude towards technology, an experiment was conducted using the UG paradigm, in which participants (N=48) interacted with a robot, which displayed either animated or apathetic behavior. The results suggest that although the interaction with a robot displaying animated behavior is overall rated more favorably, people may nevertheless act differently towards such robots, depending on their perceived technological competence and their enthusiasm for technology.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"83 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126196399","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}