Creating a Shared Reality with Robots
M. Faizan, Hassan Amel, A. Cleaver, J. Sinapov
Pub Date: 2019-03-01 | DOI: 10.1109/HRI.2019.8673191 | Pages: 614-615
This paper outlines the system design, capabilities, and potential applications of an Augmented Reality (AR) framework developed for Robot Operating System (ROS) powered robots. The goal of this framework is to enable high-level human-robot collaboration and interaction. It allows users to visualize the robot's state in intuitive modalities overlaid onto the real world and to interact with AR objects as a means of communicating with the robot, thereby creating a shared environment in which humans and robots can interact and collaborate.

Apprentice of Oz: Human in the Loop System for Conversational Robot Wizard of Oz
Ahnjae Shin, J. Oh, Joonhwan Lee
Pub Date: 2019-03-01 | DOI: 10.1109/HRI.2019.8673205 | Pages: 516-517
Conversational robots that exhibit human-level abilities in physical and verbal conversation are widely used in human-robot interaction studies, along with the Wizard of Oz protocol. However, even with the protocol, manipulating the robot to move and talk is cognitively demanding. A preliminary study with a humanoid was conducted to observe the difficulties wizards experienced in each of four subtasks: attention, decision, execution, and reflection. Apprentice of Oz is a human-in-the-loop Wizard of Oz system designed to reduce the wizard's cognitive load in each subtask; each task is co-performed by the wizard and the system. This paper describes the system design from the perspective of each subtask.

Authoring Robot Presentation for Promoting Reflection on Presentation Scenario
Mitsuhiro Goto, Tatsuya Ishino, Keisuke Inazawa, N. Matsumura, Tadashi Nunobiki, A. Kashihara
Pub Date: 2019-03-01 | DOI: 10.1109/HRI.2019.8673278 | Pages: 660-661
In presentations, presenters are expected to use non-verbal behavior involving face direction and gesture, which is important for promoting the audience's understanding. However, it is not simple for presenters to use appropriate non-verbal behavior depending on the presentation context. To address this issue, this paper proposes a robot presentation system that allows presenters to reflect on their presentation by authoring the presentation scenario used by the robot. The features of the proposed system are that presenters can easily and quickly author and modify their presentation, and that they can become aware of points to be modified. In addition, this paper reports a case study using the system with six participants, whose purpose was to compare the proposed system with the conventional system in terms of the difficulty of authoring the scenario. The results suggest that our system allows presenters to easily and quickly modify their presentation.

Aquaticus: Publicly Available Datasets from a Marine Human-Robot Teaming Testbed
M. Novitzky, P. Robinette, M. Benjamin, Caileigh Fitzgerald, Henrik Schmidt
Pub Date: 2019-03-01 | DOI: 10.1109/HRI.2019.8673176 | Pages: 392-400
In this paper, we introduce publicly available human-robot teaming datasets captured during the summer 2018 season using our Aquaticus testbed. The Aquaticus testbed is designed to examine interactions between human-human and human-robot teammates while they are situated in the marine environment in their own vehicles. In particular, we assess these interactions while humans and fully autonomous robots play a competitive game of capture the flag on the water. Our testbed is unique in that the humans are situated in the field with their fully autonomous robot teammates in vehicles that have similar dynamics. Holding the competition on the water reduces the safety concerns and cost of performing similar experiments in the air or on the ground, and it creates a complex, dynamic, and partially observable view of the world for participants in their motorized kayaks. The main modality for teammate interaction is audio, to better simulate the experience of real-world tactical situations, i.e., fighter pilots talking to each other over radios. We have released our complete datasets publicly so that researchers throughout the HRI community who do not have access to such a testbed, or who have expertise other than our own, can leverage our datasets to perform their own analyses and contribute to the HRI community.

Engaging Persons with Neuro-Developmental Disorder with a Plush Social Robot
D. Fisicaro, Francesco Pozzi, M. Gelsomini, F. Garzotto
Pub Date: 2019-03-01 | DOI: 10.1109/HRI.2019.8673107 | Pages: 610-611
The use of social robots in interventions for persons with Neuro-Developmental Disorder (NDD) has been explored in several studies. This paper describes ELE, a plush social robot with an elephant appearance that acts as a conversational companion and has been designed to promote the engagement of persons with NDD during interventions. We also present the initial evaluation of ELE and preliminary results in terms of visual attention improvement in a storytelling context.

Recognizing F-Formations in the Open World
Hooman Hedayati, D. Szafir, Sean Andrist
Pub Date: 2019-03-01 | DOI: 10.1109/HRI.2019.8673233 | Pages: 558-559
A key skill for social robots in the wild will be to understand the structure and dynamics of conversational groups in order to fluidly participate in them. Social scientists have long studied the rich complexity underlying such focused encounters, or F-formations. However, current state-of-the-art algorithms that robots might use to recognize F-formations are highly heuristic and quite brittle. In this report, we explore a data-driven approach to detect F-formations from sets of tracked human positions and orientations, trained and evaluated on two openly available human-only datasets and a small human-robot dataset that we collected. We also discuss the potential for further computational characterization of F-formations beyond simply detecting their occurrence.

Personal Partner Agents for Cooperative Intelligence
Kotaro Funakoshi, Hideaki Shimazaki, T. Kumada, H. Tsujino
Pub Date: 2019-03-01 | DOI: 10.1109/HRI.2019.8673179 | Pages: 570-571
We advocate cooperative intelligence (CI), which achieves its goals by cooperating with other agents, particularly human beings, using limited resources in complex and dynamic environments. CI is important because it delivers better performance across a broad range of tasks; furthermore, cooperativeness is key to human intelligence, and the processes of cooperation can help people gain several life values. This paper discusses the elements of CI and our research approach to it. We identify four aspects of CI: adaptive intelligence, collective intelligence, coordinative intelligence, and collaborative intelligence. We also take an approach that focuses on the implementation of coordinative intelligence in the form of personal partner agents (PPAs) and consider the design of our robotic research platform to physically realize PPAs.

Affective Robot Movement Generation Using CycleGANs
Michael Suguitan, Mason Bretan, Guy Hoffman
Pub Date: 2019-03-01 | DOI: 10.1109/HRI.2019.8673281 | Pages: 534-535
Social robots use gestures to express internal and affective states, but their interactive capabilities are hindered by reliance on preprogrammed or hand-animated behaviors, which can be repetitive and predictable. We propose a method for automatically synthesizing affective robot movements given manually generated examples. Our approach is based on techniques adapted from deep learning, specifically generative adversarial networks (GANs).

SAIL: Simulation-Informed Active In-the-Wild Learning
Elaine Schaertl Short, Adam Allevato, A. Thomaz
Pub Date: 2019-03-01 | DOI: 10.1109/HRI.2019.8673019 | Pages: 468-477
Robots in real-world environments may need to adapt context-specific behaviors learned in one environment to new environments with new constraints. In many cases, co-present humans can provide the robot with information, but it may not be safe for them to provide hands-on demonstrations, and there may not be a dedicated supervisor to provide constant feedback. In this work we present the SAIL (Simulation-Informed Active In-the-Wild Learning) algorithm for learning new approaches to manipulation skills starting from a single demonstration. In this three-step algorithm, the robot simulates task execution to choose new potential approaches; collects unsupervised data on task execution in the target environment; and finally chooses informative actions to show to co-present humans to obtain labels. Our approach enables a robot to learn new ways of executing two different tasks by using success/failure labels obtained from naïve users in a public space, performing 496 manipulation actions and collecting 163 labels from users in the wild over six 45-minute to 1-hour deployments. We show that classifiers based on low-level sensor data can be used to accurately distinguish between successful and unsuccessful motions in a multi-step task (p < 0.005), even when trained in the wild. We also show that using the sensor data to choose which actions to sample is more effective than choosing the least-sampled action.

Reducing Overtrust in Failing Robotic Systems
Anders B. H. Christensen, C. R. Dam, Corentin Rasle, Jacob E. Bauer, Ramlo A. Mohamed, L. Jensen
Pub Date: 2019-03-01 | DOI: 10.1109/HRI.2019.8673235 | Pages: 542-543
In general, people tend to place too much trust in robotic systems, even in emergency situations. Our study attempts to discover ways of reducing this overtrust by adding vocal warnings of error from a robot that guides blindfolded participants through a maze. The results indicate that the tested vocal warnings have no effect in reducing overtrust, but we encourage further testing of similar warnings to fully explore their potential effects.
