An Antisocial Social Robot: Using Negative Affect to Reinforce Cooperation in Human-Robot Interactions
Pub Date: 2019-03-11 | DOI: 10.1109/HRI.2019.8673264 | 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 763-764
Hideki Garcia Goo, Jaime Alvarez Perez, Virginia Contreras
Inspired by prior work with robots that physically display positive emotion (e.g., [1]), we were interested in how people might interact with a robot capable of communicating cues of negative affect such as anger. Based in particular on [2], we have prototyped an antisocial, zoomorphic robot equipped with a spike mechanism to nonverbally communicate anger. The robot's embodiment is a simple dome-like morphology with a ring of inflatable spikes wrapped around its circumference. Ultrasonic sensors trigger the robot's antisocial cuing (e.g., “spiking” when a person comes too close). To evaluate people's perceptions of the robot and the impact of the spike mechanism on their behavior, we plan to deploy the robot in social settings where it would be inappropriate for a person to approach (e.g., in front of a door with a “do not disturb” sign). We expect that exploring robot antisociality, in addition to prosociality, will help inform the design of more socially complex human-robot interactions.
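As a loose illustration of how such proximity-triggered cuing might be wired together in software (the distance threshold, sensor stub, and actuator interface below are assumptions for this sketch, not the authors' implementation):

```python
# Illustrative sketch: an ultrasonic distance reading triggers an inflatable-spike
# actuator when a person comes closer than a "personal space" threshold.
# read_ultrasonic_cm() and SpikeActuator are hypothetical stand-ins for real hardware I/O.
import time

SPIKE_DISTANCE_CM = 50.0   # assumed personal-space threshold


class SpikeActuator:
    """Hypothetical wrapper around the pump/valve driving the inflatable spikes."""

    def __init__(self):
        self.inflated = False

    def inflate(self):
        if not self.inflated:
            print("Inflating spikes (anger cue)")
            self.inflated = True

    def deflate(self):
        if self.inflated:
            print("Deflating spikes (calm)")
            self.inflated = False


def read_ultrasonic_cm() -> float:
    """Stub for an HC-SR04-style distance reading; replace with real sensor I/O."""
    raise NotImplementedError


def run(actuator: SpikeActuator):
    while True:
        distance = read_ultrasonic_cm()
        if distance < SPIKE_DISTANCE_CM:
            actuator.inflate()   # person too close: display negative affect
        else:
            actuator.deflate()   # person at a respectful distance
        time.sleep(0.1)
```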
{"title":"An Antisocial Social Robot: Using Negative Affect to Reinforce Cooperation in Human-Robot Interactions","authors":"Hideki Garcia Goo, Jaime Alvarez Perez, Virginia Contreras","doi":"10.1109/HRI.2019.8673264","DOIUrl":"https://doi.org/10.1109/HRI.2019.8673264","url":null,"abstract":"Inspired by prior work with robots that physically display positive emotion (e.g., [1]), we were interested to see how people might interact with a robot capable of communicating cues of negative affect such as anger. Based in particular on [2], we have prototyped an anti-social, zoomorphic robot equipped with a spike mechanism to nonverbally communicate anger. The robot's embodiment involves a simple dome-like morphology with a ring of inflatable spikes wrapped around its circumference. Ultrasonic sensors engage the robot's antisocial cuing (e.g., “spiking” when a person comes too close). To evaluate people's perceptions of the robot and the impact of the spike mechanism on their behavior, we plan to deploy the robot in social settings where it would be inappropriate for a person to approach (e.g., in front of a door with a “do not disturb” sign). We expect that exploration of robot antisociality, in addition to prosociality, will help inform the design of more socially complex human-robot interactions.","PeriodicalId":6600,"journal":{"name":"2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI)","volume":"29 1","pages":"763-764"},"PeriodicalIF":0.0,"publicationDate":"2019-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89345302","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Infrasound for HRI: A Robot Using Low-Frequency Vibrations to Impact How People Perceive its Actions
Pub Date: 2019-03-11 | DOI: 10.1109/HRI.2019.8673172 | 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 11-18
Raquel Thiessen, Daniel J. Rea, Diljot S. Garcha, Cheng Cheng, J. Young
We investigate robots using infrasound, low-frequency vibrational energy at or near the human hearing threshold, as an interaction tool for working with people. Research in psychology suggests that the presence of infrasound can impact a person's emotional state and mood, even when the person is not acutely aware of the infrasound. Although often not noticed, infrasound is commonly present in many situations including factories, airports, or near motor vehicles. Further, a robot itself can produce infrasound. Thus, we examine if infrasound may impact how people interpret a robot's social communication: if the presence of infrasound makes a robot seem more or less happy, energetic, etc., as a result of impacting a person's mood. We present the results from a series of experiments that investigate how people rate a social robot's emotionally-charged gestures, and how varied levels and sources of infrasound impact these ratings. Our results show that infrasound does have a psychological effect on the person's perception of the robot's behaviors, supporting this as a technique that a robot can use as part of its interaction design toolkit. We further provide a comparison of infrasound generation methods.
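The paper compares infrasound generation methods; purely as a hedged illustration of one software-side approach, a near-threshold low-frequency tone can be synthesized for playback on hardware able to reproduce it (the 18 Hz frequency, amplitude, and duration below are assumed values, not the paper's setup):

```python
# Illustrative sketch: synthesize an 18 Hz infrasound-range tone as a WAV file.
# Frequency, duration, and amplitude are assumed values; actual playback requires
# hardware (e.g., a large subwoofer or rotary woofer) that can reproduce such frequencies.
import math
import struct
import wave

SAMPLE_RATE = 44100
FREQ_HZ = 18.0        # at/near the ~20 Hz lower limit of human hearing
DURATION_S = 10.0
AMPLITUDE = 0.8       # fraction of full scale


def write_infrasound_wav(path: str) -> None:
    n_samples = int(SAMPLE_RATE * DURATION_S)
    with wave.open(path, "w") as wav:
        wav.setnchannels(1)
        wav.setsampwidth(2)          # 16-bit samples
        wav.setframerate(SAMPLE_RATE)
        frames = bytearray()
        for i in range(n_samples):
            sample = AMPLITUDE * math.sin(2 * math.pi * FREQ_HZ * i / SAMPLE_RATE)
            frames += struct.pack("<h", int(sample * 32767))
        wav.writeframes(bytes(frames))


write_infrasound_wav("infrasound_18hz.wav")
```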
{"title":"Infrasound for HRI: A Robot Using Low-Frequency Vibrations to Impact How People Perceive its Actions","authors":"Raquel Thiessen, Daniel J. Rea, Diljot S. Garcha, Cheng Cheng, J. Young","doi":"10.1109/HRI.2019.8673172","DOIUrl":"https://doi.org/10.1109/HRI.2019.8673172","url":null,"abstract":"We investigate robots using infrasound, low-frequency vibrational energy at or near the human hearing threshold, as an interaction tool for working with people. Research in psychology suggests that the presence of infrasound can impact a person's emotional state and mood, even when the person is not acutely aware of the infrasound. Although often not noticed, infrasound is commonly present in many situations including factories, airports, or near motor vehicles. Further, a robot itself can produce infrasound. Thus, we examine if infrasound may impact how people interpret a robot's social communication: if the presence of infrasound makes a robot seem more or less happy, energetic, etc., as a result of impacting a person's mood. We present the results from a series of experiments that investigate how people rate a social robot's emotionally-charged gestures, and how varied levels and sources of infrasound impact these ratings. Our results show that infrasound does have a psychological effect on the person's perception of the robot's behaviors, supporting this as a technique that a robot can use as part of its interaction design toolkit. We further provide a comparison of infrasound generation methods.","PeriodicalId":6600,"journal":{"name":"2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI)","volume":"29 1","pages":"11-18"},"PeriodicalIF":0.0,"publicationDate":"2019-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81959010","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Human-Robot-Collaboration (HRC): Social Robots as Teaching Assistants for Training Activities in Small Groups
Pub Date: 2019-03-11 | DOI: 10.1109/HRI.2019.8673103 | 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 522-523
Rinat B. Rosenberg-Kima, Yaacov Koren, Maya Yachini, Goren Gordon
Can we find real value for educational social robots in the very near future? We argue that the answer is yes. Specifically, in a classroom we observed, we identified a common gap: the instructor divided the class into small groups to work on a learning activity and could not address all of their questions simultaneously. The purpose of this study was to examine whether social robots can assist in this scenario. In particular, we were interested in whether a physical robot serves this purpose better than other technologies such as tablets. Benefits and drawbacks of the robot facilitator are discussed.
{"title":"Human-Robot-Collaboration (HRC): Social Robots as Teaching Assistants for Training Activities in Small Groups","authors":"Rinat B. Rosenberg-Kima, Yaacov Koren, Maya Yachini, Goren Gordon","doi":"10.1109/HRI.2019.8673103","DOIUrl":"https://doi.org/10.1109/HRI.2019.8673103","url":null,"abstract":"Can we find real value for educational social robots in the very near future? We argue that the answer is yes. Specifically, in a classroom we observed, we identified a common gap: the instructor divided the class into small groups to work on a learning activity and could not address all their questions simultaneously. The purpose of this study was to examine whether social robots can assist in this scenario. In particular, we were interested to find whether a physical robot serves this purpose better than other technologies such as tablets. Benefits and drawbacks of the robot facilitator are discussed.","PeriodicalId":6600,"journal":{"name":"2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI)","volume":"2 1","pages":"522-523"},"PeriodicalIF":0.0,"publicationDate":"2019-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75660749","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Explanation-Based Reward Coaching to Improve Human Performance via Reinforcement Learning
Pub Date: 2019-03-11 | DOI: 10.1109/HRI.2019.8673104 | 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 249-257
Aaquib Tabrez, Shivendra Agrawal, Bradley Hayes
For robots to effectively collaborate with humans, it is critical to establish a shared mental model amongst teammates. In the case of incongruous models, catastrophic failures may occur unless mitigating steps are taken. To identify and remedy these potential issues, we propose a novel mechanism for enabling an autonomous system to detect model disparity between itself and a human collaborator, infer the source of the disagreement within the model, evaluate potential consequences of this error, and finally, provide human-interpretable feedback to encourage model correction. This process effectively enables a robot to provide a human with a policy update based on perceived model disparity, reducing the likelihood of costly or dangerous failures during joint task execution. This paper makes two contributions at the intersection of explainable AI (xAI) and human-robot collaboration: 1) The Reward Augmentation and Repair through Explanation (RARE) framework for estimating task understanding and 2) A human subjects study illustrating the effectiveness of reward augmentation-based policy repair in a complex collaborative task.
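As a toy sketch of the general idea behind detecting and repairing reward-model disparity, not the RARE algorithm itself, one can imagine comparing the robot's reward model against a reward model estimated for the human and phrasing the largest disagreement as feedback (all task features, reward values, and the message template below are invented for illustration):

```python
# Toy illustration (not the authors' RARE implementation): compare the robot's
# reward model with a reward model estimated for the human collaborator, find the
# largest disagreement, and phrase it as corrective, human-interpretable feedback.
robot_reward = {"deliver_part": 10.0, "cross_wet_floor": -20.0, "idle": 0.0}
# Hypothetical estimate of the human's understanding, e.g., inferred from their observed actions.
estimated_human_reward = {"deliver_part": 10.0, "cross_wet_floor": 0.0, "idle": 0.0}


def largest_disparity(robot_r: dict, human_r: dict) -> str:
    """Return the task feature on which the two reward models disagree the most."""
    return max(robot_r, key=lambda k: abs(robot_r[k] - human_r.get(k, 0.0)))


def feedback_message(feature: str, robot_r: dict, human_r: dict) -> str:
    """Turn the disparity into feedback intended to encourage model correction."""
    return (f"Our task models disagree about '{feature}': my reward for it is {robot_r[feature]}, "
            f"but your behavior suggests you expect about {human_r.get(feature, 0.0)}. "
            f"Please account for this before we continue.")


feature = largest_disparity(robot_reward, estimated_human_reward)
print(feedback_message(feature, robot_reward, estimated_human_reward))
```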
{"title":"Explanation-Based Reward Coaching to Improve Human Performance via Reinforcement Learning","authors":"Aaquib Tabrez, Shivendra Agrawal, Bradley Hayes","doi":"10.1109/HRI.2019.8673104","DOIUrl":"https://doi.org/10.1109/HRI.2019.8673104","url":null,"abstract":"For robots to effectively collaborate with humans, it is critical to establish a shared mental model amongst teammates. In the case of incongruous models, catastrophic failures may occur unless mitigating steps are taken. To identify and remedy these potential issues, we propose a novel mechanism for enabling an autonomous system to detect model disparity between itself and a human collaborator, infer the source of the disagreement within the model, evaluate potential consequences of this error, and finally, provide human-interpretable feedback to encourage model correction. This process effectively enables a robot to provide a human with a policy update based on perceived model disparity, reducing the likelihood of costly or dangerous failures during joint task execution. This paper makes two contributions at the intersection of explainable AI (xAI) and human-robot collaboration: 1) The Reward Augmentation and Repair through Explanation (RARE) framework for estimating task understanding and 2) A human subjects study illustrating the effectiveness of reward augmentation-based policy repair in a complex collaborative task.","PeriodicalId":6600,"journal":{"name":"2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI)","volume":"12 1","pages":"249-257"},"PeriodicalIF":0.0,"publicationDate":"2019-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81648453","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Human-Centered, Ergonomic Wearable Device with Computer Vision Augmented Intelligence for VR Multimodal Human-Smart Home Object Interaction
Pub Date: 2019-03-11 | DOI: 10.1109/HRI.2019.8673156 | 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 767-768
Ker-Jiun Wang, C. Zheng, Zhihong Mao
In the future, human-robot interaction should be enabled by compact, human-centered, ergonomic wearable devices that merge human and machine seamlessly by constantly identifying each other's intentions. In this paper, we showcase an ergonomic, lightweight wearable device that identifies a user's eye/facial gestures from physiological signal measurements. Since human intentions are usually coupled with eye movements and facial expressions, interactions designed around these gestures let people interact naturally with robots or smart home objects. Combined with computer vision object recognition algorithms, this allows people to use very simple and straightforward communication strategies to operate a telepresence robot and control smart home objects remotely, entirely hands-free. A user can wear a VR head-mounted display, see through the robot's eyes (a remote camera attached to the robot), and interact with smart home devices intuitively through simple facial gestures or blinks of the eyes. This is tremendously beneficial as an assistive tool for people with motor impairments. People without disabilities can also keep their hands free for other tasks while operating smart home devices, as part of a multimodal control strategy.
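A minimal sketch of how classified gestures plus a recognized object might be routed to smart-home commands (the gesture labels, device names, and publish interface are hypothetical, not the authors' system):

```python
# Illustrative sketch (not the authors' system): route a detected eye/facial gesture,
# together with the object currently recognized in the robot's camera view,
# to a smart-home command. Gesture labels, device names, and publish() are hypothetical.
from typing import Optional, Tuple

GESTURE_ACTIONS = {
    ("double_blink", "lamp"): ("lamp", "toggle_power"),
    ("double_blink", "tv"):   ("tv", "toggle_power"),
    ("raise_brows", "tv"):    ("tv", "volume_up"),
    ("jaw_clench", None):     ("telepresence_base", "stop"),
}


def publish(device: str, command: str) -> None:
    # Stand-in for the real home-automation/robot interface (e.g., an MQTT publish).
    print(f"-> {device}: {command}")


def dispatch(gesture: str, recognized_object: Optional[str]) -> Optional[Tuple[str, str]]:
    """Look up a (gesture, object) binding, falling back to object-independent bindings."""
    action = GESTURE_ACTIONS.get((gesture, recognized_object)) or GESTURE_ACTIONS.get((gesture, None))
    if action is not None:
        publish(*action)
    return action


dispatch("double_blink", "lamp")   # example: the user looks at the lamp and double-blinks
```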
{"title":"Human-Centered, Ergonomic Wearable Device with Computer Vision Augmented Intelligence for VR Multimodal Human-Smart Home Object Interaction","authors":"Ker-Jiun Wang, C. Zheng, Zhihong Mao","doi":"10.1109/HRI.2019.8673156","DOIUrl":"https://doi.org/10.1109/HRI.2019.8673156","url":null,"abstract":"In the future, Human-Robot Interaction should be enabled by a compact, human-centered and ergonomic wearable device that can merge human and machine altogether seamlessly by constantly identifying each other's intentions. In this paper, we will showcase the use of an ergonomic and lightweight wearable device that can identify human's eye/facial gestures with physiological signal measurements. Since human's intentions are usually coupled with eye movements and facial expressions, through proper design of interactions using these gestures, we can let people interact with the robots or smart home objects naturally. Combined with Computer Vision object recognition algorithms, we can allow people use very simple and straightforward communication strategies to operate telepresence robot and control smart home objects remotely, totally “Hands-Free”. People can wear a VR head-mounted display and see through the robot's eyes (the remote camera attached on the robot) and interact with the smart home devices intuitively by simple facial gestures or blink of the eyes. It is tremendous beneficial for the people with motor impairment as an assistive tool. For the normal people without disabilities, they can also free their hands to do other tasks and operate the smart home devices at the same time as multimodal control strategies.","PeriodicalId":6600,"journal":{"name":"2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI)","volume":"5 1","pages":"767-768"},"PeriodicalIF":0.0,"publicationDate":"2019-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91377941","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Balanced Information Gathering and Goal-Oriented Actions in Shared Autonomy
Pub Date: 2019-03-11 | DOI: 10.1109/HRI.2019.8673192 | 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 85-94
Connor Brooks, D. Szafir
Robotic teleoperation can be a complex task due to factors such as high degree-of-freedom manipulators, operator inexperience, and limited operator situational awareness. To reduce teleoperation complexity, researchers have developed the shared autonomy control paradigm that involves joint control of a robot by a human user and an autonomous control system. We introduce the concept of active learning into shared autonomy by developing a method for systems to leverage information gathering: minimizing the system's uncertainty about user goals by moving to information-rich states to observe user input. We create a framework for balancing information gathering actions, which help the system gain information about user goals, with goal-oriented actions, which move the robot towards the goal the system has inferred from the user. We conduct an evaluation within the context of users who are multitasking that compares pure teleoperation with two forms of shared autonomy: our balanced system and a traditional goal-oriented system. Our results show significant improvements for both shared autonomy systems over pure teleoperation in terms of belief convergence about the user's goal and task completion speed and reveal trade-offs across shared autonomy strategies that may inform future investigations in this space.
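A compressed sketch of the kind of machinery involved, assuming a Boltzmann-style observation model over user input and an entropy threshold for switching between information gathering and goal-directed motion (these modeling choices and constants are assumptions for the sketch, not necessarily the paper's formulation):

```python
# Illustrative sketch: Bayesian goal inference from user input, with an
# entropy-based switch between information gathering and goal-oriented motion.
import numpy as np

goals = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])   # candidate goal positions (assumed)
belief = np.ones(len(goals)) / len(goals)                 # uniform prior over goals


def update_belief(belief, robot_pos, user_input):
    """Boltzmann-rational likelihood: inputs pointing toward a goal make it more likely."""
    likelihoods = []
    u = user_input / (np.linalg.norm(user_input) + 1e-9)
    for g in goals:
        to_goal = g - robot_pos
        to_goal = to_goal / (np.linalg.norm(to_goal) + 1e-9)
        likelihoods.append(np.exp(5.0 * float(np.dot(to_goal, u))))  # 5.0 = assumed rationality constant
    posterior = belief * np.array(likelihoods)
    return posterior / posterior.sum()


def choose_action(belief, robot_pos):
    """High uncertainty -> move to an information-rich state; otherwise head to the likeliest goal."""
    entropy = -np.sum(belief * np.log(belief + 1e-12))
    if entropy > 0.9:                           # assumed uncertainty threshold
        target = goals.mean(axis=0)             # crude stand-in for an "informative" vantage point
    else:
        target = goals[int(np.argmax(belief))]  # goal-oriented action
    step = target - robot_pos
    return step / (np.linalg.norm(step) + 1e-9)


# Example tick: the user pushes the joystick toward (1, 1) from the origin.
belief = update_belief(belief, np.array([0.0, 0.0]), np.array([0.7, 0.7]))
print(belief, choose_action(belief, np.array([0.0, 0.0])))
```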
{"title":"Balanced Information Gathering and Goal-Oriented Actions in Shared Autonomy","authors":"Connor Brooks, D. Szafir","doi":"10.1109/HRI.2019.8673192","DOIUrl":"https://doi.org/10.1109/HRI.2019.8673192","url":null,"abstract":"Robotic teleoperation can be a complex task due to factors such as high degree-of-freedom manipulators, operator inexperience, and limited operator situational awareness. To reduce teleoperation complexity, researchers have developed the shared autonomy control paradigm that involves joint control of a robot by a human user and an autonomous control system. We introduce the concept of active learning into shared autonomy by developing a method for systems to leverage information gathering: minimizing the system's uncertainty about user goals by moving to information-rich states to observe user input. We create a framework for balancing information gathering actions, which help the system gain information about user goals, with goal-oriented actions, which move the robot towards the goal the system has inferred from the user. We conduct an evaluation within the context of users who are multitasking that compares pure teleoperation with two forms of shared autonomy: our balanced system and a traditional goal-oriented system. Our results show significant improvements for both shared autonomy systems over pure teleoperation in terms of belief convergence about the user's goal and task completion speed and reveal trade-offs across shared autonomy strategies that may inform future investigations in this space.","PeriodicalId":6600,"journal":{"name":"2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI)","volume":"57 1","pages":"85-94"},"PeriodicalIF":0.0,"publicationDate":"2019-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90952533","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Design of a Human Multi-Robot Interaction Medium of Cognitive Perception
Pub Date: 2019-03-11 | DOI: 10.1109/HRI.2019.8673188 | 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 652-653
Wonse Jo, J. Park, Sangjun Lee, Ahreum Lee, B. Min
We present a new multi-robot system as a means of creating a visual communication cue that adds dynamic illustration to static figures or diagrams, enhancing the power of delivery and improving an audience's attention. The proposed idea is that when a presenter/speaker writes something such as a shape or letter on a whiteboard table, multiple mobile robots trace the shape or letter while dynamically expressing it. The dynamic movement of the robots further stimulates the audience's cognitive perception of the handwriting, positively affecting comprehension of the content. To do this, we apply image processing algorithms to extract feature points from a handwritten shape or letter, while a task allocation algorithm deploys the robots to the feature points to highlight the shape or letter. We present preliminary experimental results that verify the proposed system with various characters and letters, such as those of the English alphabet.
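A rough sketch of the two stages described above, sampling feature points from a binarized handwriting image and assigning robots to them with a standard optimal-assignment solver (the OpenCV thresholding, sampling stride, and distance cost are assumptions for this sketch, not the authors' exact pipeline):

```python
# Illustrative sketch: extract feature points from a handwriting image and assign
# mobile robots to those points via minimum-cost matching.
import cv2
import numpy as np
from scipy.optimize import linear_sum_assignment


def feature_points_from_image(path: str, n_points: int) -> np.ndarray:
    """Sample roughly n_points along the contours of the drawn shape/letter."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    pts = np.vstack([c.reshape(-1, 2) for c in contours])
    stride = max(1, len(pts) // n_points)
    return pts[::stride][:n_points].astype(float)


def assign_robots(robot_positions: np.ndarray, targets: np.ndarray) -> list:
    """Minimize total travel distance: robot rows[i] is sent to targets[cols[i]]."""
    cost = np.linalg.norm(robot_positions[:, None, :] - targets[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows, cols))


# Hypothetical usage: five robots parked along the table edge, letter image on disk.
# targets = feature_points_from_image("letter_A.png", n_points=5)
# robots = np.array([[0, 0], [10, 0], [20, 0], [30, 0], [40, 0]], dtype=float)
# print(assign_robots(robots, targets))
```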
{"title":"Design of a Human Multi-Robot Interaction Medium of Cognitive Perception","authors":"Wonse Jo, J. Park, Sangjun Lee, Ahreum Lee, B. Min","doi":"10.1109/HRI.2019.8673188","DOIUrl":"https://doi.org/10.1109/HRI.2019.8673188","url":null,"abstract":"We present a new multi-robot system as a means of creating a visual communication cue that can add dynamic illustration to static figures or diagrams to enhance the power of delivery and improve an audience's attention. The proposed idea is that when a presenter/speaker writes something such as a shape or letter on a whiteboard table, multiple mobile robots trace the shape or letter while dynamically expressing it. The dynamic movement of multi-robots will further stimulate the cognitive perception of the audience with handwriting, positively affecting the comprehension of content. To do this, we apply image processing algorithms to extract feature points from a handwritten shape or letter while a task allocation algorithm deploys multi-robots on the feature points to highlight the shape or letter. We present preliminary experiment results that verify the proposed system with various characters and letters such as the English alphabet.","PeriodicalId":6600,"journal":{"name":"2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI)","volume":"46 1","pages":"652-653"},"PeriodicalIF":0.0,"publicationDate":"2019-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86718644","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improving Human-Robot Interaction Through Explainable Reinforcement Learning
Pub Date: 2019-03-11 | DOI: 10.1109/HRI.2019.8673198 | 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 751-753
Aaquib Tabrez, Bradley Hayes
Gathering the most informative data from humans without overloading them remains an active research area in AI, and is closely coupled with the problems of determining how and when information should be communicated to others [12]. Current decision support systems (DSS) are still overly simple and static, and cannot adapt to the changing environments into which we expect to deploy modern systems [3], [4], [9], [11]. They are intrinsically limited in their ability to explain their rationale, as opposed to merely listing their future behaviors, which limits a human's understanding of the system [2], [7]. Most probabilistic assessments of a task are conveyed after the task or skill is attempted rather than before [10], [14], [16], which limits failure recovery and danger avoidance mechanisms. Existing work on predicting failures relies on sensors to accurately detect explicitly annotated and learned failure modes [13]. As such, important non-obvious pieces of information for assessing appropriate trust and/or evaluating courses of action (COA) in collaborative scenarios can go overlooked, while irrelevant information may instead be provided, increasing clutter and mental workload. Understanding how AI models arrive at specific decisions is a key principle of trust [8]. Therefore, it is critically important to develop new strategies for anticipating, communicating, and explaining the justifications and rationale for AI-driven behaviors via contextually appropriate semantics.
{"title":"Improving Human-Robot Interaction Through Explainable Reinforcement Learning","authors":"Aaquib Tabrez, Bradley Hayes","doi":"10.1109/HRI.2019.8673198","DOIUrl":"https://doi.org/10.1109/HRI.2019.8673198","url":null,"abstract":"Gathering the most informative data from humans without overloading them remains an active research area in AI, and is closely coupled with the problems of determining how and when information should be communicated to others [12]. Current decision support systems (DSS) are still overly simple and static, and cannot adapt to changing environments we expect to deploy in modern systems [3], [4], [9], [11]. They are intrinsically limited in their ability to explain rationale versus merely listing their future behaviors, limiting a human's understanding of the system [2], [7]. Most probabilistic assessments of a task are conveyed after the task/skill is attempted rather than before [10], [14], [16]. This limits failure recovery and danger avoidance mechanisms. Existing work on predicting failures relies on sensors to accurately detect explicitly annotated and learned failure modes [13]. As such, important non-obvious pieces of information for assessing appropriate trust and/or course-of-action (COA) evaluation in collaborative scenarios can go overlooked, while irrelevant information may instead be provided that increases clutter and mental workload. Understanding how AI models arrive at specific decisions is a key principle of trust [8]. Therefore, it is critically important to develop new strategies for anticipating, communicating, and explaining justifications and rationale for AI driven behaviors via contextually appropriate semantics.","PeriodicalId":6600,"journal":{"name":"2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI)","volume":"51 1","pages":"751-753"},"PeriodicalIF":0.0,"publicationDate":"2019-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86156226","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Lifespan Design of Conversational Agent with Growth and Regression Metaphor for the Natural Supervision on Robot Intelligence
Pub Date: 2019-03-01 | DOI: 10.1109/HRI.2019.8673212 | 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 646-647
Chanmi Park, Jung Yeon Lee, Hyoung Woo Baek, Hae-Sung Lee, Jeehang Lee, Jinwoo Kim
Humans' direct supervision of a robot's erroneous behavior is crucial to enhancing robot intelligence for ‘flawless’ human-robot interaction. Motivating humans to engage more actively for this purpose is, however, difficult. To alleviate this strain, this research proposes a novel approach: a growth-and-regression metaphoric interaction design inspired by the communicative, intellectual, and social competences that emerge across human developmental stages. We implemented this interaction design principle in a conversational agent combined with a set of synthetic sensors. Within this context, we aim to show that the agent successfully encourages online labeling activity in response to faulty robot behavior as a supervision process. A field study will be conducted to evaluate the efficacy of our proposal by measuring the annotation performance of real-time activity events in the wild. We expect to provide a more effective and practical means of supervising robots through a real-time data labeling process for long-term use in human-robot interaction.
{"title":"Lifespan Design of Conversational Agent with Growth and Regression Metaphor for the Natural Supervision on Robot Intelligence","authors":"Chanmi Park, Jung Yeon Lee, Hyoung Woo Baek, Hae-Sung Lee, Jeehang Lee, Jinwoo Kim","doi":"10.1109/HRI.2019.8673212","DOIUrl":"https://doi.org/10.1109/HRI.2019.8673212","url":null,"abstract":"Human's direct supervision on robot's erroneous behavior is crucial to enhance a robot intelligence for a ‘flawless’ human-robot interaction. Motivating humans to engage more actively for this purpose is however difficult. To alleviate such strain, this research proposes a novel approach, a growth and regression metaphoric interaction design inspired from human's communicative, intellectual, social competence aspect of developmental stages. We implemented the interaction design principle unto a conversational agent combined with a set of synthetic sensors. Within this context, we aim to show that the agent successfully encourages the online labeling activity in response to the faulty behavior of robots as a supervision process. The field study is going to be conducted to evaluate the efficacy of our proposal by measuring the annotation performance of real-time activity events in the wild. We expect to provide a more effective and practical means to supervise robot by real-time data labeling process for long-term usage in the human-robot interaction.","PeriodicalId":6600,"journal":{"name":"2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI)","volume":"29 1","pages":"646-647"},"PeriodicalIF":0.0,"publicationDate":"2019-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73886597","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Exploring Collaborative Interactions Between Robots and Blind People
Pub Date: 2019-03-01 | DOI: 10.1109/HRI.2019.8673312 | 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 365
Filipa Correia, Raquel Oliveira, Mayara Bonani, André Rodrigues, Tiago Guerreiro, Ana Paiva
Our goal is to disseminate an exploratory investigation that examined how physical presence and collaboration can be important factors in the development of assistive robots that can go beyond information-giving technologies. In particular, this video exhibits the setting and procedures of a user study that explored different types of collaborative interactions between robots and blind people.
{"title":"Exploring Collaborative Interactions Between Robots and Blind People","authors":"Filipa Correia, Raquel Oliveira, Mayara Bonani, André Rodrigues, Tiago Guerreiro, Ana Paiva","doi":"10.1109/HRI.2019.8673312","DOIUrl":"https://doi.org/10.1109/HRI.2019.8673312","url":null,"abstract":"Our goal is to disseminate an exploratory investigation that examined how physical presence and collaboration can be important factors in the development of assistive robots that can go beyond information-giving technologies. In particular, this video exhibits the setting and procedures of a user study that explored different types of collaborative interactions between robots and blind people.","PeriodicalId":6600,"journal":{"name":"2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI)","volume":"14 1","pages":"365-365"},"PeriodicalIF":0.0,"publicationDate":"2019-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74319463","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}