Pub Date: 2019-03-11. DOI: 10.1109/HRI.2019.8673135
Bob R. Schadenberg
A commonly used argument for using robots in interventions for autistic children is that robots can be highly predictable. Yet even though robot behaviour can be designed to be perceived as predictable, a degree of perceived unpredictability is unavoidable, and some unpredictability may even be desirable. To balance a robot's predictability for autistic children, we need a better understanding of which factors influence the perceived (un)predictability of the robot, how those factors can be accounted for in the design of the interaction, and how they shape the autistic child-robot interaction. In our work, we examine a specific type of predictability, defined as “the ability to quickly and accurately predict the robot's future actions”. Initial results show that seeing the cause of a robot's responsive actions influences both how unpredictable the robot is perceived to be and how competent it appears. In future work, we will investigate how the variability of the robot's behaviour affects its perceived predictability for both typically developing and autistic individuals.
Title: "Predictability in Human-Robot Interactions for Autistic Children". In: 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 748–750.
Pub Date: 2019-03-11. DOI: 10.1109/HRI.2019.8673240
Michaela Koller
Gaze is an intuitive and effective human-robot interaction (HRI) modality, as it communicates the robot's attention. Implementations of different gaze mechanisms often adopt plausible human gaze timings and are evaluated in isolated single-task settings. However, humanoid social robots will be deployed in complex social situations: during a conversation, a robot might need to avert its gaze from the human to demonstrate awareness of other events, and human-based gaze timings could be too restrictive for this. We will measure participants' levels of comfort, received attention, and behavioral engagement while systematically varying the gaze focus ratio in a conversation, with the robot alternately focusing on the interviewer and on objects at the back of the room. The findings will show whether robot interactions can be designed that do not adhere to predetermined human-based parameters, and will compare interaction quality measures across the varied gaze focus splits to actual human-human interaction (HHI) timing distributions.
Title: "Systematic Variation of Gaze Timings and Effects on the Human Level of Comfort and Feeling of Being Attended". In: 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 721–723.
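The study design above varies how the robot splits its gaze between the interviewer and the environment. A minimal sketch of such an alternating gaze schedule, parameterised by the focus ratio, might look like this (segment lengths, cycle counts, and target names are assumptions for illustration, not the study's actual parameters):

```python
# Hypothetical alternating gaze scheduler: focus_ratio controls the
# share of each cycle spent looking at the interviewer, the rest going
# to background objects. Values here are illustrative only.

def gaze_schedule(focus_ratio: float, cycle_s: float = 10.0, n_cycles: int = 3):
    """Yield (target, duration_s) pairs for an alternating gaze pattern."""
    on_human = cycle_s * focus_ratio      # time on the interviewer per cycle
    on_objects = cycle_s - on_human       # time on background objects
    for _ in range(n_cycles):
        yield ("interviewer", on_human)
        yield ("objects", on_objects)

# A 50/50 gaze focus split over three 10-second cycles
schedule = list(gaze_schedule(0.5))
```

Sweeping `focus_ratio` across conditions would produce the systematic variation the study describes.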
Pub Date: 2019-03-11. DOI: 10.1109/HRI.2019.8673273
Gil Weinberg
Robotic Musicianship research at the Georgia Tech Center for Music Technology (GTCMT) focuses on the construction of autonomous and wearable robots that can analyze, reason about, and generate music. The goal of our research is to facilitate meaningful and inspiring musical interactions between humans and artificially creative machines. In this talk I present the work conducted by the Robotic Musicianship Group at GTCMT over the last 15 years, highlighting the motivation, research questions, platforms, methods, and underlying guidelines for our work.
Title: "Robotic Musicianship and Musical Human Augmentation". In: 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 305–306.
Pub Date: 2019-03-11. DOI: 10.1109/HRI.2019.8673295
Frano Petric, Z. Kovačić
Since the diagnosis of autism spectrum disorder (ASD) relies heavily on behavioral observations by an experienced clinician, we investigate whether parts of this job can be performed autonomously by a humanoid robot using only its on-board sensors. To that end, we developed a robot-assisted ASD diagnostic protocol. In this work we propose a partially observable Markov decision process (POMDP) framework for such a protocol, which enables the robot to infer the state of the child from observations of the child's behavior. We extend our previous work with a protocol-level POMDP model that uses the tasks of the protocol as its actions. We interface the protocol and task models by using the belief at the end of a task to generate observations for the protocol POMDP, resulting in a hierarchical POMDP framework. We evaluate our approach in an exploratory study with fifteen children (seven typically developing and eight with ASD).
Title: "Hierarchical POMDP Framework for a Robot-Assisted ASD Diagnostic Protocol". In: 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 286–293.
Pub Date: 2019-03-11. DOI: 10.1109/HRI.2019.8673307
Hannah Mieczkowski, S. Liu, Jeffrey T. Hancock, Byron Reeves
This paper examines the relationships between perceptions of warmth and competence, emotional responses, and behavioral tendencies in the context of social robots. Participants answered questions about these three aspects of impression formation after viewing an image of one of 342 social robots in the Stanford Social Robots Database. Results suggest that people have emotional and behavioral reactions to robots similar to those they have to humans: impressions of the robots' warmth and competence predicted specific emotional responses (admiration, envy, contempt, pity), and those emotional responses predicted distinct behavioral tendencies (active facilitation, active harm, passive facilitation, passive harm). However, the predicted relationships between impressions and harmful behavioral tendencies were absent. This novel asymmetry between perceptions of and intentions towards robots is discussed in the context of the Computers Are Social Actors (CASA) framework, and opportunities for further research are outlined.
Title: "Helping Not Hurting: Applying the Stereotype Content Model and BIAS Map to Social Robotics". In: 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 222–229.
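The warmth-by-competence quadrants of the Stereotype Content Model referenced above map directly onto the four predicted emotions. A small sketch makes the mapping concrete; the 0–1 scales and the 0.5 cut point are assumptions for illustration, though the quadrant-emotion pairings follow the SCM literature:

```python
# Illustrative SCM/BIAS-map quadrant lookup: warmth x competence
# impressions predict one of four emotions. Scales and threshold are
# invented for this sketch.

def predicted_emotion(warmth: float, competence: float, cut: float = 0.5) -> str:
    """Return the SCM-predicted emotion for a warmth/competence impression."""
    if warmth >= cut and competence >= cut:
        return "admiration"   # high warmth, high competence
    if warmth >= cut:
        return "pity"         # high warmth, low competence
    if competence >= cut:
        return "envy"         # low warmth, high competence
    return "contempt"         # low warmth, low competence
```

In the BIAS map, each of these emotions in turn predicts a behavioral tendency (e.g. admiration with active facilitation), which is the chain the paper tests for robots.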
Pub Date: 2019-03-11. DOI: 10.1109/HRI.2019.8673221
David J. Porfirio, Allison Sauppé, Aws Albarghouthi, Bilge Mutlu
Robots must exercise socially appropriate behavior when interacting with humans. How can we help interaction designers embed socially appropriate behavior, and avoid socially inappropriate behavior, in human-robot interactions? We propose a multi-faceted interaction-design approach at the intersection of human-robot interaction and formal methods. At the lowest level, designers create interactions from scratch and receive feedback from formal verification; higher levels involve automated synthesis and repair of designs. In this extended abstract, we discuss past, present, and future work at each level of our design approach.
Title: "Computational Tools for Human-Robot Interaction Design". In: 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 733–735.
Pub Date: 2019-03-11. DOI: 10.1109/HRI.2019.8673309
Daniel Gallenberger, T. Bhattacharjee, Youngsun Kim, S. Srinivasa
Successful robotic assistive feeding depends on reliable bite acquisition and easy bite transfer. The latter constitutes a unique type of robot-human handover in which the human must use the mouth, placing a high burden on the robot to make the transfer easy. We believe that the ease of transfer depends not only on the transfer action but is also tightly coupled with how the food item was acquired in the first place. To determine the factors influencing good bite transfer, we designed both skewering and transfer primitives and developed a robotic feeding system that uses these manipulation primitives to feed people autonomously. First, we determined the primitives' success rates for bite acquisition through robot experiments. Next, we conducted user studies to evaluate the ease of bite transfer for different combinations of skewering and transfer primitives. Our results show that an intelligent, food-item-dependent skewering strategy improves the bite acquisition success rate, and that the choice of skewering location and fork orientation significantly affects the ease of bite transfer.
Title: "Transfer Depends on Acquisition: Analyzing Manipulation Strategies for Robotic Feeding". In: 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 267–276.
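A food-item-dependent skewering strategy of the kind the abstract describes can be sketched as a lookup over estimated per-primitive success rates. The food items, primitive names, and rates below are invented placeholders, not the paper's measured data:

```python
# Hypothetical per-food success-rate table and selector in the spirit of
# the feeding pipeline above. All entries are illustrative, not measured.

SUCCESS_RATES = {
    "banana": {"vertical_skewer": 0.9, "angled_skewer": 0.6},  # soft item
    "carrot": {"vertical_skewer": 0.5, "angled_skewer": 0.8},  # hard item
}

def best_primitive(food: str) -> str:
    """Pick the skewering primitive with the highest estimated success rate."""
    rates = SUCCESS_RATES[food]
    return max(rates, key=rates.get)
```

In a full system, the table would be learned from the robot experiments the paper describes rather than hand-specified.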
Pub Date: 2019-03-11. DOI: 10.1109/HRI.2019.8673024
Denise Y. Geiskkovitch, Raquel Thiessen, J. Young, Melanie R. Glenwright
Robots that interact with children are becoming more common in settings such as child care and hospital environments. While such robots may mistakenly provide nonsensical information or suffer mechanical malfunctions, we know little about how children perceive these robot errors and how the errors impact trust. This is particularly important when robots provide children with information or instructions, as in education or health care. Drawing on established psychology literature on how children trust entities who teach or provide them with information (informants), we designed and conducted an experiment examining how robot errors affect young children's (3–5 years old) trust in robots. Our results suggest that children draw on their understanding of people to form their perceptions of robots, and use this to determine how to interact with them. Specifically, children developed their trust model of a robot based on the robot's previous errors, much as they would for a person. However, we failed to replicate other prior findings with robots. Our results provide insight into how children as young as 3 years old might perceive robot errors and develop trust.
Title: "What? That's Not a Chair!: How Robot Informational Errors Affect Children's Trust Towards Robots". In: 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 48–56.
Pub Date: 2019-03-11. DOI: 10.1109/HRI.2019.8673218
T. Inamura, Y. Mizuchi
Robot competitions such as RoboCup@Home are among the most effective ways to evaluate the performance of human-robot interaction; however, they incur substantial costs for real-robot maintenance and for running evaluation sessions. We have proposed simulation software, based on immersive virtual reality, for evaluating human-robot interaction in daily-life environments. In this paper, we design a task named ‘human navigation’, whose evaluation requires the users' subjective impressions. Through a substantive experiment, we confirmed that the proposed task and system reduced the cost of running the competition.
Title: "Robot Competition to Evaluate Guidance Skill for General Users in VR Environment". In: 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 552–553.
Pub Date: 2019-03-11. DOI: 10.1109/HRI.2019.8673263
Lindsey Coffee-Johnson, Debbie Perouli
With the need for geriatric care workers growing faster than it can be met, the possibility of socially assistive robots filling this need has garnered increasing attention. This heightened interest in robots as social care workers, however, raises concerns about detecting possible robot misbehavior. We propose a short questionnaire, based on current elder abuse screening tools, as a method to detect intrusion or misconfiguration in caregiver robots. We focus on misbehavior that can cause psychological or financial harm to the care recipient. We discuss requirements, limitations, and future enhancements.
Title: "Detecting Anomalous Behavior of Socially Assistive Robots in Geriatric Care Facilities". In: 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 582–583.