Recently, there have been advances in the research and development of educational support robots. Previous studies reported that a problem with these robots is that, as learning progresses, learners lose interest in collaborative learning with them. This paper therefore reports a method to maintain learners' interest in collaborative learning, in which the robot and the learner take turns solving problems. Moreover, this study investigates how a robot that displays its utterance contents on the learning system's monitor affects learners' impressions of collaborative learning. The results of the experiment indicate that a robot using the proposed model leaves a good impression on learners.
{"title":"Effect of an Educational Support Robot Displaying Utterance Contents on a Learning System","authors":"Shunsuke Shibata, Felix Jimenez, K. Murakami","doi":"10.1145/3349537.3352782","DOIUrl":"https://doi.org/10.1145/3349537.3352782","url":null,"abstract":"Recently, there have been advancements in the research and development of educational support robots supporting learning. Previous studies reported that the problem with these types of robots is that as learning progresses, learners lose interest in collaborative learning with robots. Thus, this paper reports a method to maintain learners' interest in collaborative learning with the robot alternately solving problems with the learner. Moreover, this study investigates the impression effect of collaborative learning with robots displaying utterance contents on learning system monitors. The results of this experiment indicated that a robot using the proposed model leaves a good impression on learners.","PeriodicalId":188834,"journal":{"name":"Proceedings of the 7th International Conference on Human-Agent Interaction","volume":"112 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123339499","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In human-agent interaction, the emotions and gestures people express when interacting with agents convey high-level personality traits that characterize human attitudes, intentions, motivations, and behaviors. Virtual reality provides an opportunity to interact with virtual agents in a more immersive way. In this paper, we present a computational framework for analyzing human eye movements using a virtual reality system in a job-interview scenario. First, we developed a remote interview system using virtual agents and implemented it on a virtual reality headset. Second, by tracking eye movements and collecting other multimodal data, the system can better analyze human personality traits in interview communication with virtual agents and better support training of people's communication skills. In experiments, we analyzed the relationship between eye-gaze features and interview performance annotated by human experts. Experimental results showed acceptable accuracy for the single modality of eye movement in predicting eye contact and overall performance in job interviews.
{"title":"Analyzing Eye Movements in Interview Communication with Virtual Reality Agents","authors":"Fuhui Tian, S. Okada, K. Nitta","doi":"10.1145/3349537.3351889","DOIUrl":"https://doi.org/10.1145/3349537.3351889","url":null,"abstract":"In human-agent interactions, human emotions and gestures expressed when interacting with agents is a high-level personally trait that quantifies human attitudes, intentions, motivations, and behaviors. The virtual reality space provides a chance to interact with virtual agents in a more immersive way. In this paper, we present a computational framework to analyze human eye movements by using a virtual reality system in a job interview scene. First, we developed a remote interview system using virtual agents and implemented the system into a virtual reality headset. Second, by tracking eye movements and collecting other multimodal data, the system could better analyze human personality traits in interview communication with virtual agents, and it could better support training in people's communication skills. In experiments, we analyzed the relationship between eye gaze feature and interview performance annotated by human experts. Experimental results showed acceptable accuracy value for the single modality of eye movement in the prediction of eye contact and total performance in job interviews.","PeriodicalId":188834,"journal":{"name":"Proceedings of the 7th International Conference on Human-Agent Interaction","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126275806","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents our efforts towards building an architecture for digitally-mediated sign language communication. The architecture is based on a client-server model and enables near real-time recognition of sign language signs on a mobile device. The paper describes the two main components of the architecture, a recognition engine (server-side) and a mobile application (client-side), and outlines directions for future work.
{"title":"Towards Digitally-Mediated Sign Language Communication","authors":"Kalin Stefanov, M. Bono","doi":"10.1145/3349537.3352794","DOIUrl":"https://doi.org/10.1145/3349537.3352794","url":null,"abstract":"This paper presents our efforts towards building an architecture for digitally-mediated sign language communication. The architecture is based on a client-server model and enables a near real-time recognition of sign language signs on a mobile device. The paper describes the two main components of the architecture, a recognition engine (server-side) and a mobile application (client-side), and outlines directions for future work.","PeriodicalId":188834,"journal":{"name":"Proceedings of the 7th International Conference on Human-Agent Interaction","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129958955","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
To be capable of life-long learning in a real-life environment, robots have to tackle multiple challenges. Being able to relate the physical properties they observe in their environment to the interactions they can perform is one of them. This skill, known as affordance learning, is strongly related to embodiment and is mastered throughout each person's development: each individual learns affordances differently through their own interactions with their surroundings. Current methods for affordance learning usually either use fixed actions to learn affordances or focus on static setups involving a robotic arm. In this article, we propose an algorithm that uses intrinsic motivation to guide the learning of affordances for a mobile robot. The algorithm can autonomously discover, learn, and adapt interrelated affordances without pre-programmed actions. Once learned, these affordances may be used by the algorithm to plan sequences of actions to perform tasks of varying difficulty. We then present an experiment and analyse our system before comparing it with other approaches from reinforcement learning and affordance learning.
{"title":"Hierarchical Affordance Discovery using Intrinsic Motivation","authors":"A. Manoury, S. Nguyen, Cédric Buche","doi":"10.1145/3349537.3351898","DOIUrl":"https://doi.org/10.1145/3349537.3351898","url":null,"abstract":"To be capable of life-long learning in a real-life environment, robots have to tackle multiple challenges. Being able to relate physical properties they may observe in their environment to possible interactions they may have is one of them. This skill, named affordance learning, is strongly related to embodiment and is mastered through each person's development: each individual learns affordances differently through their own interactions with their surroundings. Current methods for affordance learning usually use either fixed actions to learn these affordances or focus on static setups involving a robotic arm to be operated. In this article, we propose an algorithm using intrinsic motivation to guide the learning of affordances for a mobile robot. This algorithm is capable to autonomously discover, learn and adapt interrelated affordances without pre-programmed actions. Once learned, these affordances may be used by the algorithm to plan sequences of actions in order to perform tasks of various difficulties. We then present one experiment and analyse our system before comparing it with other approaches from reinforcement learning and affordance learning.","PeriodicalId":188834,"journal":{"name":"Proceedings of the 7th International Conference on Human-Agent Interaction","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130902691","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This study aimed to realize natural interaction between humans and machines by experimentally investigating the effects of turn-taking dynamics during visual interactions between two persons. During the experiment, we provided an environment in which only two circles moved horizontally across a monitor screen. One circle was operated by a participant, while the other was operated by either another participant or a computer. The results confirmed that participants could not clearly recognize the computer's actions when the computer used turn-taking dynamics to exchange leader and follower roles. This was true even when these dynamics had no contingency with the participants' movements.
{"title":"Effects of Turn-Taking Dynamics Without Contingency: A Visual Interaction Experiment","authors":"Ryohei Irie, Takeshi Konno","doi":"10.1145/3349537.3352778","DOIUrl":"https://doi.org/10.1145/3349537.3352778","url":null,"abstract":"This study aimed to realize the natural interactions of humans and machines by experimentally investigating the effects of turn-taking dynamics during visual interactions between two persons. During the experiment, we provided an environment in which only two circles moved horizontally across a monitor screen. One circle was operated by a participant, while the other was operated by another participant or computer. Results confirmed that participants could not clearly recognize computer actions when the computer used turn-taking dynamics to exchange leaders and followers. This was true even when these dynamics had no contingency with the movements of participants.","PeriodicalId":188834,"journal":{"name":"Proceedings of the 7th International Conference on Human-Agent Interaction","volume":"79 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132685782","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Social robots often have expressive faces. However, it is not always clear how to design expressions that convey a certain emotion. We present a method for a social robot to learn the emotional meaning of its own facial expressions, based on which it can automatically generate faces for any emotion. The robot collects data from an imitation game in which humans are asked to mimic the robot's facial expression. The interacting person does not need to explicitly input the meaning of the robot's face, so the interaction remains natural. We show that humans can successfully recognise the emotions from the learned facial expressions.
{"title":"Calibrate My Smile: Robot Learning Its Facial Expressions through Interactive Play with Humans","authors":"Dino Ilic, Ivana Žužić, D. Brscic","doi":"10.1145/3349537.3351890","DOIUrl":"https://doi.org/10.1145/3349537.3351890","url":null,"abstract":"Social robots often have expressive faces. However, it is not always clear how to design expressions that show a certain emotion. We present a method for a social robot to learn the emotional meaning of its own facial expressions, based on which it can automatically generate faces for any emotion. The robot collects data from an imitation game where humans are asked to mimic the robot's facial expression. The interacting person does not need to explicitly input the meaning of the robot's face so the interaction is natural. We show that humans can successfully recognise the emotions from the learned facial expressions.","PeriodicalId":188834,"journal":{"name":"Proceedings of the 7th International Conference on Human-Agent Interaction","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133318644","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We propose a portable robot, named "Pocketable-Bones", that fits into a user's breast pocket and communicates with the user "side-by-side", coordinating with the direction in which the user is looking and the object of the user's interest. In this paper, we discuss the development of a platform for the robot and the hardware configuration needed to establish this side-by-side human-robot communication. In our presentation, we will demonstrate the side-by-side communication with the robot, and participants will be able to experience it first-hand.
{"title":"Pocketable-Bones: A Portable Robot Sharing Interests with User in the Breast Pocket","authors":"Ryosuke Mayumi, Naoki Ohshima, M. Okada","doi":"10.1145/3349537.3352768","DOIUrl":"https://doi.org/10.1145/3349537.3352768","url":null,"abstract":"We propose a portable robot, named \"Pocketable-Bones\", that fits into a user's breast pocket and communicates with the user \"side-by-side\", which involves coordinating the direction in which the user is looking and the object of interest. In this paper, we discuss the development of a platform for the robot and the hardware configuration needed to establish the human-robot \"side-by-side\" communication. In our presentation, we will demonstrate the side-by-side communication with the robot and the participants can experience it.","PeriodicalId":188834,"journal":{"name":"Proceedings of the 7th International Conference on Human-Agent Interaction","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132379174","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Trust calibration is critical to the success of human-agent interaction (HAI). However, individual differences are ubiquitous in people's trust relationships with autonomous systems. To assist its heterogeneous human teammates in calibrating their trust in it, an agent must first dynamically model them as individuals rather than communicating with them all in the same manner. It can then generate expectations of its teammates' behavior and optimize its own communication based on the current state of the trust relationship it has with them. In this work, we examine how an agent can generate accurate expectations given observations of only the teammate's trust-related behaviors (e.g., did the person follow or ignore its advice?). In addition to this limited input, we also seek a specific output: accurately predicting the human teammate's future trust behavior (e.g., will the person follow or ignore my next suggestion?). In this investigation, we construct a model capable of generating such expectations using data gathered in a human-subject study of behavior in a simulated human-robot interaction (HRI) scenario. We first analyze the ability of measures from a pre-survey on trust-related traits to accurately predict subsequent trust behaviors. However, as the interaction progresses, this effect is dwarfed by direct experience. We therefore analyze the ability of sequences of the teammate's prior behavior to accurately predict subsequent trust behaviors. Such behavioral sequences have been shown to be indicative of the subjective beliefs of other teammates, and we show here that they have predictive power as well.
{"title":"A Markovian Method for Predicting Trust Behavior in Human-Agent Interaction","authors":"D. Pynadath, Ning Wang, Sreekar Kamireddy","doi":"10.1145/3349537.3351905","DOIUrl":"https://doi.org/10.1145/3349537.3351905","url":null,"abstract":"Trust calibration is critical to the success of human-agent interaction (HAI). However, individual differences are ubiquitous in people's trust relationships with autonomous systems. To assist its heterogeneous human teammates calibrate their trust in it, an agent must first dynamically model them as individuals, rather than communicating with them all in the same manner. It can then generate expectations of its teammates' behavior and optimize its own communication based on the current state of the trust relationship it has with them. In this work, we examine how an agent can generate accurate expectations given observations of only the teammate's trust-related behaviors (e.g., did the person follow or ignore its advice?). In addition to this limited input, we also seek a specific output: accurately predicting its human teammate's future trust behavior (e.g., will the person follow or ignore my next suggestion?). In this investigation, we construct a model capable of generating such expectations using data gathered in a human-subject study of behavior in a simulated human-robot interaction (HRI) scenario. We first analyze the ability of measures from a pre-survey on trust-related traits to accurately predict subsequent trust behaviors. However, as the interaction progresses, this effect is dwarfed by the direct experience. We therefore analyze the ability of sequences of prior behavior by the teammate to accurately predict subsequent trust behaviors. Such behavioral sequences have shown to be indicative of the subjective beliefs of other teammates, and we show here that they have a predictive power as well.","PeriodicalId":188834,"journal":{"name":"Proceedings of the 7th International Conference on Human-Agent Interaction","volume":"54 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130097004","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A critical aspect of any recommendation process is explaining the reasoning behind each recommendation. These explanations can not only improve users' experiences but also change their perception of recommendation quality. This work describes the human-centered design of our conversational movie recommendation agent, which explains its decisions as a human would. After exploring and analyzing a corpus of dyadic interactions, we developed a computational model of explanations. We then incorporated this model into the architecture of a conversational agent and evaluated the resulting system in a user experiment. Our results show that social explanations can improve the perceived quality of both the system and the interaction, regardless of the intrinsic quality of the recommendations.
{"title":"A Model of Social Explanations for a Conversational Movie Recommendation System","authors":"Florian Pecune, Shruti Murali, Vivian Tsai, Yoichi Matsuyama, Justine Cassell","doi":"10.1145/3349537.3351899","DOIUrl":"https://doi.org/10.1145/3349537.3351899","url":null,"abstract":"A critical aspect of any recommendation process is explaining the reasoning behind each recommendation. These explanations can not only improve users' experiences, but also change their perception of the recommendation quality. This work describes our human-centered design for our conversational movie recommendation agent, which explains its decisions as humans would. After exploring and analyzing a corpus of dyadic interactions, we developed a computational model of explanations. We then incorporated this model in the architecture of a conversational agent and evaluated the resulting system via a user experiment. Our results show that social explanations can improve the perceived quality of both the system and the interaction, regardless of the intrinsic quality of the recommendations.","PeriodicalId":188834,"journal":{"name":"Proceedings of the 7th International Conference on Human-Agent Interaction","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130216113","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Designers of human-agent systems often assume that users interact with machines as if they were interacting with another person. As a consequence, fidelity to human behavior is often viewed as the gold standard for judging agent design, and theories of human social psychology are often accepted without question as a framework for informing human-agent interaction. This assumption was given strength by the pioneering work of Cliff Nass, which showed that many of the effects studied within social psychology seem to apply to human-machine interaction. In this talk, I will illustrate that these social effects are much weaker than widely supposed, and that the differences in how people treat machines are arguably more interesting than the similarities. These differences can lead to novel insights into human social cognition and unique technological solutions to intractable social problems. I will discuss this in the context of our research on education and mental health. Thus, rather than copying human behavior, I will argue that HAI researchers should aim to transcend conventional forms of social interaction and work towards novel theoretical frameworks that address the novel psychology of human-agent interaction.
{"title":"The Social Psychology of Human-agent Interaction","authors":"J. Gratch","doi":"10.1145/3349537.3351909","DOIUrl":"https://doi.org/10.1145/3349537.3351909","url":null,"abstract":"Designers of human-agent systems often assume that users interact with machines as if they are interacting with another person. As a consequences, fidelity to human behavior is often viewed as the gold standard for judging agent design, and theories of human social psychology are often accepted without question as a framework for informing human-agent interaction. This assumption was given strength by the pioneering work of Cliff Nass showing that many of the effects studied within social psychology seem to apply to human-machine interaction. In this talk, I will illustrate that these social effects are much weaker than widely supposed, and that the differences in how people treat machines are arguably more interesting than the similarities. These differences can lead to novel insights into human social cognition and unique technological solutions to intractable social problems. I will discuss this in the context of our research on education and mental health. Thus, rather copying human behavior, I will argue that HAI researchers should aim to transcend conventional forms of social interaction, and work towards novel theoretical frameworks that address the novel psychology of human-agent interaction.","PeriodicalId":188834,"journal":{"name":"Proceedings of the 7th International Conference on Human-Agent Interaction","volume":"153 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121398772","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}