Estimating Person's Awareness of an Obstacle using HCRF for an Attendant Robot
Kenji Koide, J. Miura. doi:10.1145/2974804.2974832
This paper describes a method for estimating a person's awareness of an obstacle. We assume that the person's awareness influences their motion, and we model the relationship between awareness and motion using a hidden conditional random field (HCRF). We extract a sequence of motion features from the person's trajectory and then use the model to classify whether or not the person is aware of the obstacle. Awareness estimation experiments were conducted to validate the method and evaluate its performance. Since the method uses only the person's position and velocity, it is applicable to mobile robots.
Conversational Agents and Mental Health: Theory-Informed Assessment of Language and Affect
Adam S. Miner, A. Chow, Sarah Adler, Ilia Zaitsev, P. Tero, Alison M Darcy, A. Paepcke. doi:10.1145/2974804.2974820
This study deploys Relational Frame Theory, a framework from mental health research, as grounding for an analysis of sentiment dynamics in human-language dialogs. The work takes a step toward enabling the use of conversational agents in mental health settings. Sentiment tendencies and mirroring behaviors in 11k human-human dialogs were compared with behaviors when humans interacted with conversational agents in a similar-sized collection. The study finds that human sentiment-related interaction norms persist in human-agent dialogs, but that humans are twice as likely to respond negatively to a negative utterance from a robot as in a comparable situation with another human. Similarly, inhibition against the use of obscenity is greatly reduced. We introduce a new Affective Neural Net implementation that specializes in analyzing sentiment in real time.
Ambiguity-driven Interaction in Robot-to-Human Teaching
Kenta Yamada, J. Miura. doi:10.1145/2974804.2980514
The transfer of task knowledge is ubiquitous in our daily lives, where various types of interaction occur. Such interactive transfer of task knowledge, however, requires that the instructor and the learner be in the same place at the same time. If a robot mediates between them, this limitation can be eliminated. This paper focuses on robot-to-human teaching, in which a robot instructor interactively teaches a human learner how to achieve a task. We develop an ambiguity-driven formulation of interactive teaching based on Dempster-Shafer theory. We implemented an experimental system for blocks-world tasks as a proof of concept and present preliminary results.
A Study on Trust in Pharmacists for Better HAI Design
Jia Qi Lim, Nico Lim, Maya Zheng, S. L. See. doi:10.1145/2974804.2980507
Trust is a fundamental element of human cooperation and a critical factor in the success of human-human communication. This paper investigates trust between pharmacists and their customers and provides insight into how trust is established in human-human and human-agent interaction (HAI), in order to promote better design of robotic agents, such as a robot pharmacy advisor, in support of the healthcare industry.
See Where I am Looking at: Perceiving Gaze Cues With a NAO Robot
E. Mwangi, E. Barakova, Ruixin Zhang, M. Díaz, Andreu Català, G.W.M. Rauterberg. doi:10.1145/2974804.2980479
Gaze is an important nonverbal cue in human-human communication, for example in communicating the direction of attention. Being able to understand and provide gaze cues is therefore presumably an important aspect of a robot's interactive behavior. While considerable progress has been made in the design of social gaze cues for robots, little has been done to examine the ability of humans to read and accept help signals from a robot's gaze. In this study, we examine how people perceive gaze cues and head angles directed toward different target positions on a table when a human and a NAO robot sit facing each other, as in board-game scenarios. The results show that when the head pitch angle is larger (24±2) and the target is closer (approximately 20 cm from the robot), participants detected the positions with good accuracy. Unexpectedly, locations to the left of the robot were detected with lower accuracy. We conclude by discussing the implications of this research for the design of interaction settings between humans and robots intended for social and educational support.
Who am I Touching?: User Study of Remote Handshaking with a Telepresence Face
Kana Misawa, J. Rekimoto. doi:10.1145/2974804.2974821
In mediated communication, social presence is said to be enhanced not only by eye contact and body gestures but also by remote haptics. To realize remote haptics, we present a mask telepresence system, worn by a surrogate, that displays the face of a remote user through a livestream. This system allows someone to shake the hand of a remote user via the surrogate. We conducted experiments in which the surrogate made contact with participants, to determine with whom the participants thought they shook hands: the surrogate or the remote user. We hypothesized that the haptic sensations are affected by the relationship between the participant and the surrogate or remote user. Therefore, we prepared four conditions in which the remote user was an acquaintance or a stranger and the surrogate was an acquaintance or a stranger. The results showed that, when the surrogate and the remote user were acquaintances, participants felt as if they were shaking hands with the remote user.
Voting-Based Backchannel Timing Prediction Using Audio-Visual Information
T. Nishide, Kei Shimonishi, H. Kawashima, T. Matsuyama. doi:10.1145/2974804.2980501
While many spoken dialog systems have been developed recently, users must clearly summarize and convey what they want the system to do. In human dialog, however, a speaker often summarizes what to say incrementally, provided there is a good listener who responds to the speaker's utterances with appropriate timing. We consider generating backchannel responses that, where appropriate, overlap the user's utterances to be crucial for an artificial listener system that encourages the user's utterances, since such overlaps are the norm in human dialogs. Toward realizing such a listener system, we propose a voting-based algorithm that predicts the end of an utterance early (i.e., before the utterance ends) using audio-visual information. In our evaluation, we demonstrate the effectiveness of using audio-visual information and the applicability of the voting-based prediction algorithm with some early results.
A Leader-Follower Relation between a Human and an Agent
K. Terada, S. Yamada, Kazuyuki Takahashi. doi:10.1145/2974804.2974822
The purpose of this work is to investigate which of an agent's properties determine leader-follower relationships in cooperative tasks performed by a human and an agent (a computer). Possible factors include intelligence, obstinance, and appearance. In this paper, we focus on intelligence and obstinance and conduct a psychological experiment using a mark-matching game with a declaration phase, which lets us observe who becomes the leader in a cooperative task. The experimental results showed that humans tend to follow an agent with low intelligence and high obstinance rather than an agent with high intelligence and low obstinance, and we found that obstinance is more important than intelligence for becoming the leader in human-computer interaction.
Development of a Simulated Environment for Recruitment Examination and Training of High School Teachers
Masato Fukuda, Hung-Hsuan Huang, Tetsuya Kanno, Naoki Ohta, K. Kuwabara. doi:10.1145/2974804.2980508
As the school environment becomes more and more complicated, teachers' skills in teaching and classroom management must improve. In this study, we focus on the development of a Wizard-of-Oz (WOZ) platform that simulates a school environment and can be used remotely for teacher training or recruitment examinations. The system comprises two front ends: a simulated classroom for the trainee and an interface for the system operator/investigator. The virtual classroom contains a number of virtual students who are controlled remotely by the operator. The operator can observe the trainee through a dedicated interface and control the behaviors of any individual student as well as the atmosphere of the whole class. The whole-class atmosphere created by a relatively large number of students is modeled as a two-dimensional concentration-arousal space. The prototype system was evaluated in a subject experiment, and the results are reported.
A Web-based Platform for Collection of Human-Chatbot Interactions
Lue Lin, L. F. D’Haro, R. Banchs. doi:10.1145/2974804.2980500
Over recent years, the world has seen multiple uses for conversational agents. Chatbots have been implemented in e-commerce systems, such as Amazon Echo's Alexa [1]. Businesses and organizations like Facebook are also implementing bots in their applications. While a number of impressive chatbot platforms exist, creating data-driven systems remains difficult, as a large amount of data is needed for development and training. In this paper we describe an advanced platform for evaluating and annotating human-chatbot interactions, its main features and goals, as well as our future plans for it.