In this paper, we explore the nexus between fictional and real robots in Human-Robot Interaction (HRI). We argue that design guidelines for HRI must not be borrowed blindly from fiction, as contradictions may emerge between the two with respect to what the human user desires. To understand human perception of robots appearing in movies, we analyse the viewing statistics and qualitative comments of a set of YouTube videos comprising fictional and real robots. Analysis of the viewing statistics showed that real robots are more popular. Furthermore, analysis of the comments showed that two real robots (Nao and Shakey) generated significantly more positive comments and significantly more attributions of usage in human society than the two fictional robots (AstroBoy and HAL9000). Based on the sample of robots considered in this research, our results reveal that, contrary to expectation, humans are more exposed to real robots and prefer them, and we conclude by reasserting the contradiction that emerges between real and fictional robots.
{"title":"Using Video Preferences to Understand the Human Perception of Real and Fictional Robots","authors":"Omar Mubin, M. Obaid, E. B. Sandoval, M. Fjeld","doi":"10.1145/2814940.2814958","DOIUrl":"https://doi.org/10.1145/2814940.2814958","url":null,"abstract":"In this paper the nexus between fictional and real robots in Human Robot Interaction (HRI) is explored. We claim that design guidelines for HRI must not be borrowed from fiction blindly, as contradictions between the two may emerge with respect to what is desired by the human user. To understand human perception of robots appearing in movies we analyse viewing statistics and qualitative comments of a set of YouTube videos comprising of fictional and real robots. Analysis of the viewing statistics showed that real robots are more popular. Furthermore, analysis of the comments showed that two real robots (Nao and Shakey) generated significantly more positive comments and significantly more attributions of usage in human society as compared to the two fictional robots (AstroBoy and HAL9000). Based on the sample of robots considered in this research, our results reveal that contrary to expectation humans are more exposed to real robots and are more preferred, and we conclude by reasserting the contradiction that emerges between real and fictional robots.","PeriodicalId":427567,"journal":{"name":"Proceedings of the 3rd International Conference on Human-Agent Interaction","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128457527","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We introduce a facial expression training system based on a bilinear shape model that helps people practice making facial expressions. The user's face in the camera preview is reconstructed as a 3D face model, which is then converted to a blend shape model representing the facial expression. In this way, the system can precisely analyze the user's facial expression. A target 3D face model appears on the screen and changes its facial expression, leading the user to change his or her own expression to match it. The system recognizes whether the user's facial expression is the same as that of the 3D face model, and by giving the user various missions to change his or her expression, it lets the user practice facial expressions. The system can be used by Bell's palsy patients who need facial rehabilitation exercises, or by people who need to practice a particular expression, such as a flight attendant's smile or facial mimicry.
{"title":"Facial Expression Training System using Bilinear Shape Model","authors":"Byung-Hwa Park, Se-Young Oh","doi":"10.1145/2814940.2814985","DOIUrl":"https://doi.org/10.1145/2814940.2814985","url":null,"abstract":"We introduce a facial expression training system using the bilinear shape model which helps people to practice making a facial expression. The user face on the camera preview screen is reconstructed into a 3D face model and the model is transformed to blend shape model which represents the facial expression. This way, the system can precisely analyze the facial expression of the user. With target 3D face model appearing on the screen, the 3D face model changes its facial expression, it leads the user to change his facial expression to become look like same. The system recognizes whether the facial expression of the user is same with the one of 3D face model. As the system gives the various missions to user to change his facial expression, user can practice the facial expression. It can be used for bell's palsy patient who needs face rehabilitation exercise or someone who need to practice unique facial expression such as stewardess smile or facial mimicry.","PeriodicalId":427567,"journal":{"name":"Proceedings of the 3rd International Conference on Human-Agent Interaction","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130007229","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents a real-time hand gesture detection and recognition method. The proposed method consists of three steps: detection, validation, and recognition. In the detection stage, several areas estimated to contain hand shapes are detected by a random forest hand detector over the whole image. The validation and recognition stages follow: to check whether each area actually contains a hand, we use Linear Discriminant Analysis (LDA). The proposed work is based on the assumption that samples with similar postures are distributed near each other in high-dimensional space, so the training data used for the random forest are also analyzed in a three-dimensional space. In this reduced space, we can determine decision conditions for both validation and classification. Once the exact hand area has been detected, subsequent searches are restricted to the nearby area, which reduces the processing time of the hand detection process.
{"title":"Real Time Hand Gesture Recognition Using Random Forest and Linear Discriminant Analysis","authors":"O. Sangjun, R. Mallipeddi, Minho Lee","doi":"10.1145/2814940.2814997","DOIUrl":"https://doi.org/10.1145/2814940.2814997","url":null,"abstract":"This paper presents a real-time hand gesture detection and recognition method. Proposed method consists of three steps - detection, validation and recognition. In the detection stage, several areas, estimated to contain hand shapes are detected by random forest hand detector over the whole image. The next steps are validation and recognition stages. In order to check whether each area contains hand or not, we used Linear Discriminant Analysis. The proposed work is based on the assumption that samples with similar posture are distributed near each other in high dimensional space. So, training data used for random forest are also analyzed in three dimensional space. In the reduced dimensional space, we can determine decision conditions for validation and classification. After detecting exact area of hand, we need to search for hand just in the nearby area. It reduces processing time for hand detection process.","PeriodicalId":427567,"journal":{"name":"Proceedings of the 3rd International Conference on Human-Agent Interaction","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130519359","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The purpose of our study is to investigate human teaching behavior and robot learning behavior when a human teaches a robot. Agents for learning support need to build a pedagogical relationship, in which a teacher agent and a student agent change their behaviors as they come to recognize each other's characteristic behaviors. To investigate how a robot that behaves as a student should respond to a human's teaching behaviors within such a pedagogical relationship, we conducted a case study using a game played on a tablet with a robot. In the case study, we analyzed how humans changed their teaching behaviors when the humanoid robot failed to understand what they taught. We observed that some subjects taught the robot carefully on each trial so that the robot could understand them. Moreover, subjects' teaching behavior changed when they received feedback from the robot about their teaching.
{"title":"Building Pedagogical Relationships Between Humans and Robots in Natural Interactions","authors":"Hirofumi Okazaki, Yusuke Kanai, Masa Ogata, Komei Hasegawa, Kentaro Ishii, M. Imai","doi":"10.1145/2814940.2814941","DOIUrl":"https://doi.org/10.1145/2814940.2814941","url":null,"abstract":"The purpose of our study is to investigate human teaching behavior and robot learning behavior when a human teaches a robot. Agents for learning support need to build a pedagogical relationship, in which a teacher agent and a student agent change their behaviors as they recognize the other's characteristic behaviors. In order to investigate how a robot that behaves as a student should respond to humans' teaching behaviors in a pedagogical relationship between human and robot, we conducted a case study using a game played on a tablet with a robot. In the case study, we analyzed how humans changed their teaching behaviors when the humanoid robot failed to understand what they taught. From the results of this case study, we observed that some subjects carefully taught the robot in each trial in order to allow the robot to understand the subjects. Moreover, we also observed that subjects' teaching behavior changed when the subject received feedback from the robot about the teaching.","PeriodicalId":427567,"journal":{"name":"Proceedings of the 3rd International Conference on Human-Agent Interaction","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130423673","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper proposes a computerized doorlock system that uses face recognition on photographic images captured by digital cameras. The doorlock is equipped with a simple Raspberry Pi board that captures an image of the person claiming to be a valid user; the image is then transferred to a server, where face recognition is carried out to decide whether the claimed user is enrolled. Unlike doorlock systems based on password numbers, the proposed system removes the need to enter passcodes, and it can be combined with many existing doorlock systems.
{"title":"Implementation of Doorlock System Using Face Recognition","authors":"Jaejoon Hwang, Yoojin Nam, Sangheon Lee, Gil-Jin Jang","doi":"10.1145/2814940.2814969","DOIUrl":"https://doi.org/10.1145/2814940.2814969","url":null,"abstract":"This paper proposes a system that implements a computerized doorlock system using face recognition on photographic images captured by digital cameras. The doorlock is equipped with a simple raspberry pi board whose functions are capturing the image of a user who claims to be a valid user, and the image is then transferred to a server, where a face recognition is carried out to decide the claimed user is enrolled or not. Unlike other doorlock systems that are based on password numbers, the proposed system deprives the need for inputting passcodes. The proposed system can be used with many doorlock systems.","PeriodicalId":427567,"journal":{"name":"Proceedings of the 3rd International Conference on Human-Agent Interaction","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127810859","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Studies have shown that a person's current body state can affect his or her thinking (embodied cognition). We tested how value judgments can be changed by this effect. The results of our experiment demonstrated that participants tended to discount a future reward less when looking up than when looking down. Moreover, we found that the β parameter, which represents the value of immediate rewards relative to delayed rewards received at another point in time, differed significantly between the two conditions, whereas the δ parameter, the discount rate in the standard exponential formula, did not show a significant difference. Using functional magnetic resonance imaging, McClure et al. (2004) showed that β is mediated by the lower-level, automatic processes of the limbic structures, while δ is mediated by the lateral prefrontal cortex, which supports higher cognitive functions. Combining these two results, we conclude that the embodied cognition observed in our experiment was mainly produced by lower-level brain processes. We believe that the knowledge that the discount factor β can be controlled by posture can be applied when designing robot behavior, for example when encouraging a diet, selling insurance, or offering customers a card loan.
{"title":"Keep Your Chin Up When You Want to Believe in Future Rewards: The Effect of Facial Direction on Discount Factors","authors":"Atsushi Hirota, Shogo Furuhashi, Takashi Uchida, Yusuke Suetsugu, Eri Takashima, Toshimasa Takai, Misa Yoshizaki, Hirofumi Hayakawa, Yukiko Nishizaki, N. Oka","doi":"10.1145/2814940.2814973","DOIUrl":"https://doi.org/10.1145/2814940.2814973","url":null,"abstract":"Studies have shown that a person's current body state can affect his thinking (embodied cognition). We tested how value judgments can be changed by this. The results of our experiment demonstrated that participants tended to discount a future reward less when looking up than when looking down. Moreover, we found that the β parameter, which represents the value of immediate rewards relative to delayed rewards received at another point in time, significantly differed between the two conditions; whereas the δ parameter, the discount rate in the standard exponential formula, did not show a significant difference. Using functional magnetic resonance imaging, McClure et al. (2004) showed that β is mediated by the lower level, automatic processes of the limbic structures; δ is mediated by the lateral prefrontal cortex, supporting higher cognitive functions. Combining the above two results, we can conclude that the embodied cognition in our experiment was mainly produced by the lower level brain processes. We believe that the knowing that discount factor β can be controlled by posture can be applied when designing robot behavior, such as encouraging a diet, trying to sell insurance, or offering customers a card loan.","PeriodicalId":427567,"journal":{"name":"Proceedings of the 3rd International Conference on Human-Agent Interaction","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129218033","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Implicit intention is an intention that is held in one's mind but not expressed externally. Implicit intentions are difficult to recognize, but they can provide significant information if suitable measures allow them to be recognized. When people shop, they also hold an implicit intention about each item: whether or not to buy it. We propose an experimental paradigm for recognizing a shopper's implicit intention and analyze the results of an experiment based on it. In the experiment, subjects were instructed to select items to buy from a set of candidates, and eye-tracking and speech data were recorded during the selection. In the analysis, measures that discriminate the presence of an implicit shopping intention were selected and compared. Fixation duration, fixation count, and the product of first fixation duration and visit count showed different tendencies between the two cases, i.e., when people intended to buy an item and when they did not. Using these measures as standards, people's implicit shopping intentions can be recognized.
{"title":"Implicit Shopping Intention Recognition with Eye Tracking Data and Response Time","authors":"Dong-Gun Lee, Kyeongho Lee, Soo-Young Lee","doi":"10.1145/2814940.2815001","DOIUrl":"https://doi.org/10.1145/2814940.2815001","url":null,"abstract":"Implicit intention is the intention that is not expressed externally but having in one's mind. Implicit intention is difficult to be recognized, but it can be significant information if it is recognized with some measures. When people buy something, they also have implicit intention in their mind, whether I buy this or not. We proposed an experimental paradigm to recognize shopper's implicit intention, and the result of experiment was analyzed in this paper. On the experiment, subjects were instructed to select items to buy from the candidates, and eye-tracking and speech data were recorded during the selection. On data analysis, measures discriminating the existence of implicit shopping intention were selected and compared. From the result, fixation duration, fixation count, multiplication of first fixation duration, and visit count showed different tendency between two cases: when people have intention to buy it and when people do not have. By using this standards, implicit shopping intention of people can be recognized.","PeriodicalId":427567,"journal":{"name":"Proceedings of the 3rd International Conference on Human-Agent Interaction","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126772100","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In a previous study, we developed the Pedagogical Agent with Gaze Interaction (PAGI), an anthropomorphic animated pedagogical agent that engages in gaze interaction with students. Using PAGI, we showed that temporal contingency from virtual agents facilitates learning (the temporal contingency effect) and proposed two hypotheses that may explain the result: 1) temporal contingency reduces the extraneous cognitive load associated with visual search; 2) temporal contingency primes a social stance in learners that enhances learning. To probe this matter more deeply, we tested two critical features of the agent: saliency and socialness. Two arrow-shaped agents, which differed in saliency, were employed. Apart from the appearance of the agents, the experimental design was identical to that of the previous study. University students learned words of a foreign language with either a temporally contingent agent or a recorded version of the agent, which played back sessions recorded from the contingent agent. The results provided evidence supporting the second hypothesis: the non-social agents did not trigger the temporal contingency effect.
{"title":"Social Appearance of Virtual Agent and Temporal Contingency Effect","authors":"Hanju Lee, Yasuhiro Kanakogi, K. Hiraki","doi":"10.1145/2814940.2814961","DOIUrl":"https://doi.org/10.1145/2814940.2814961","url":null,"abstract":"In our previous study, we developed Pedagogical Agent with Gaze Interaction (PAGI), an anthropomorphic animated pedagogical agent that engages in gaze interaction with students. Using PAGI, we revealed that temporal contingency from virtual agents facilitate learning (temporal contingency effect), and proposed two hypotheses that may explain the result; 1) temporal contingency reduces extraneous cognitive load related to visual search, 2) temporal contingency prime social stance in learners which enhances learning. To assess more deeply into this matter, we tested two critical features of the agent, saliency and socialness. Two arrow shaped agents, of which differed in saliency, were employed. Apart from the appearance of the agents, the experimental design was identical to the previous study. University students learned words of a foreign language, with temporally contingent agent or recorded version of the agent, which played pre-recorded sessions from the contingent agents. From the result we gained evidence supporting the second hypothesis. Non-social agents did not trigger temporal contingency effect.","PeriodicalId":427567,"journal":{"name":"Proceedings of the 3rd International Conference on Human-Agent Interaction","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115271055","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper investigates the effects of time delay on human social interaction, in order to understand how humans can adapt to such delays; software agents will need this understanding to establish harmonious interaction with humans. We performed a minimal social-interaction experiment, the perceptual crossing experiment, with added time delay. Our results show that the social interaction breaks down when the total time delay exceeds about one second, and that the interaction breaks down more easily when the delay is imposed on both participants than when it is imposed on only one of them.
{"title":"Time Delay Effect on Social Interaction Dynamics","authors":"H. Iizuka, S. Saitoh, D. Marocco, Masahito Yamamoto","doi":"10.1145/2814940.2814979","DOIUrl":"https://doi.org/10.1145/2814940.2814979","url":null,"abstract":"This paper investigates time-delay effects of the human social interaction to understand how human can adapt to the time delay, which will be required in software agents to establish a harmonic interaction with human. We performed the minimal experiments of social interaction called perceptual crossing experiments with time delay. Our result shows that the social interaction breaks down when the total amount of time delay is given more than about one second. However, the interaction breaks more easily when the time delay is given to both participants than to either participant.","PeriodicalId":427567,"journal":{"name":"Proceedings of the 3rd International Conference on Human-Agent Interaction","volume":"92 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122532553","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents an original scenario design created specifically for exploring gender-related issues surrounding collaborative human-robot teams on assembly lines. Our methodology is grounded squarely in the need for more research on gender in human-robot interaction. As with most research in social human-robot interaction, investigating gender issues relies heavily on an evaluation methodology and scenario that maximize ecological validity, so that laboratory results can generalize to real-world social situations. In this paper, we discuss the study elements required for ecological validity in our context, present an original study design that meets these criteria, and report initial pilot results that reflect on our approach and study design.
{"title":"Women and Men Collaborating with Robots on Assembly Lines: Designing a Novel Evaluation Scenario for Collocated Human-Robot Teamwork","authors":"S. Seo, Jihyang Gu, Seongmi Jeong, Keelin Griffin, J. Young, Andrea Bunt, S. Prentice","doi":"10.1145/2814940.2814948","DOIUrl":"https://doi.org/10.1145/2814940.2814948","url":null,"abstract":"This paper presents an original scenario design specifically created for exploring gender-related issues surrounding collaborative human-robot teams on assembly lines. Our methodology is grounded squarely in the need for increased gender work in human-robot interaction. As with most research in social human-robot interaction, investigating and exploring gender issues relies heavily on an evaluation methodology and scenario that aims to maximize ecological validity, so that the lab results can generalize to a real-world social scenario. In this paper, we present our discussion on study elements required for ecological validity in our context, present an original study design that meets these criteria, and present initial pilot results that reflect on our approach and study design.","PeriodicalId":427567,"journal":{"name":"Proceedings of the 3rd International Conference on Human-Agent Interaction","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127955526","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}