"Young and Elderly Users' Emotion Recognition of Dynamically Formed Expressions Made by a Non-Human Virtual Agent"
Takashi Numata, Yasuhiro Asa, T. Kitagaki, T. Hashimoto, K. Karasawa
DOI: https://doi.org/10.1145/3349537.3352783

The development of non-human robots and virtual agents is focused on encouraging affective communication between humans and agents while supporting the daily life of individuals. For an agent to achieve affective communication with a wide range of users, understanding how users recognize emotion in an agent's expressions is important. Previous studies have shown a clear effect of age on emotion recognition. However, the effect of age on emotion recognition in the context of non-human agents is not yet well understood. In this study, we investigated the emotion recognition of young and elderly users confronted with a non-human agent's expressions. A questionnaire with a seven-emotion alternative forced choice task was used to analyze emotion recognition in 62 young and 39 elderly users. Dynamically formed virtual agent expressions were used to analyze the effect of non-human expressions on emotion recognition. The elderly users showed higher variability of emotion recognition than the young users. Studying the individual characteristics of emotion recognition should be prioritized, as this would allow for more affective communication between elderly users and non-human agents.
"A Speech Promotion System by Using Embodied Entrainment Objects of Spoken Words and a Listener Character for Joint Attention"
Masakatsu Kubota, Tomio Watanabe, Yutaka Ishii
DOI: https://doi.org/10.1145/3349537.3352803

We have developed a speech promotion system that presents images retrieved on the basis of spoken words on a display, together with the auto-generated nodding reactions of a listener character. The online image search function can respond to a wide variety of user utterances. In this research, to shorten the time needed to present images, we provide not only the online image search function but also a pre-stored image call function. Moreover, the listener character moves to look at each image at the moment it appears; this motion prompts the user to observe the image through joint attention guidance. Thus, the system promotes the user's speech through the listener character's auto-generated nodding reactions and images related to the spoken words.
"Switch off a Robot, Switch off a Mind?"
Nicolas Spatola
DOI: https://doi.org/10.1145/3349537.3351897

A future world populated by robots is a projection that has long inspired, and still inspires, science-fiction stories. How to regard these artificial agents is a profound question for human society, as it highlights an eternal issue: "What defines a human?" More precisely, "What defines the human mind?" In these two studies, we investigate how individuals' representation of mind may shape attitudes towards robots and pro/antisocial behaviours towards them. We also evaluate the role of human-robot interaction as a moderator of this process. Our results demonstrate that conceptualizing the mind as a function leads to more positive attitudes towards robots and a lower likelihood of acting negatively toward them. The opposite is true for participants who consider the mind a human essence. Finally, we show that interaction with a robot seems to level out these differences. These results are discussed in terms of the measured attitudes towards robots and their implications for future human-robot interaction (HRI).
"Tentative Formalization of Human-Agent Interaction for Model-Based Interaction Design"
Takafumi Sakamoto, Yugo Takeuchi
DOI: https://doi.org/10.1145/3349537.3352806

To construct a general theory of human-agent interaction (HAI), it is essential to discuss how to generalize from models within the research area. In this study, we propose a formulation of HAI for the establishment of axiom systems. The environment for agents, behavior as a change in the environment over time, and the internal state as an evaluation of the environment are identified as basic variables, and the relations among these variables are expressed by functions. Ideally, HAIs should be explained within one axiomatic system so that the similarities and differences among human-human, human-animal, and human-robot interaction can be discussed. For this purpose, the model needs to be verified theoretically in a future study.
"The Design Method of the Virtual Teacher"
T. Matsui, S. Yamada
DOI: https://doi.org/10.1145/3349537.3351906

Using robots and virtual agents as teachers in education is one of the most important fields in HAI. Much work has been published; however, little has been reported on the relationship between the subject on which a virtual teacher (VT) gives a lesson and the appearance of the VT. For example, are robot-like agents equally effective regardless of the subject being taught? In this paper, we hypothesized that the subject and the appearance of the VT affect students' level of understanding through the interaction of the two. To verify this, we conducted an experiment with two factors: subject and VT appearance. Under all conditions, participants watched videos in which VTs gave a lesson, then took a short test on the lesson and provided a subjective evaluation of the VTs. As a result, the subject and appearance affected the short-test scores through their interaction. This result suggests a novel design method that can be used to construct a VT.
"Tangible Communication of Emotions with a Digital Companion for Managing Stress: An Exploratory Co-Design Study"
Monika Jingar, H. Lindgren
DOI: https://doi.org/10.1145/3349537.3351907

The purpose of this research is to explore how an intelligent digital companion (agent) can support a person (human) with stress-related exhaustion in managing daily activities. In this paper, we explore in particular how information about a person's emotions can be communicated to the agent non-verbally through tangible interfaces. We further explore how different individuals approach the task of designing their own tangible interfaces for communicating emotions with a digital companion, and the range of their preferences and expectations. Six participants were interviewed and created tangible prototypes during a co-creation workshop. The data was analysed using theories about human emotions and activity, and translated into a generic user model, an architecture for a multiagent system, and interface design proposals. The results increase our understanding of how different individuals would like to express their emotions with tangible interfaces, and they informed the design of the information models for representing emotions. The study illuminated the importance of personalising functionality and interface design to address the diversity among individuals, as well as the design of the adaptive behaviour of a digital companion. Future work includes further studies involving additional participants, the development of the stress management application, and user studies in which the prototypes are used in daily activities.
"Factors Influencing Empathic Behaviors for Virtual Agents: Examining the Effect of Embodiment"
Yuna Kano, J. Morita
DOI: https://doi.org/10.1145/3349537.3352777

Empathy is important for smooth interaction. In this research, we focused on the Empathy Quotient (EQ), an index that measures individuals' empathic ability, and analyzed behavior related to EQ, particularly the effect of participants' embodiment on empathic behavior. In the experiment, participants played a ball-tossing game with two agents. The experimental task was created by adapting the Cyberball game, which has been used in studies of social psychology. We conducted the experiment under two conditions: an equitable scenario, where both agents toss evenly to the participant and the other agent, and an inequitable scenario, where one agent tosses only to the participant. We then compared the number of tosses performed by each participant. As a result, we found that participants with high EQ tended to exclude the agent that did not receive tosses from the other agent, but that this exclusion was alleviated by embodiment.
"Notification Timing of Agent with Vection and Character for Semi-Automatic Wheelchair Operation"
Kouichi Enami, Kohei Okuoka, Shohei Akita, M. Imai
DOI: https://doi.org/10.1145/3349537.3351900

Automatic driving systems are being developed not only for cars but also for wheelchairs, and improving the operability and safety of electric wheelchairs is an important issue. For example, if a driving system changes the speed of the vehicle without the driver's operation, it makes the driver uneasy. We developed a system, called the MIZUSAKI system, to address this uneasiness: it notifies drivers of an upcoming change in the speed gain, which is controlled by the system, before the change occurs. The system uses four in-screen effects, which are intended to be seen in the peripheral vision and do not inhibit drivers' attention to driving. In developing this system, we considered the notification timing to be a key factor. We tested the system to find the best notification timing and found it to be 3 seconds before the speed gain changes or later.
"Agent-based Completion for Collecting Medical Note Parameters"
Lukman Heryawan, P. Khotimah, Goshiro Yamamoto, Osamu Sugiyama, S. Hiragi, K. Okamoto, T. Kuroda
DOI: https://doi.org/10.1145/3349537.3352780

In this paper, we present an agent-based completion system that cooperates with a physician to optimize the collection of parameter-text pairs from a medical note. In hospitals, the medical coding process collects certain parameter-text pairs from the medical note's narrative text: the text is first categorized into parameter-text pairs, and the collected pairs are then studied by the coders to produce correct medical codes. However, since physicians may not be aware of the coders' requirements, the medical coding process is problematic; for example, parameter-text pairs may be missing. To address this problem, we propose an agent-based completion that represents the coders' view, categorizes the text into parameter-text pairs, and recommends the parameters to be filled. This paper presents a basic design of the agent and the background technologies that support the completion system.
"Virtual Humans in Augmented Reality: A First Step towards Real-World Embedded Virtual Roleplayers"
Arno Hartholt, S. Mozgai, Edward Fast, Matt Liewer, Adam Reilly, W. Whitcup, Albert A. Rizzo
DOI: https://doi.org/10.1145/3349537.3352766

We present one of the first applications of virtual humans in Augmented Reality (AR), which gives young adults with Autism Spectrum Disorder (ASD) the opportunity to practice job interviews. It uses the Magic Leap's AR hardware sensors to provide users with immediate feedback on six different metrics, including eye gaze, blink rate, and head orientation. The system provides two characters, each with three conversational modes. Ported from an existing desktop application, the main development lessons learned were: 1) provide users with navigation instructions in the user interface, 2) avoid dark colors, as they are rendered transparently, 3) use dynamic gaze so characters maintain eye contact with the user, 4) use hardware sensors such as eye gaze to provide user feedback, and 5) use surface detection to place characters dynamically in the world.