{"title":"Interaction Model for Incremental Information Presentation","authors":"Birte Carlmeyer, Monika Chromik, B. Wrede","doi":"10.1145/3125739.3132582","DOIUrl":"https://doi.org/10.1145/3125739.3132582","url":null,"abstract":"In this paper we present an interaction model for incremental information presentation for situated human-agent assistive systems, which support a user in daily activities such as packing a bag or fetching the ingredients for a cake or a meal. In a smart home interaction scenario, we provide a first realization as a proof of concept for our approach.","PeriodicalId":346669,"journal":{"name":"Proceedings of the 5th International Conference on Human Agent Interaction","volume":"7 3","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132737187","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Effect of an Agent's Contingent Responses on Maintaining an Intentional Stance","authors":"Y. Ohmoto, Shunya Ueno, T. Nishida","doi":"10.1145/3125739.3125770","DOIUrl":"https://doi.org/10.1145/3125739.3125770","url":null,"abstract":"To establish a social relationship between a human and an artificial agent, the agent has to induce and maintain the intentional stance in its human partner. In this study, we focus on contingency, i.e., behavior that occurs synchronously with the partner's last action, and on the icebreaker, a facilitation exercise that helps start an interaction. The aim of this study is to investigate whether an agent that implements contingent responses is capable of inducing and maintaining an intentional stance. We conducted an experiment using the contingent agent and a \"subgoal-oriented agent\" as a control condition. The behavior analysis indicates that contingent responses are capable of maintaining the intentional stance during the main task. On the other hand, only the participants who actively joined the icebreaker with the contingent agent appeared to be induced into taking an intentional stance. From these results, we conclude that contingent responses can maintain the intentional stance but not induce it.","PeriodicalId":346669,"journal":{"name":"Proceedings of the 5th International Conference on Human Agent Interaction","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125309390","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Does an Animation Character Robot Increase Sales?","authors":"Reo Matsumura, M. Shiomi, N. Hagita","doi":"10.1145/3125739.3132596","DOIUrl":"https://doi.org/10.1145/3125739.3132596","url":null,"abstract":"This paper investigates whether a network robot salesclerk system increases sales in real shopping contexts. Our robot system, which consists of an autonomous virtual agent and a semi-autonomous physical agent, enables customers to interact with the virtual agent on their smartphones and reserve special character merchandise. Moreover, their virtual agent is transferred to the physical agent at the shop to physically distribute the reserved merchandise to customers. Through such cyber-physical interaction, we provided rich shopping experiences to customers to increase sales. We collaborated with an animation company, Production I.G Inc., and employed an animation character named Tachikoma from the Ghost in the Shell: Stand Alone Complex (a.k.a. the S.A.C. series) universe to design the appearance and the characteristics of both agents. We conducted field trials to investigate whether the developed system contributed to sales of the Ghost in the Shell animation merchandise, and the results showed our system's effectiveness.","PeriodicalId":346669,"journal":{"name":"Proceedings of the 5th International Conference on Human Agent Interaction","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134010669","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Can Children Take Advantage of Nao Gaze-Based Hints During GamePlay?","authors":"E. Mwangi, M. Díaz, E. Barakova, Andreu Català, G.W.M. Rauterberg","doi":"10.1145/3125739.3132613","DOIUrl":"https://doi.org/10.1145/3125739.3132613","url":null,"abstract":"This paper presents a study that analyzes the effects of a robot's gaze hints on children's performance in a card-matching game. We conducted a within-subjects study, in which children played the card-matching game \"Memory\" in the presence of a robot tutor in two sessions. In one session, the robot gave hints to help the child find matching cards by looking at the correct match; in the other session, the robot only looked at the child and did not give any help. Children's performance was measured using the number of tries and the overall time needed to complete the game. Our findings show that the gaze hints (help condition) made the matching task significantly easier and that children used significantly fewer tries than without help.","PeriodicalId":346669,"journal":{"name":"Proceedings of the 5th International Conference on Human Agent Interaction","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134227265","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Proposal of a Model to Determine the Attention Target for an Agent in Group Discussion with Non-verbal Features","authors":"Seiya Kimura, Hung-Hsuan Huang, Qi Zhang, S. Okada, Naoki Ohta, K. Kuwabara","doi":"10.1145/3125739.3125775","DOIUrl":"https://doi.org/10.1145/3125739.3125775","url":null,"abstract":"In recent years, companies have been seeking communication skills in their employees. More and more companies adopt group discussions in recruitment to evaluate the applicants' communication skills. However, the opportunity to improve communication skills in group discussion is limited due to the lack of partners. To address this issue, our ongoing project aims to build a virtual agent or a robot that can participate in group discussions, so that its users can repeatedly practice group discussion with it. In this paper, we propose models for directing the agent's attention toward the other participants in three situations: when the agent is speaking, when the agent is listening, and when no participant is speaking. First, we gathered a data corpus of the discussions of 10 four-person groups. We then used low-level non-verbal features, including the attention of the other participants, voice prosody, head movements, and speech turns, extracted from the 10-hour corpus to train support vector machine models that determine the agent's attention on the other participants or on the material. The F-measures of the detection models range between 0.4 and 0.6.","PeriodicalId":346669,"journal":{"name":"Proceedings of the 5th International Conference on Human Agent Interaction","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126562664","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Development of Evaluation Indexes for Human-Centered Design of a Wearable Robot Arm","authors":"Koki Nakabayashi, Yukiko Iwasaki, H. Iwata","doi":"10.1145/3125739.3125763","DOIUrl":"https://doi.org/10.1145/3125739.3125763","url":null,"abstract":"A new type of wearable robot that assists the user with additional arms has been presented in recent years. Previous studies have reported that wearable robot arms allow the wearer to perform multiple tasks simultaneously. Although concept designs have been qualitatively discussed, quantitative consideration in the design phase of wearable robot arms is necessary to improve their safety and operational efficiency. In order to evaluate the man-machine cooperativeness between users and wearable robot arms, the present paper proposes three evaluation indexes: workspace extensiveness, cooperativeness, and invasiveness. The proposed evaluation indexes were defined according to calculations of the common domain of the human workspace and the robot arm workspace. The calculation results suggest a design tradeoff between work efficiency and workspace invasiveness. The proposed indexes can be utilized as a new parameter for evaluating human-robot interactions when designing wearable robot arms.","PeriodicalId":346669,"journal":{"name":"Proceedings of the 5th International Conference on Human Agent Interaction","volume":"77 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115970360","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Comparison of Behaviour-Based Architectures for a Collaborative Package Delivery Task","authors":"Melanie Remmels, Nicolás Navarro, S. Wermter","doi":"10.1145/3125739.3125764","DOIUrl":"https://doi.org/10.1145/3125739.3125764","url":null,"abstract":"A comparison between behavioural architectures, specifically a BDI architecture and a finite-state machine, for a collaborative package delivery system is presented. The system should assist a user in handling packages in cluttered environments. The entire system is built using open-source solutions for modules including speech recognition, person detection and tracking, and navigation. For the comparison, we use three criteria, namely, a static implementation-based comparison, a dynamic comparison, and a qualitative comparison. Based on our results, we provide experimental evidence that supports the theoretical consensus about the domain of applicability of both BDI architectures and finite-state machines. However, we can neither support nor discourage either of the tested architectures for the particular case of the collaborative package delivery scenario, due to the non-overlapping strengths and weaknesses of the two approaches. Finally, we outline future improvements to the system itself as well as to the comparison of both behavioural architectures.","PeriodicalId":346669,"journal":{"name":"Proceedings of the 5th International Conference on Human Agent Interaction","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114888827","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Book Introduction Robot Designed by Children for Promoting Interest in Reading","authors":"Takuya Sato, Yusuke Kudo, Hirotaka Osawa","doi":"10.1145/3125739.3125740","DOIUrl":"https://doi.org/10.1145/3125739.3125740","url":null,"abstract":"With the progress of several technologies, robots have been developed for educational support. However, these educational robots face the problem that children gradually lose their motivation for learning as they get used to interacting with the robots. To address the problem of maintaining children's interest, the authors propose a new interaction method with an agent (user-generated agent: UGA) for an educational system with robots, in which children themselves design the agent's content. With the theme of introducing recommended books, based on actual educational approaches, we construct a UGA system with an agent interposed between the child creator and the child listener and conduct field research at an elementary school. The results reveal that the robot introduced books 17.9 times per day on average, that children took action to pick up books after receiving introductions, and that introduction methods with non-verbal expressions were effective. The UGA designer was not used much in the early stages of the research. The authors improved the UGA designer so that children designed four new items of content despite the short period. In a method that allows children to design content, it is important to hide the name of the child who created it.","PeriodicalId":346669,"journal":{"name":"Proceedings of the 5th International Conference on Human Agent Interaction","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114694015","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Competitive Agents for Intelligent Home Automation","authors":"T. Michalski, M. Pohling, Patrick Holthaus","doi":"10.1145/3125739.3132616","DOIUrl":"https://doi.org/10.1145/3125739.3132616","url":null,"abstract":"Technologies that aim to achieve intelligent automation in smart homes typically involve either trigger-action pairs or machine learning. These, however, are often complex to configure or hard to comprehend for the user. To maximize automation efficiency while keeping the configuration simple and the effects comprehensible, we thus explore an alternative agent-based approach. With the help of a survey, we put together a set of intelligent agents that act autonomously in the environment. Conflicts between behaviors, identified with a secondary study, are thereby resolved with a competitive combination of agents. We finally present the draft of a user interface that allows for individual configuration of all agents.","PeriodicalId":346669,"journal":{"name":"Proceedings of the 5th International Conference on Human Agent Interaction","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115229071","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Stationery Holder Robot that Encourages Office Workers to Tidy Their Desks","authors":"A. Ogasawara, M. Gouko","doi":"10.1145/3125739.3132581","DOIUrl":"https://doi.org/10.1145/3125739.3132581","url":null,"abstract":"To increase efficiency at work, it is important to keep the work space tidy. In this study, we propose a stationery holder robot to improve deskwork efficiency by reducing clutter. The robot is developed based on human-robot interactions, and it vibrates to remind and encourage workers to tidy their desks. First, we explain the concept of the robot and introduce a prototype. Then, we report the results of a preliminary experiment conducted to investigate the participants' intention to tidy up.","PeriodicalId":346669,"journal":{"name":"Proceedings of the 5th International Conference on Human Agent Interaction","volume":"123 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121362072","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}