
Proceedings of the 5th International Conference on Human Agent Interaction: Latest Publications

Conversational Agent Learning Natural Gaze and Motion of Multi-Party Conversation from Example
Pub Date : 2017-10-27 DOI: 10.1145/3125739.3132607
Shuai Zou, Kento Kuzushima, Hironori Mitake, S. Hasegawa
Recent developments in robotics and virtual reality (VR) are making embodied agents familiar, and the social behaviors of embodied conversational agents are essential to creating mindful daily lives with them. In particular, natural nonverbal behaviors such as gaze and gesture movement are required. We propose a novel method to create an agent with human-like gaze as a listener in multi-party conversation, using a Hidden Markov Model (HMM) to learn the behavior from real conversation examples. The model can generate gaze reactions according to users' gaze and utterances. We implemented an agent with the proposed method and created a VR environment in which to interact with it. The proposed agent reproduced several features of gaze behavior in the example conversations. Results of an impression survey showed that at least one group of participants felt the proposed agent was similar to a human and better than conventional methods.
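The abstract gives no implementation details, so the following is only a minimal sketch of the core idea: estimating a discrete HMM over gaze targets from labeled example conversations and sampling a listener's gaze reaction from it. All state and cue names are hypothetical, and the count-based estimation stands in for whatever training procedure the paper actually uses.

```python
import numpy as np

# Hypothetical gaze targets (hidden states) and conversational cues (observations).
STATES = ["speaker", "other_listener", "away"]
CUES = ["user_speaking", "user_gazing_at_agent", "silence"]

def estimate_hmm(examples):
    """Count-based ML estimates of transition/emission matrices from
    labeled (gaze_state, cue) example sequences, with Laplace smoothing."""
    n_s, n_o = len(STATES), len(CUES)
    A = np.ones((n_s, n_s))          # transition counts
    B = np.ones((n_s, n_o))          # emission counts
    for seq in examples:
        for t, (s, o) in enumerate(seq):
            B[STATES.index(s), CUES.index(o)] += 1
            if t > 0:
                A[STATES.index(seq[t - 1][0]), STATES.index(s)] += 1
    return A / A.sum(axis=1, keepdims=True), B / B.sum(axis=1, keepdims=True)

def next_gaze(A, B, current_state, cue, rng):
    """Sample the agent's next gaze target given the current one and the
    observed cue: P(s' | s, o) is proportional to A[s, s'] * B[s', o]."""
    p = A[STATES.index(current_state)] * B[:, CUES.index(cue)]
    return STATES[rng.choice(len(STATES), p=p / p.sum())]

# A toy "example conversation": (agent gaze target, observed cue) per step.
examples = [[("speaker", "user_speaking"), ("speaker", "user_speaking"),
             ("other_listener", "silence"), ("speaker", "user_gazing_at_agent")]]
A, B = estimate_hmm(examples)
gaze = next_gaze(A, B, "speaker", "user_speaking", np.random.default_rng(0))
```

In a real system the sampled gaze target would drive the agent's eye animation each frame; the paper presumably learns richer state and observation spaces than this toy version.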
Citations: 0
Keynote Talk
Pub Date : 2017-10-27 DOI: 10.1145/3125739.3134523
A. Waibel
Dr. Alexander Waibel is a Professor of Computer Science at Carnegie Mellon University, Pittsburgh and at the Karlsruhe Institute of Technology, Germany. He is the director of the International Center for Advanced Communication Technologies (interACT). The Center works in a network with eight of the world's top research institutions. Its mission is to develop advanced machine learning algorithms to improve human-human and human-machine communication technologies. Prof. Waibel and his team pioneered many statistical and neural learning algorithms that made such communication breakthroughs possible. Most notably, the "Time-Delay Neural Network" (1987) (now also known as the "convolutional" neural network) is at the heart of many of today's AI technologies. System breakthroughs that followed suit included early multimodal dialog interfaces, the first speech translation system in Europe and the USA (1990/1991), the first simultaneous lecture interpretation system (2005), and Jibbigo, the first commercial speech translator on a phone (2009).
Citations: 0
Virtual Character Agent for Lowering Knowledge-sharing Barriers on Q&A Websites
Pub Date : 2017-10-27 DOI: 10.1145/3125739.3132591
Hao Yin, Keiko Yamamoto, Itaru Kuramoto, Y. Tsujino
With the development of Web 2.0 technology, Q&A websites have become one of the most common avenues for large-scale knowledge sharing. However, three types of barriers between questioners and respondents make knowledge sharing difficult: differences in the personality types of the questioners and respondents, lack of trust, and an arrogant or negative attitude exhibited by some questioners. In order to lower these barriers, we propose a Q&A mediator system with virtual character agents. In this system, a questioner asks questions through his/her own agent, and the respondents see the question from their agent. Each agent has four characteristics: personality traits similar to knowledge sharers, good looks, an affable personality, and a positive attitude. The results of a preliminary experiment indicated that the proposed system can improve users' motivation to answer questions.
Citations: 0
The Impact of Personalisation on Human-Robot Interaction in Learning Scenarios
Pub Date : 2017-10-27 DOI: 10.1145/3125739.3125756
Nikhil Churamani, Paul Anton, M. Brügger, Erik Fließwasser, Thomas Hummel, Julius Mayer, Waleed Mustafa, Hwei Geok Ng, Thi Linh Chi Nguyen, Quan Nguyen, Marcus Soll, S. Springenberg, Sascha S. Griffiths, Stefan Heinrich, Nicolás Navarro-Guerrero, Erik Strahl, Johannes Twiefel, C. Weber, S. Wermter
Advancements in Human-Robot Interaction involve robots being more responsive and adaptive to the human user they are interacting with. For example, robots model a personalised dialogue with humans, adapting the conversation to accommodate the user's preferences in order to allow natural interactions. This study investigates the impact of such personalised interaction capabilities of a human companion robot on its social acceptance, perceived intelligence and likeability in a human-robot interaction scenario. In order to measure this impact, the study makes use of an object learning scenario where the user teaches different objects to the robot using natural language. An interaction module is built on top of the learning scenario which engages the user in a personalised conversation before teaching the robot to recognise different objects. The two systems, i.e. with and without the interaction module, are compared with respect to how different users rate the robot on its intelligence and sociability. Although the system equipped with personalised interaction capabilities is rated lower on social acceptance, it is perceived as more intelligent and likeable by the users.
Citations: 55
Speech-to-Gesture Generation: A Challenge in Deep Learning Approach with Bi-Directional LSTM
Pub Date : 2017-10-17 DOI: 10.1145/3125739.3132594
Kenta Takeuchi, Dai Hasegawa, S. Shirakawa, Naoshi Kaneko, H. Sakuta, K. Sumi
In this research, we take a first step in generating motion data for gestures directly from speech features. Such a method can make creating gesture animations for Embodied Conversational Agents much easier. We implemented a model using a Bi-Directional LSTM that takes phonemic features from speech audio data as input and outputs time-sequence data of bone-joint rotations. We assessed the validity of the predicted gesture motion data by evaluating the final loss value of the network, and evaluated impressions of the predicted gestures by comparing them with the actual motion data that accompanied the input audio and with motion data that accompanied different audio. The results showed that the prediction accuracy of the LSTM model was better than that of a simple RNN model. In contrast, the impression evaluation of the predicted gestures was rated lower than the original and mismatched gestures, although individually some predicted gestures were rated to the same degree as the mismatched gestures.
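The paper's layer sizes and feature pipeline are not given here, so the following is only an untrained, dimensional sketch of the general architecture the abstract describes: a bi-directional LSTM mapping per-frame phonemic speech features to per-frame joint rotations. All dimensions (13 features per frame, 12 joints) are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_pass(X, W, U, b, H):
    """Run one LSTM direction over a (T, D) sequence; returns (T, H) hidden states."""
    T = X.shape[0]
    h, c = np.zeros(H), np.zeros(H)
    out = np.zeros((T, H))
    for t in range(T):
        z = W @ X[t] + U @ h + b              # stacked gate pre-activations, shape (4H,)
        i, f, g, o = np.split(z, 4)           # input, forget, cell, output gates
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)
        h = o * np.tanh(c)
        out[t] = h
    return out

rng = np.random.default_rng(0)
T, D, H, N_JOINTS = 20, 13, 32, 12            # hypothetical sequence/feature/model sizes
X = rng.standard_normal((T, D))               # stand-in for per-frame phonemic features

# Random (untrained) parameters for the forward and backward cells.
params = [(rng.standard_normal((4 * H, D)) * 0.1,
           rng.standard_normal((4 * H, H)) * 0.1,
           np.zeros(4 * H)) for _ in range(2)]

fwd = lstm_pass(X, *params[0], H)
bwd = lstm_pass(X[::-1], *params[1], H)[::-1]            # reversed-time direction
features = np.concatenate([fwd, bwd], axis=1)            # (T, 2H) bi-directional states
W_out = rng.standard_normal((2 * H, 3 * N_JOINTS)) * 0.1
rotations = features @ W_out                             # per-frame joint rotations, (T, 36)
```

In practice such a model would be trained with a framework like PyTorch or TensorFlow against motion-capture targets; the sketch only makes the input/output shapes of the mapping concrete.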
Citations: 33
Don't Judge a Book by its Cover: A Study of the Social Acceptance of NAO vs. Pepper
Pub Date : 2017-10-17 DOI: 10.1145/3125739.3132583
Sofia Thunberg, Sam Thellman, T. Ziemke
In an explorative study concerning the social acceptance of two specific humanoid robots, the experimenter asked participants (N = 36) to place a book in an adjacent room. Upon entering the room, participants were confronted by a NAO or a Pepper robot expressing persistent opposition against the idea of placing the book in the room. On average, 72% of participants facing NAO complied with the robot's requests and returned the book to the experimenter. The corresponding figure for the Pepper robot was 50%, which shows that the two robot morphologies had a different effect on participants' social behavior. Furthermore, results from a post-study questionnaire (GODSPEED) indicated that participants perceived NAO as more likable, intelligent, safe and lifelike than Pepper. Moreover, participants used significantly more positive words and fewer negative words to describe NAO than Pepper in an open-ended interview. There was no statistically significant difference between conditions in participants' negative attitudes toward robots in general, as assessed using the NARS questionnaire.
Citations: 13
Personal Influences on Dynamic Trust Formation in Human-Agent Interaction
Pub Date : 2017-10-17 DOI: 10.1145/3125739.3125749
Hsiao-Ying Huang, Masooda N. Bashir
The development of automated technologies in our daily life has transformed the role of human operators from a controller to a teammate who shares control with automated agents. However, this 'teammate' relationship between humans and automation raises an important but challenging research question regarding the formation of human-agent trust. Considering that the formation of human-agent trust is a dynamic and sophisticated process involving human factors, this study conducted a two-phase online experiment to examine personal influences on users' trust propensity and their trust formation in human-agent interactions. Our findings revealed distinctive personal influences on dispositional trust and the formation of human-agent trust at different stages. We found that users who exhibit higher trust propensities in humans also develop higher trust toward automated agents in initial stages. This study, as the first of its kind, not only fills the gap of knowledge about personal influences on human-agent trust, but also offers opportunities to enhance the future design of automated agent systems.
Citations: 12
Prediction of Next-Utterance Timing using Head Movement in Multi-Party Meetings
Pub Date : 2017-10-17 DOI: 10.1145/3125739.3125765
Ryo Ishii, Shiro Kumano, K. Otsuka
To build a conversational interface wherein an agent system can smoothly communicate with multiple persons, it is imperative to know how the timing of speaking is decided. In this research, we explore participants' head movements as an easy-to-measure nonverbal behavior for predicting the next-utterance timing, i.e., the interval between the end of the current speaker's utterance and the start of the next speaker's utterance, during turn-changing in multi-party meetings. First, we collected data on participants' six-degree-of-freedom head movements and utterances in four-person meetings. The analysis revealed that the amount of head movement of the current speaker, next speaker, and listeners has a positive correlation with the utterance interval. Moreover, the degree of synchrony of head position and posture between the current speaker and next speaker is negatively correlated with the utterance interval. On the basis of these findings, we used the head movements and their synchrony as feature values and devised several prediction models. A model using all features performed best and was able to predict the next-utterance timing well. Therefore, this research revealed that participants' head movement is useful for predicting the next-utterance timing during turn-changing in multi-party meetings.
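The abstract does not specify which prediction models were used, so the following is only a generic sketch of the feature-to-interval mapping it describes: an ordinary-least-squares regressor from head-movement and synchrony features to the utterance interval. The feature names and the synthetic data are entirely fabricated for illustration; only the signs of the relationships echo the correlations reported above.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical per-turn-change features:
# [speaker_movement, next_speaker_movement, listener_movement,
#  position_synchrony, posture_synchrony]
X = rng.random((200, 5))
# Synthetic targets: intervals grow with movement amounts and shrink with
# synchrony, matching the signs of the correlations reported in the abstract.
y = (0.5 + X[:, :3].sum(axis=1)
     - 0.8 * X[:, 3:].sum(axis=1)
     + 0.05 * rng.standard_normal(200))

# One simple "prediction model": ordinary least squares with a bias term.
A = np.hstack([X, np.ones((200, 1))])
w, *_ = np.linalg.lstsq(A, y, rcond=None)
predicted_interval = A[:5] @ w        # predicted utterance intervals for 5 turn-changes
```

The fitted weights recover the built-in signs (positive for movement, negative for synchrony), which is the basic sanity check one would run before trying richer models.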
Citations: 14
Exploring Gaze-Activated Object With the CoffeePet
Pub Date : 2017-10-17 DOI: 10.1145/3125739.3132578
S. A. Anas, G.W.M. Rauterberg, Jun Hu
The feeling of being looked back at when we look at someone, while that someone is also aware that we are looking at him/her, is fundamental to social interaction. This situation can only occur if both realize the presence of each other. Based on these theories, this research is motivated by exploring the possibility of designing a gaze-sensitive object - how people can relate to an object by depending on their eyes only. In this paper, we present a gaze-activated coffee machine called the CoffeePet, fitted with two small OLED screens that display animated eyes. These eyes are responsive to the user's gaze behavior. Furthermore, we used a sensor module (HVC Omron) to detect and track the eyes of a user in real time. It gives the user the ability to interact with the CoffeePet simply by moving their eyes. The CoffeePet is also able to automatically brew and pour coffee from its spout if it feels appropriate during the interaction. We further describe the system, the modification of the real product, and the experimental plan to compare users' perception of the CoffeePet's eyes and to investigate whether users realize that their gaze behavior influences how the CoffeePet reacts.
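The paper's control logic is not given here, so the following is only a minimal sketch of how such a gaze-reactive loop could be structured: the animated eyes respond to detected gaze each frame, and sustained mutual gaze triggers brewing. The frame format, state names, and the stand-in detector are all hypothetical; the real HVC Omron sensor API differs.

```python
DWELL_FRAMES = 30        # hypothetical: ~1 s of sustained gaze at 30 fps

def detect_gaze(frame):
    """Stand-in for the eye-tracking sensor: True when the user's gaze
    is on the CoffeePet. (Simulated; the real sensor API differs.)"""
    return frame.get("gaze_on_agent", False)

def step(state, frame):
    """One tick of a minimal gaze-reactive loop: the eyes follow the
    user's gaze, and sustained mutual gaze triggers brewing."""
    if detect_gaze(frame):
        state["dwell"] += 1
        state["eyes"] = "look_at_user"
    else:
        state["dwell"] = 0
        state["eyes"] = "idle_blink"
    if state["dwell"] >= DWELL_FRAMES and not state["brewing"]:
        state["brewing"] = True      # brew and pour after sustained mutual gaze
    return state

state = {"dwell": 0, "eyes": "idle_blink", "brewing": False}
for _ in range(40):                  # simulate 40 frames of the user gazing at the machine
    state = step(state, {"gaze_on_agent": True})
```

The dwell threshold is the usual way to distinguish a deliberate look from a passing glance in gaze interfaces, which is presumably the kind of "appropriate moment" the abstract alludes to.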
Citations: 0
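The gaze-reactive behavior described in the CoffeePet abstract — eyes that respond when the user looks at the machine, and coffee that is brewed once the interaction feels appropriate — can be sketched as a small state machine. The class, method names, and brew threshold below are hypothetical illustrations, not the authors' implementation; the real system uses an HVC Omron module for eye detection.

```python
class CoffeePetSketch:
    """Illustrative state machine for a gaze-reactive appliance.

    On each sensor tick, the machine decides which eye animation to
    show; sustained mutual gaze eventually triggers brewing. The
    threshold value is an assumption for the sketch.
    """

    def __init__(self, brew_threshold=3.0):
        self.brew_threshold = brew_threshold  # seconds of sustained gaze before brewing
        self.mutual_gaze_start = None         # time the current gaze episode began
        self.brewing = False

    def update(self, user_is_looking, now):
        """Advance one tick; return the eye animation to display."""
        if not user_is_looking:
            self.mutual_gaze_start = None     # gaze broken: reset the episode
            return "idle"                     # eyes wander
        if self.mutual_gaze_start is None:
            self.mutual_gaze_start = now      # a new gaze episode starts
        if now - self.mutual_gaze_start >= self.brew_threshold and not self.brewing:
            self.brewing = True
            return "brew"                     # eyes widen, coffee pours
        return "attend"                       # eyes meet the user's gaze
```

Driving `update` from a real-time eye-detection loop would reproduce the interaction pattern the paper describes: the appliance signals awareness of the user's gaze before acting.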
Autonomous Self-Explanation of Behavior for Interactive Reinforcement Learning Agents
Pub Date : 2017-10-17 DOI: 10.1145/3125739.3125746
Yosuke Fukuchi, Masahiko Osawa, H. Yamakawa, M. Imai
In cooperative work, workers must know how their co-workers behave. However, an agent's policy, embedded in a statistical machine learning model, is hard to understand and requires much time and knowledge to comprehend. It is therefore difficult for people to predict the behavior of machine learning robots, which makes human-robot cooperation challenging. In this paper, we propose Instruction-based Behavior Explanation (IBE), a method for explaining an autonomous agent's future behavior. With IBE, an agent can autonomously acquire the expressions needed to explain its own behavior by reusing the instructions given by a human expert to accelerate the learning of the agent's policy. IBE also enables a developmental agent, whose policy may change during cooperation, to explain its own behavior with sufficient time granularity.
{"title":"Autonomous Self-Explanation of Behavior for Interactive Reinforcement Learning Agents","authors":"Yosuke Fukuchi, Masahiko Osawa, H. Yamakawa, M. Imai","doi":"10.1145/3125739.3125746","DOIUrl":"https://doi.org/10.1145/3125739.3125746","url":null,"abstract":"In cooperation, the workers must know how co-workers behave. However, an agent's policy, which is embedded in a statistical machine learning model, is hard to understand, and requires much time and knowledge to comprehend. Therefore, it is difficult for people to predict the behavior of machine learning robots, which makes Human Robot Cooperation challenging. In this paper, we propose Instruction-based Behavior Explanation (IBE), a method to explain an autonomous agent's future behavior. In IBE, an agent can autonomously acquire the expressions to explain its own behavior by reusing the instructions given by a human expert to accelerate the learning of the agent's policy. IBE also enables a developmental agent, whose policy may change during the cooperation, to explain its own behavior with sufficient time granularity.","PeriodicalId":346669,"journal":{"name":"Proceedings of the 5th International Conference on Human Agent Interaction","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122603962","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 27
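The core idea of IBE as summarized in the abstract — reusing expert instructions as the vocabulary for an agent's self-explanations — can be illustrated with a toy lookup model. The state discretization, storage scheme, and fallback message below are assumptions for the sketch; the paper's actual mapping from agent state to explanation is more involved.

```python
from collections import defaultdict


class IBESketch:
    """Toy sketch of Instruction-based Behavior Explanation (IBE).

    During training, instructions heard from a human expert are counted
    per (discretized) agent state; at run time the agent reuses the most
    frequently heard instruction for its current state as a
    self-explanation of what it is about to do.
    """

    def __init__(self):
        # state -> {instruction phrase -> number of times heard}
        self.memory = defaultdict(lambda: defaultdict(int))

    def observe_instruction(self, state, instruction):
        """Record an expert instruction heard while in `state`."""
        self.memory[state][instruction] += 1

    def explain(self, state):
        """Reuse the most frequent instruction for `state` as an explanation."""
        heard = self.memory.get(state)
        if not heard:
            return "no explanation learned yet"
        return max(heard, key=heard.get)
```

Because the explanation vocabulary comes from the expert's own instructions, the explanations stay intelligible to the human co-worker even as the agent's underlying policy changes.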
Journal: Proceedings of the 5th International Conference on Human Agent Interaction