
Proceedings of the 7th International Conference on Human-Agent Interaction: Latest Publications

Effect of an Educational Support Robot Displaying Utterance Contents on a Learning System
Pub Date : 2019-09-25 DOI: 10.1145/3349537.3352782
Shunsuke Shibata, Felix Jimenez, K. Murakami
Recently, there have been advances in the research and development of educational support robots. Previous studies reported that a problem with these robots is that, as learning progresses, learners lose interest in collaborative learning with them. This paper therefore reports a method to maintain learners' interest in collaborative learning, in which the robot alternately solves problems with the learner. Moreover, this study investigates the impression made by collaborative learning with robots that display utterance contents on the learning system's monitor. The results of this experiment indicated that a robot using the proposed model leaves a good impression on learners.
Citations: 1
Analyzing Eye Movements in Interview Communication with Virtual Reality Agents
Pub Date : 2019-09-25 DOI: 10.1145/3349537.3351889
Fuhui Tian, S. Okada, K. Nitta
In human-agent interaction, the emotions and gestures humans express when interacting with agents are high-level personality traits that quantify human attitudes, intentions, motivations, and behaviors. Virtual reality provides a chance to interact with virtual agents in a more immersive way. In this paper, we present a computational framework to analyze human eye movements using a virtual reality system in a job-interview scene. First, we developed a remote interview system using virtual agents and implemented it on a virtual reality headset. Second, by tracking eye movements and collecting other multimodal data, the system can better analyze human personality traits in interview communication with virtual agents and better support training of people's communication skills. In experiments, we analyzed the relationship between eye-gaze features and interview performance annotated by human experts. Experimental results showed acceptable accuracy for the single modality of eye movement in predicting eye contact and overall performance in job interviews.
Citations: 5
Towards Digitally-Mediated Sign Language Communication
Pub Date : 2019-09-25 DOI: 10.1145/3349537.3352794
Kalin Stefanov, M. Bono
This paper presents our efforts towards building an architecture for digitally-mediated sign language communication. The architecture is based on a client-server model and enables a near real-time recognition of sign language signs on a mobile device. The paper describes the two main components of the architecture, a recognition engine (server-side) and a mobile application (client-side), and outlines directions for future work.
Citations: 2
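The client-server split described in the abstract can be sketched minimally as follows. This is our own illustration, not code from the paper: the JSON message shape, the stub `recognize` engine, and the placeholder label are all assumptions, and a `socketpair` stands in for the mobile device's network link.

```python
import json
import socket
import threading

def recognize(frames):
    """Recognition engine stub (server side).

    A real engine would run a trained sign classifier over the
    keypoint frames; the label here is a placeholder."""
    return {"sign": "HELLO", "n_frames": len(frames)}

def serve(conn):
    # Read one JSON request with frames, reply with a label.
    data = conn.recv(65536)
    request = json.loads(data.decode())
    conn.sendall(json.dumps(recognize(request["frames"])).encode())
    conn.close()

# The mobile app (client side) captures frames and sends them over
# the connection; socketpair stands in for the network.
server_sock, client_sock = socket.socketpair()
threading.Thread(target=serve, args=(server_sock,)).start()
client_sock.sendall(json.dumps({"frames": [[0.1, 0.2], [0.3, 0.4]]}).encode())
response = json.loads(client_sock.recv(65536).decode())
print(response["sign"], response["n_frames"])  # prints "HELLO 2"
```

In the paper's actual system the classification model and its transport would differ; the point here is only the near-real-time request/reply loop between a thin client and a recognition server.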
Hierarchical Affordance Discovery using Intrinsic Motivation
Pub Date : 2019-09-25 DOI: 10.1145/3349537.3351898
A. Manoury, S. Nguyen, Cédric Buche
To be capable of life-long learning in a real-life environment, robots have to tackle multiple challenges. One of them is relating the physical properties they observe in their environment to the interactions those properties afford. This skill, named affordance learning, is strongly related to embodiment and is mastered through each person's development: each individual learns affordances differently through their own interactions with their surroundings. Current methods for affordance learning usually either use fixed actions to learn affordances or focus on static setups involving a robotic arm. In this article, we propose an algorithm that uses intrinsic motivation to guide affordance learning for a mobile robot. This algorithm can autonomously discover, learn, and adapt interrelated affordances without pre-programmed actions. Once learned, these affordances may be used by the algorithm to plan sequences of actions to perform tasks of varying difficulty. We then present an experiment and analyze our system before comparing it with other approaches from reinforcement learning and affordance learning.
Citations: 21
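Intrinsic motivation of the kind the abstract mentions is often implemented as a learning-progress signal: the robot prefers tasks on which its prediction error is changing fastest. The sketch below is entirely our own illustration of that idea, not the paper's algorithm; the task names, window size, and cold-start rule are all assumptions.

```python
def learning_progress(errors, window=2):
    """Absolute change in recent mean error, a common
    intrinsic-motivation signal (not necessarily the paper's)."""
    if len(errors) < 2 * window:
        return float("inf")  # explore under-sampled tasks first
    recent = sum(errors[-window:]) / window
    older = sum(errors[-2 * window:-window]) / window
    return abs(older - recent)

def choose_task(history):
    # Pick the task whose error is changing fastest.
    return max(history, key=lambda t: learning_progress(history[t]))

history = {
    "push_object": [0.9, 0.8, 0.4, 0.3],   # rapid progress
    "reach_object": [0.5, 0.5, 0.5, 0.5],  # plateaued
}
print(choose_task(history))  # prints "push_object"
```

A plateaued task yields zero progress and is deprioritized, which is what lets such a learner move on to new affordances without a pre-programmed curriculum.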
Effects of Turn-Taking Dynamics Without Contingency: A Visual Interaction Experiment
Pub Date : 2019-09-25 DOI: 10.1145/3349537.3352778
Ryohei Irie, Takeshi Konno
This study aimed to realize natural interactions between humans and machines by experimentally investigating the effects of turn-taking dynamics during visual interactions between two persons. During the experiment, we provided an environment in which only two circles moved horizontally across a monitor screen. One circle was operated by a participant, while the other was operated by another participant or a computer. Results confirmed that participants could not clearly recognize the computer's actions when the computer used turn-taking dynamics to exchange leader and follower roles. This was true even when these dynamics had no contingency with the participants' movements.
Citations: 0
Calibrate My Smile: Robot Learning Its Facial Expressions through Interactive Play with Humans
Pub Date : 2019-09-25 DOI: 10.1145/3349537.3351890
Dino Ilic, Ivana Žužić, D. Brscic
Social robots often have expressive faces. However, it is not always clear how to design expressions that show a certain emotion. We present a method for a social robot to learn the emotional meaning of its own facial expressions, based on which it can automatically generate faces for any emotion. The robot collects data from an imitation game where humans are asked to mimic the robot's facial expression. The interacting person does not need to explicitly input the meaning of the robot's face so the interaction is natural. We show that humans can successfully recognise the emotions from the learned facial expressions.
Citations: 3
Pocketable-Bones: A Portable Robot Sharing Interests with User in the Breast Pocket
Pub Date : 2019-09-25 DOI: 10.1145/3349537.3352768
Ryosuke Mayumi, Naoki Ohshima, M. Okada
We propose a portable robot, named "Pocketable-Bones", that fits into a user's breast pocket and communicates with the user "side-by-side", which involves coordinating the direction in which the user is looking and the object of interest. In this paper, we discuss the development of a platform for the robot and the hardware configuration needed to establish the human-robot "side-by-side" communication. In our presentation, we will demonstrate the side-by-side communication with the robot and the participants can experience it.
Citations: 1
A Markovian Method for Predicting Trust Behavior in Human-Agent Interaction
Pub Date : 2019-09-25 DOI: 10.1145/3349537.3351905
D. Pynadath, Ning Wang, Sreekar Kamireddy
Trust calibration is critical to the success of human-agent interaction (HAI). However, individual differences are ubiquitous in people's trust relationships with autonomous systems. To help its heterogeneous human teammates calibrate their trust in it, an agent must first model them dynamically as individuals, rather than communicating with them all in the same manner. It can then generate expectations of its teammates' behavior and optimize its own communication based on the current state of its trust relationship with them. In this work, we examine how an agent can generate accurate expectations given observations of only the teammate's trust-related behaviors (e.g., did the person follow or ignore its advice?). In addition to this limited input, we also seek a specific output: accurately predicting the human teammate's future trust behavior (e.g., will the person follow or ignore my next suggestion?). In this investigation, we construct a model capable of generating such expectations using data gathered in a human-subject study of behavior in a simulated human-robot interaction (HRI) scenario. We first analyze the ability of measures from a pre-survey of trust-related traits to predict subsequent trust behaviors. As the interaction progresses, however, this effect is dwarfed by direct experience. We therefore analyze the ability of sequences of the teammate's prior behavior to predict subsequent trust behaviors. Such behavioral sequences have been shown to be indicative of the subjective beliefs of other teammates, and we show here that they have predictive power as well.
Citations: 14
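A first-order Markov model over follow/ignore behaviors is one minimal way to read the abstract's idea of predicting the next trust behavior from sequences of prior behavior. The sketch below is our own illustration under that assumption; the class and method names are invented, and add-one smoothing is our choice, not necessarily the paper's.

```python
from collections import defaultdict

class MarkovTrustModel:
    """First-order Markov model over observed trust behaviors.

    States are the two behaviors named in the abstract: 'follow'
    (the person followed the agent's advice) and 'ignore'.
    Transition probabilities are estimated from counts with
    add-one smoothing."""

    def __init__(self, states=("follow", "ignore")):
        self.states = states
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, sequence):
        # Count transitions between consecutive behaviors.
        for prev, nxt in zip(sequence, sequence[1:]):
            self.counts[prev][nxt] += 1

    def predict(self, last_behavior):
        # Most probable next behavior given the last one observed.
        totals = {s: self.counts[last_behavior][s] + 1 for s in self.states}
        return max(totals, key=totals.get)

model = MarkovTrustModel()
model.observe(["follow", "follow", "ignore", "follow", "follow"])
print(model.predict("follow"))  # prints "follow"
```

Conditioning only on the last observed behavior is the simplest Markovian choice; longer histories or pre-survey trait features could be folded into the state at the cost of more data per estimate.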
A Model of Social Explanations for a Conversational Movie Recommendation System
Pub Date : 2019-09-25 DOI: 10.1145/3349537.3351899
Florian Pecune, Shruti Murali, Vivian Tsai, Yoichi Matsuyama, Justine Cassell
A critical aspect of any recommendation process is explaining the reasoning behind each recommendation. These explanations can not only improve users' experiences, but also change their perception of the recommendation quality. This work describes our human-centered design for our conversational movie recommendation agent, which explains its decisions as humans would. After exploring and analyzing a corpus of dyadic interactions, we developed a computational model of explanations. We then incorporated this model in the architecture of a conversational agent and evaluated the resulting system via a user experiment. Our results show that social explanations can improve the perceived quality of both the system and the interaction, regardless of the intrinsic quality of the recommendations.
Citations: 36
The Social Psychology of Human-agent Interaction
Pub Date : 2019-09-25 DOI: 10.1145/3349537.3351909
J. Gratch
Designers of human-agent systems often assume that users interact with machines as if they are interacting with another person. As a consequence, fidelity to human behavior is often viewed as the gold standard for judging agent design, and theories of human social psychology are often accepted without question as a framework for informing human-agent interaction. This assumption was given strength by the pioneering work of Cliff Nass, showing that many of the effects studied within social psychology seem to apply to human-machine interaction. In this talk, I will illustrate that these social effects are much weaker than widely supposed, and that the differences in how people treat machines are arguably more interesting than the similarities. These differences can lead to novel insights into human social cognition and unique technological solutions to intractable social problems. I will discuss this in the context of our research on education and mental health. Thus, rather than copying human behavior, I will argue that HAI researchers should aim to transcend conventional forms of social interaction and work towards novel theoretical frameworks that address the novel psychology of human-agent interaction.
Citations: 3