A Model of Social Explanations for a Conversational Movie Recommendation System

Florian Pecune, Shruti Murali, Vivian Tsai, Yoichi Matsuyama, Justine Cassell
{"title":"会话式电影推荐系统的社会解释模型","authors":"Florian Pecune, Shruti Murali, Vivian Tsai, Yoichi Matsuyama, Justine Cassell","doi":"10.1145/3349537.3351899","DOIUrl":null,"url":null,"abstract":"A critical aspect of any recommendation process is explaining the reasoning behind each recommendation. These explanations can not only improve users' experiences, but also change their perception of the recommendation quality. This work describes our human-centered design for our conversational movie recommendation agent, which explains its decisions as humans would. After exploring and analyzing a corpus of dyadic interactions, we developed a computational model of explanations. We then incorporated this model in the architecture of a conversational agent and evaluated the resulting system via a user experiment. Our results show that social explanations can improve the perceived quality of both the system and the interaction, regardless of the intrinsic quality of the recommendations.","PeriodicalId":188834,"journal":{"name":"Proceedings of the 7th International Conference on Human-Agent Interaction","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"36","resultStr":"{\"title\":\"A Model of Social Explanations for a Conversational Movie Recommendation System\",\"authors\":\"Florian Pecune, Shruti Murali, Vivian Tsai, Yoichi Matsuyama, Justine Cassell\",\"doi\":\"10.1145/3349537.3351899\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"A critical aspect of any recommendation process is explaining the reasoning behind each recommendation. These explanations can not only improve users' experiences, but also change their perception of the recommendation quality. This work describes our human-centered design for our conversational movie recommendation agent, which explains its decisions as humans would. After exploring and analyzing a corpus of dyadic interactions, we developed a computational model of explanations. We then incorporated this model in the architecture of a conversational agent and evaluated the resulting system via a user experiment. 
Our results show that social explanations can improve the perceived quality of both the system and the interaction, regardless of the intrinsic quality of the recommendations.\",\"PeriodicalId\":188834,\"journal\":{\"name\":\"Proceedings of the 7th International Conference on Human-Agent Interaction\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-09-25\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"36\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 7th International Conference on Human-Agent Interaction\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3349537.3351899\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 7th International Conference on Human-Agent Interaction","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3349537.3351899","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 36

Abstract

A critical aspect of any recommendation process is explaining the reasoning behind each recommendation. These explanations can not only improve users' experiences, but also change their perception of the recommendation quality. This work describes our human-centered design for our conversational movie recommendation agent, which explains its decisions as humans would. After exploring and analyzing a corpus of dyadic interactions, we developed a computational model of explanations. We then incorporated this model in the architecture of a conversational agent and evaluated the resulting system via a user experiment. Our results show that social explanations can improve the perceived quality of both the system and the interaction, regardless of the intrinsic quality of the recommendations.
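The abstract does not detail how the explanation model is implemented, so the following minimal Python sketch is only an illustration of the general idea of a socially framed explanation: justifying a recommendation by appealing to conversational common ground rather than to the recommender's internal score. The `Recommendation` fields and the `social_explanation` template are hypothetical assumptions, not the authors' implementation.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Recommendation:
    """Hypothetical output of an underlying movie recommender."""
    title: str
    score: float                              # predicted rating (not shown to the user)
    shared_likes: List[str] = field(default_factory=list)  # preferences the user voiced in the dialogue


def social_explanation(rec: Recommendation) -> str:
    """Wrap a raw recommendation in a socially framed justification.

    Assumed template: reference what the user said they enjoyed
    ("common ground") the way a friend would, instead of exposing
    the recommender's confidence score.
    """
    if rec.shared_likes:
        ground = ", ".join(rec.shared_likes)
        return (f"Since you told me you enjoyed {ground}, "
                f"I think you'd really like {rec.title}.")
    # Fall back to a neutral, system-style explanation when no
    # common ground has been established in the conversation yet.
    return f"{rec.title} is highly rated by viewers with similar tastes."


if __name__ == "__main__":
    rec = Recommendation(title="Arrival", score=4.6,
                         shared_likes=["Interstellar", "slow-burn sci-fi"])
    print(social_explanation(rec))
```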