Personalizing with Human Cognitive Biases

Georgios Theocharous, Jennifer Healey, S. Mahadevan, Michele A. Saad
DOI: 10.1145/3314183.3323453
Journal: Adjunct Publication of the 27th Conference on User Modeling, Adaptation and Personalization
Publication date: 2019-06-06
Citations: 11

Abstract

Human cognitive biases are numerous and well established. Due to inherent limitations in our knowledge of the world, and computational constraints, our judgments and decisions do not rigidly adhere to the principle of maximizing expected utility. We frequently employ cognitive shortcuts, ignoring relevant information, and make errors in how we store and retrieve items from memory. Human decisions are additionally influenced by moral, emotional and cultural parameters. People often perceive value in a way that is very different from well-established decision-theoretic frameworks, but much of the work on personalization does not capture human cognitive biases. Our central hypothesis is that a new generation of recommendation systems can be designed by explicitly modeling human cognitive biases such as contrast, decoy, distinction, and framing. We are just now beginning to see explicit non-linear models of human risk perception being incorporated into machine learning algorithms, and we believe this trend will accelerate in the near future. In this paper we review today's recommendation systems, give an analysis of their limitations and make an argument for why future recommendation systems should incorporate explicit models of human cognitive bias.
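The "explicit non-linear models of human risk perception" the abstract refers to can be illustrated with prospect theory (Tversky & Kahneman, 1992), a standard such model. The sketch below is not from the paper; it shows how a non-linear value function and probability weighting function score risky options differently from expected value, using the parameter estimates from the 1992 paper.

```python
import math

# Illustrative sketch (not from the paper): Tversky & Kahneman's (1992)
# cumulative prospect theory, one explicit non-linear model of human
# risk perception. Parameter values are their published estimates.

def value(x, alpha=0.88, lam=2.25):
    """Subjective value: concave for gains, convex and steeper for losses."""
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** alpha)

def weight(p, gamma=0.61):
    """Probability weighting: overweights small p, underweights large p."""
    return p ** gamma / ((p ** gamma + (1 - p) ** gamma) ** (1 / gamma))

def prospect_utility(outcomes):
    """Score a risky option given (probability, payoff) pairs."""
    return sum(weight(p) * value(x) for p, x in outcomes)

# Under these parameters, a sure $45 is preferred to a 50% chance at $100,
# even though expected value ($45 vs. $50) favors the gamble:
sure = prospect_utility([(1.0, 45)])
gamble = prospect_utility([(0.5, 100)])
```

A recommender that ranked risky options (e.g. deals with uncertain availability) by `prospect_utility` rather than expected value would, in the spirit of the paper's argument, order items closer to how people actually perceive them.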