Personalizing with Human Cognitive Biases
Georgios Theocharous, Jennifer Healey, S. Mahadevan, Michele A. Saad
Adjunct Publication of the 27th Conference on User Modeling, Adaptation and Personalization, June 6, 2019
DOI: 10.1145/3314183.3323453
Human cognitive biases are numerous and well established. Due to inherent limitations in our knowledge of the world, and to computational constraints, our judgments and decisions do not rigidly adhere to the principle of maximizing expected utility. We frequently employ cognitive shortcuts, ignore relevant information, and make errors in how we store and retrieve items from memory. Human decisions are additionally influenced by moral, emotional, and cultural factors. People often perceive value in ways that differ markedly from well-established decision-theoretic frameworks, yet much of the work on personalization does not capture human cognitive biases. Our central hypothesis is that a new generation of recommendation systems can be designed by explicitly modeling human cognitive biases such as contrast, decoy, distinction, and framing. We are just now beginning to see explicit non-linear models of human risk perception being incorporated into machine learning algorithms, and we believe this trend will accelerate in the near future. In this paper we review today's recommendation systems, analyze their limitations, and argue that future recommendation systems should incorporate explicit models of human cognitive bias.
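The "explicit non-linear models of human risk perception" referred to in the abstract can be made concrete. A minimal sketch of the canonical such model, cumulative prospect theory's value and probability-weighting functions, is given below; this is an illustration of the general idea, not code from the paper, and the parameter values are the commonly cited Tversky–Kahneman estimates rather than anything fitted here.

```python
import math

# Commonly cited prospect-theory parameter estimates (illustrative, not from this paper).
ALPHA = 0.88   # diminishing sensitivity for gains
BETA = 0.88    # diminishing sensitivity for losses
LAMBDA = 2.25  # loss aversion: losses loom larger than equal-sized gains
GAMMA = 0.61   # curvature of the probability-weighting function

def prospect_value(x: float) -> float:
    """Subjective value of outcome x relative to a reference point at 0.

    Concave for gains, convex and steeper for losses -- a non-linear
    departure from the linear value assumed by expected utility.
    """
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * (-x) ** BETA

def prospect_weight(p: float) -> float:
    """Decision weight for probability p.

    Overweights small probabilities and underweights moderate-to-large
    ones, modeling distorted human risk perception.
    """
    return p ** GAMMA / (p ** GAMMA + (1.0 - p) ** GAMMA) ** (1.0 / GAMMA)

def prospect_utility(prospect: list[tuple[float, float]]) -> float:
    """Score a list of (probability, outcome) pairs under prospect theory,
    in contrast to classical expected value sum(p * x)."""
    return sum(prospect_weight(p) * prospect_value(x) for p, x in prospect)
```

A recommender could, for example, rank a risky discount offer by `prospect_utility` instead of expected value, so that a small chance of a large saving is weighted the way users actually perceive it.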