Patrick Shafto, O. Nasraoui
Proceedings of the 10th ACM Conference on Recommender Systems, September 7, 2016. DOI: 10.1145/2959100.2959188
Human-Recommender Systems: From Benchmark Data to Benchmark Cognitive Models
We bring to the fore of the recommender systems research community an inconvenient truth about the current state of understanding of how recommender system algorithms and humans influence one another, both computationally and cognitively. Unlike the great variety of supervised machine learning algorithms, which traditionally rely on expert-provided labels and are typically used for decision making by experts, recommender systems rely specifically on data from non-expert or casual users and are meant to be used directly by these same non-expert users on an everyday basis. Furthermore, advances in online machine learning, data generation, and predictive model learning have become increasingly interdependent, such that each feeds on the others in an iterative cycle. Research in psychology suggests that people's choices are (1) contextually dependent and (2) dependent on interaction history. Thus, while standard methods of training and assessing recommender systems rely on benchmark datasets, we suggest that a critical step in the evolution of recommender systems is the development of benchmark models of human behavior that capture its contextual and dynamic aspects. It is important to emphasize that even extensive real-life user tests may not suffice to close this gap in benchmarking validity, because such tests typically focus on user satisfaction or engagement (clicks, sales, likes, etc.) with whatever the algorithm suggests, and thus ignore the human cognitive aspect. We conclude by highlighting the interdisciplinary implications of this endeavor.
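To make the proposal concrete, here is a minimal sketch of what a benchmark cognitive model might look like in place of a static benchmark dataset: a simulated user whose choice probabilities depend on context and on its own interaction history. The specific functional form (softmax/Luce choice with a contextual boost for recommended items and habituation to repeated exposures) and all parameter names are illustrative assumptions for this sketch, not a model specified in the paper.

```python
import math
import random


class BenchmarkUser:
    """Toy cognitive model of a user, for evaluating recommenders in simulation.

    Choices are (1) contextually dependent (recommended items get a utility
    boost) and (2) history dependent (repeated exposure reduces interest).
    All functional forms and parameters are illustrative assumptions.
    """

    def __init__(self, n_items, seed=0):
        self.rng = random.Random(seed)
        # Intrinsic (latent) taste for each item.
        self.base_utility = [self.rng.gauss(0.0, 1.0) for _ in range(n_items)]
        # Interaction history: how often each item has been chosen.
        self.exposure = [0] * n_items

    def choose(self, recommended, context_boost=1.0, habituation=0.5):
        """Sample one item via softmax (Luce) choice over context- and
        history-adjusted utilities, then update the interaction history."""
        def utility(i):
            boost = context_boost if i in recommended else 0.0
            return self.base_utility[i] + boost - habituation * self.exposure[i]

        weights = [math.exp(utility(i)) for i in range(len(self.base_utility))]
        r = self.rng.random() * sum(weights)
        acc = 0.0
        for i, w in enumerate(weights):
            acc += w
            if r <= acc:
                self.exposure[i] += 1  # history feeds back into future choices
                return i
        return len(weights) - 1  # numerical-edge fallback
```

A recommender evaluated against such a model, rather than a fixed dataset, is exposed to the iterative feedback cycle the abstract describes: what it recommends changes what the user chooses, which changes the data it learns from next.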