
RepSys '13: Latest Publications

How to improve the statistical power of the 10-fold cross validation scheme in recommender systems
Pub Date: 2013-10-12 DOI: 10.1145/2532508.2532510
A. Košir, Ante Odic, M. Tkalcic
At the current stage of development of recommender systems (RS), evaluating competing approaches that yield similar performance in reproduced experiments is of crucial importance for directing further development toward the most promising direction. These comparisons are usually based on the 10-fold cross validation scheme. Since the compared performances are often similar to each other, statistical significance testing is indispensable in order not to be misled by randomly caused differences in achieved performance. For the same reason, when reproducing experiments on a different set of experimental data, the most powerful significance test should be applied. In this work we provide guidelines on how to achieve the highest power in comparisons of RS, and we demonstrate them on a comparison of RS performances when different variables are contextualized.
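The central idea of gaining power by pairing the compared systems on identical folds can be illustrated with a minimal sketch. The per-fold scores below are hypothetical, this is not the authors' exact procedure, and corrections for the dependence between overlapping folds (e.g. the Nadeau-Bengio corrected resampled t-test) are set aside here:

```python
# Minimal sketch: comparing two recommenders over the same 10 folds.
# A paired test is typically more powerful than an unpaired one because
# it removes the fold-to-fold variance shared by both systems.
import numpy as np
from scipy import stats

# Hypothetical per-fold RMSE scores for two recommenders on identical folds.
rmse_a = np.array([0.912, 0.905, 0.921, 0.917, 0.908,
                   0.915, 0.910, 0.919, 0.913, 0.907])
rmse_b = np.array([0.905, 0.899, 0.918, 0.910, 0.902,
                   0.911, 0.904, 0.915, 0.909, 0.901])

# Unpaired test: ignores the fold pairing and wastes power.
t_unpaired, p_unpaired = stats.ttest_ind(rmse_a, rmse_b)

# Paired test: exploits the shared folds; with similar performances this
# is often the difference between detecting a real effect and missing it.
t_paired, p_paired = stats.ttest_rel(rmse_a, rmse_b)

print(f"unpaired: t={t_unpaired:.2f}, p={p_unpaired:.4f}")
print(f"paired:   t={t_paired:.2f}, p={p_paired:.6f}")
```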
Citations: 12
Offline evaluation of recommender systems: all pain and no gain?
Pub Date: 2013-10-12 DOI: 10.1145/2532508.2532509
M. Levy
A large-scale offline evaluation -- with a big money prize attached -- established recommender systems as a niche discipline worth researching, and one where robust and reproducible experiments would be easy. But since then critiques within academia have shown up shortcomings in the most appealingly objective evaluation metrics, war stories from the commercial front line have suggested that correlation between offline metrics and bottom-line gains in production may be non-existent, and several subsequent academic competitions have come under fierce criticism from both advisors and participants. In this talk I will draw on practical experience at Last.fm and Mendeley, as well as insights from others, to offer some opinions about offline evaluation of recommender systems: whether we still need it at all, what value we can hope to draw from it, how best to do it if we have to, and how to make the experience less painful than it is right now.
Citations: 6
A comparative analysis of offline and online evaluations and discussion of research paper recommender system evaluation
Pub Date: 2013-10-12 DOI: 10.1145/2532508.2532511
J. Beel, Marcel Genzmehr, Stefan Langer, A. Nürnberger, Bela Gipp
Offline evaluations are the most common evaluation method for research paper recommender systems. However, no thorough discussion of the appropriateness of offline evaluations has taken place, despite some voiced criticism. We conducted a study in which we evaluated various recommendation approaches with both offline and online evaluations. We found that the results of offline and online evaluations often contradict each other. We discuss this finding in detail and conclude that offline evaluations may be inappropriate for evaluating research paper recommender systems in many settings.
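To make the offline/online contradiction concrete, here is a toy sketch in which an offline precision@k comparison and an online click-through comparison rank the same two algorithms in opposite order. All item IDs, click counts, and numbers are hypothetical and are not taken from the study:

```python
# Toy sketch: the same two recommenders ranked differently by an
# offline metric (precision@k) and an online metric (click-through rate).

def precision_at_k(recommended, relevant, k):
    """Offline: fraction of the top-k recommendations found in the held-out set."""
    hits = sum(1 for item in recommended[:k] if item in relevant)
    return hits / k

# Offline: held-out papers a user interacted with, plus each algorithm's top-5 list.
relevant = {"p3", "p7", "p9"}
offline_a = precision_at_k(["p3", "p7", "p1", "p2", "p4"], relevant, k=5)  # 0.4
offline_b = precision_at_k(["p3", "p1", "p2", "p4", "p5"], relevant, k=5)  # 0.2

# Online: click-through rates observed when each algorithm served real users.
online_a = 120 / 10_000   # clicks / impressions = 0.012
online_b = 180 / 10_000   # = 0.018

print(f"offline P@5: A={offline_a:.2f}  B={offline_b:.2f}")  # A wins offline
print(f"online CTR:  A={online_a:.3f}  B={online_b:.3f}")    # B wins online
```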
Citations: 141
Research paper recommender system evaluation: a quantitative literature survey
Pub Date: 2013-10-12 DOI: 10.1145/2532508.2532512
Joeran Beel, Stefan Langer, Marcel Genzmehr, Bela Gipp, Corinna Breitinger, A. Nürnberger
Over 80 approaches for academic literature recommendation exist today. The approaches were introduced and evaluated in more than 170 research articles, as well as patents, presentations and blogs. We reviewed these approaches and found most evaluations to contain major shortcomings. Of the approaches proposed, 21% were not evaluated. Among the evaluated approaches, 19% were not evaluated against a baseline. Of the user studies performed, 60% had 15 or fewer participants or did not report on the number of participants. Information on runtime and coverage was rarely provided. Due to these and several other shortcomings described in this paper, we conclude that it is currently not possible to determine which recommendation approaches for academic literature are the most promising. However, there is little value in the existence of more than 80 approaches if the best performing approaches are unknown.
Citations: 149
Toward identification and adoption of best practices in algorithmic recommender systems research
Pub Date: 2013-10-12 DOI: 10.1145/2532508.2532513
J. Konstan, G. Adomavicius
One of the goals of data-intensive research, in any field of study, is to grow knowledge over time as additional studies contribute to collective knowledge and understanding. Two steps are critical to making such research cumulative -- individual studies need to be documented thoroughly and conducted on data made available to others (to allow replication and meta-analysis), and the individual research needs to be carried out correctly, following standards and best practices for coding, missing data, algorithm choices, algorithm implementations, metrics, and statistics. This work aims to address a growing concern that the Recommender Systems research community (which is uniquely equipped to address many important challenges in electronic commerce, social networks, social media, and big data settings) is facing a crisis where a significant number of research papers lack the rigor and evaluation needed to be properly judged and, therefore, have little to contribute to collective knowledge. We advocate that this issue can be addressed through the development and dissemination (to authors, reviewers, and editors) of best-practice research methodologies, resulting in specific guidelines and checklists, as well as through tool development to support effective research. We also plan to assess the impact on the field with an eye toward supporting such efforts in other data-intensive specialties.
Citations: 37