Recommender Systems Algorithm Selection for Ranking Prediction on Implicit Feedback Datasets

Lukas Wegmeth, Tobias Vente, Joeran Beel
{"title":"Recommender Systems Algorithm Selection for Ranking Prediction on Implicit Feedback Datasets","authors":"Lukas Wegmeth, Tobias Vente, Joeran Beel","doi":"arxiv-2409.05461","DOIUrl":null,"url":null,"abstract":"The recommender systems algorithm selection problem for ranking prediction on\nimplicit feedback datasets is under-explored. Traditional approaches in\nrecommender systems algorithm selection focus predominantly on rating\nprediction on explicit feedback datasets, leaving a research gap for ranking\nprediction on implicit feedback datasets. Algorithm selection is a critical\nchallenge for nearly every practitioner in recommender systems. In this work,\nwe take the first steps toward addressing this research gap. We evaluate the\nNDCG@10 of 24 recommender systems algorithms, each with two hyperparameter\nconfigurations, on 72 recommender systems datasets. We train four optimized\nmachine-learning meta-models and one automated machine-learning meta-model with\nthree different settings on the resulting meta-dataset. Our results show that\nthe predictions of all tested meta-models exhibit a median Spearman correlation\nranging from 0.857 to 0.918 with the ground truth. We show that the median\nSpearman correlation between meta-model predictions and the ground truth\nincreases by an average of 0.124 when the meta-model is optimized to predict\nthe ranking of algorithms instead of their performance. Furthermore, in terms\nof predicting the best algorithm for an unknown dataset, we demonstrate that\nthe best optimized traditional meta-model, e.g., XGBoost, achieves a recall of\n48.6%, outperforming the best tested automated machine learning meta-model,\ne.g., AutoGluon, which achieves a recall of 47.2%.","PeriodicalId":501281,"journal":{"name":"arXiv - CS - Information Retrieval","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Information Retrieval","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.05461","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

The recommender systems algorithm selection problem for ranking prediction on implicit feedback datasets is under-explored. Traditional approaches in recommender systems algorithm selection focus predominantly on rating prediction on explicit feedback datasets, leaving a research gap for ranking prediction on implicit feedback datasets. Algorithm selection is a critical challenge for nearly every practitioner in recommender systems. In this work, we take the first steps toward addressing this research gap. We evaluate the NDCG@10 of 24 recommender systems algorithms, each with two hyperparameter configurations, on 72 recommender systems datasets. We train four optimized machine-learning meta-models and one automated machine-learning meta-model with three different settings on the resulting meta-dataset. Our results show that the predictions of all tested meta-models exhibit a median Spearman correlation ranging from 0.857 to 0.918 with the ground truth. We show that the median Spearman correlation between meta-model predictions and the ground truth increases by an average of 0.124 when the meta-model is optimized to predict the ranking of algorithms instead of their performance. Furthermore, in terms of predicting the best algorithm for an unknown dataset, we demonstrate that the best optimized traditional meta-model, e.g., XGBoost, achieves a recall of 48.6%, outperforming the best tested automated machine learning meta-model, e.g., AutoGluon, which achieves a recall of 47.2%.
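To make the evaluation protocol sketched in the abstract concrete, the following is a minimal illustration of one plausible setup: a meta-dataset of per-dataset meta-features and per-algorithm NDCG@10 scores, a leave-one-dataset-out loop in which an XGBoost regressor predicts each algorithm's performance on the held-out dataset, and the two reported metrics (Spearman correlation between predicted and ground-truth algorithm rankings, and recall of the best algorithm). The synthetic data, the one-regressor-per-algorithm formulation, and the hyperparameters are assumptions for illustration, not the paper's actual configuration.

```python
# Minimal sketch of a meta-learning evaluation for algorithm selection,
# assuming a leave-one-dataset-out protocol and synthetic data. Shapes,
# meta-features, and XGBoost settings are illustrative assumptions only.
import numpy as np
from scipy.stats import spearmanr
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
n_datasets, n_algorithms, n_meta_features = 72, 24, 10

# Synthetic meta-dataset: one meta-feature vector per dataset and one
# ground-truth NDCG@10 score per (dataset, algorithm) pair.
meta_features = rng.random((n_datasets, n_meta_features))
ndcg_scores = rng.random((n_datasets, n_algorithms))

spearman_per_dataset, best_algo_hits = [], 0
for test in range(n_datasets):
    train = [d for d in range(n_datasets) if d != test]
    predicted = np.empty(n_algorithms)
    for algo in range(n_algorithms):
        # One regressor per algorithm: predict its NDCG@10 on the
        # held-out dataset from that dataset's meta-features.
        model = XGBRegressor(n_estimators=50, max_depth=3, verbosity=0)
        model.fit(meta_features[train], ndcg_scores[train, algo])
        predicted[algo] = model.predict(meta_features[[test]])[0]

    # Spearman correlation between the predicted and true algorithm ranking.
    rho, _ = spearmanr(predicted, ndcg_scores[test])
    spearman_per_dataset.append(rho)

    # Recall of the best algorithm: does the meta-model's top pick match
    # the truly best algorithm on the held-out dataset?
    best_algo_hits += int(predicted.argmax() == ndcg_scores[test].argmax())

print(f"median Spearman: {np.median(spearman_per_dataset):.3f}")
print(f"best-algorithm recall: {best_algo_hits / n_datasets:.1%}")
```

On random data the Spearman correlation is near zero; the paper's reported 0.857 to 0.918 median correlations arise only with real meta-features and real per-algorithm performance. The abstract's finding that optimizing the meta-model for ranking rather than raw performance adds 0.124 median correlation would correspond to replacing the regression target above with a ranking objective.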