
Latest publications from the Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval

Enhancing Recommendation Diversity using Determinantal Point Processes on Knowledge Graphs
Lu Gan, Diana Nurbakova, Léa Laporte, S. Calabretto
Top-N recommendations are widely applied in various real-life domains and keep attracting intense attention from researchers and industry due to available multi-type information, new advances in AI models, and a deeper understanding of user satisfaction. While accuracy has been the prevailing issue of the recommendation problem over the last decades, other facets of the problem, namely diversity and explainability, have received much less attention. In this paper, we focus on enhancing the diversity of top-N recommendation while ensuring a trade-off between accuracy and diversity. Thus, we propose an effective framework, DivKG, leveraging knowledge graph embedding and determinantal point processes (DPP). First, we capture different kinds of relations among users, items and additional entities through a knowledge graph structure. Then, we represent both entities and relations as k-dimensional vectors by optimizing a margin-based loss with all kinds of historical interactions. We use these representations to construct DPP kernel matrices in order to make diversified top-N predictions. We evaluate our framework on MovieLens datasets coupled with the IMDb dataset. Our empirical results show substantial improvement over the state of the art on both accuracy and diversity metrics.
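The last step the abstract describes, selecting a diversified top-N set from a DPP kernel, is commonly done with fast greedy MAP inference. The sketch below is a generic version of that routine, not the authors' code; it assumes only that `kernel` is a positive semi-definite matrix built from the learned embeddings.

```python
import numpy as np

def greedy_dpp(kernel, k):
    """Greedily pick k items approximately maximizing det(L_S),
    the determinant of the kernel restricted to the selected set.
    Each step adds the item with the largest residual "volume"
    contribution, which balances quality and dissimilarity."""
    n = kernel.shape[0]
    cis = np.zeros((k, n))                 # incremental Cholesky rows
    di2 = np.diag(kernel).astype(float)    # residual squared norms
    selected = []
    for i in range(k):
        j = int(np.argmax(di2))
        if di2[j] < 1e-10:                 # no volume left to add
            break
        selected.append(j)
        eis = (kernel[j] - cis[:i].T @ cis[:i, j]) / np.sqrt(di2[j])
        cis[i] = eis
        di2 -= eis ** 2
        di2[j] = -np.inf                   # never re-select item j
    return selected
```

On a kernel with two near-duplicate items, the routine picks one of the duplicates and then jumps to a dissimilar item, which is exactly the diversification behavior the paper exploits.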
DOI: 10.1145/3397271.3401213 (published 2020-07-25)
Citations: 26
Knowledge Enhanced Personalized Search
Shuqi Lu, Zhicheng Dou, Chenyan Xiong, Xiaojie Wang, Ji-rong Wen
This paper presents a knowledge graph enhanced personalized search model, KEPS. For each user and her queries, KEPS first conducts personalized entity linking on the queries and forms better intent representations; then it builds a knowledge-enhanced profile for the user, using memory networks to store the predicted search intents and linked entities in her search history. The knowledge-enhanced user profile and intent representation are then utilized by KEPS for better, knowledge-enhanced, personalized search. Furthermore, after providing personalized search for each query, KEPS leverages the user's feedback (clicks on documents) to post-adjust the entity linking on previous queries. This fixes previous linking errors and improves ranking quality for future queries. Experiments on the public AOL search log demonstrate the advantage of knowledge in personalized search: personalized entity linking better reflects the user's search intent, the memory networks better maintain the user's subtle preferences, and the post-linking adjustment fixes some linking errors with the received feedback signals. The three components together lead to significantly better ranking accuracy for KEPS.
DOI: 10.1145/3397271.3401089 (published 2020-07-25)
Citations: 15
Agent Dialogue: A Platform for Conversational Information Seeking Experimentation
A. Czyzewski, Jeffrey Dalton, A. Leuski
Conversational Information Seeking (CIS) is an emerging area of Information Retrieval focused on interactive search systems. As a result, there is a need for new benchmark datasets and tools to enable their creation. In this demo we present the Agent Dialogue (AD) platform, an open-source system developed for researchers to perform Wizard-of-Oz CIS experiments. AD is a scalable cloud-native platform developed with Docker and Kubernetes, with a flexible and modular micro-service architecture built on production-grade, state-of-the-art open-source tools (Kubernetes, gRPC streaming, React, and Firebase). It supports varied front-ends and can interface with multiple existing agent systems, including Google Assistant and open-source search libraries. It includes support for centralized structured logging as well as offline relevance annotation.
DOI: 10.1145/3397271.3401397 (published 2020-07-25)
Citations: 4
Investigating Reading Behavior in Fine-grained Relevance Judgment
Zhijing Wu, Jiaxin Mao, Yiqun Liu, Min Zhang, Shaoping Ma
A better understanding of users' reading behavior helps improve many information retrieval (IR) tasks, such as relevance estimation and document ranking. Existing research has already leveraged eye movement information to investigate users' reading processes during document-level relevance judgments, and the findings were adopted to build more effective ranking models. Recently, fine-grained (e.g., passage- or sentence-level) relevance judgments have attracted much attention, driven by the requirements of conversational search and QA systems. However, there is still a lack of thorough investigation of users' reading behavior during these kinds of interaction processes. To shed light on this research question, we investigate how users allocate their attention to passages of a document during the relevance judgment process. With the eye-tracking data collected in a laboratory study, we show that users pay more attention to the "key" passages which contain key useful information. Users tend to revisit these key passages several times to accumulate and verify the gathered information. With both content and user behavior features, we find that key passages can be predicted with supervised learning. We believe that this work contributes to better understanding users' reading behavior and may provide more explainability for relevance estimation.
DOI: 10.1145/3397271.3401305 (published 2020-07-25)
Citations: 3
A Lightweight Environment for Learning Experimental IR Research Practices
Zeynep Akkalyoncu Yilmaz, C. Clarke, Jimmy J. Lin
Tools, computing environments, and datasets form the three critical ingredients for teaching and learning the practical aspects of experimental IR research. Assembling these ingredients can often be challenging, particularly in the context of short courses that cannot afford large startup costs. As an initial attempt to address these issues, we describe materials that we have developed for the "Introduction to IR" session at the ACM SIGIR/SIGKDD Africa Summer School on Machine Learning for Data Mining and Search (AFIRM 2020), which builds on three components: the open-source Lucene search library, cloud-based notebooks, and the MS MARCO dataset. We offer a self-reflective evaluation of our efforts and hope that our lessons shared can benefit future efforts.
DOI: 10.1145/3397271.3401395 (published 2020-07-25)
Citations: 12
Using Exploration to Alleviate Closed Loop Effects in Recommender Systems
A. H. Jadidinejad, C. Macdonald, I. Ounis
Recommender systems are often trained and evaluated based on users' interactions obtained through the use of an existing, already deployed, recommendation system. Hence, the deployed system will recommend some items and not others, and items will have varying levels of exposure to users. As a result, the collected feedback dataset (including most public datasets) can be skewed towards the particular items favored by the deployed model. In this manner, training new recommender systems on interaction data obtained from a previous model creates a feedback loop, i.e., closed loop feedback. In this paper, we first introduce closed loop feedback and then investigate its effect on both the training and offline evaluation of recommendation models, in contrast to a further exploration of the users' preferences (obtained from randomly presented items). To achieve this, we make use of open loop datasets, where randomly selected items are presented to users for feedback. Our experiments using an open loop Yahoo! dataset reveal that there is a strong correlation between the deployed model and a new model trained on the closed loop feedback. Moreover, with the aid of exploration we can decrease the effect of closed loop feedback and obtain new and better generalizable models.
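The exposure skew the abstract describes is easy to simulate. The toy sketch below (not the paper's Yahoo! setup; the policy and numbers are illustrative assumptions) logs which items a deployed policy shows: pure exploitation concentrates feedback on the items the model already favors, while epsilon-greedy exploration spreads exposure across the catalogue.

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, n_logs = 100, 10_000
scores = rng.random(n_items)            # deployed model's item scores

def log_interactions(epsilon):
    """Simulate logged exposure: with probability epsilon show a
    uniformly random item (exploration), otherwise show the deployed
    model's top-scored item (exploitation). Returns per-item counts."""
    shown = np.where(
        rng.random(n_logs) < epsilon,
        rng.integers(0, n_items, n_logs),
        np.argmax(scores),
    )
    return np.bincount(shown, minlength=n_items)

closed = log_interactions(epsilon=0.0)    # pure closed-loop logging
explored = log_interactions(epsilon=0.2)  # logging with exploration

# Closed-loop logs cover a single item; exploration gives feedback
# on most of the catalogue, so a model trained on it is less tied
# to the deployed model's preferences.
print((closed > 0).sum(), (explored > 0).sum())
```

Training on the `closed` log can only reinforce the deployed model's choices; the `explored` log is what an open loop dataset provides.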
DOI: 10.1145/3397271.3401230 (published 2020-07-25)
Citations: 10
The New TREC Track on Podcast Search and Summarization
R. Jones
Podcasts are exploding in popularity. As this medium grows, it becomes increasingly important to understand the content of podcasts (e.g. what exactly is being covered, by whom, and how?), and how we can use this to connect users to shows that align with their interests. Given the explosion of new material, how do listeners find the needle in the haystack, and connect to those shows or episodes that speak to them? Furthermore, once they are presented with potential podcasts to listen to, how can they decide if this is what they want? To move the needle forward more rapidly toward this goal, we've introduced the Spotify Podcasts Dataset [1] and TREC shared task [2]. This dataset represents the first large-scale set of podcasts, with transcripts, released to the research community. The accompanying shared task is part of the TREC 2020 Conference, run by the US National Institute of Standards and Technology. The challenge is planned to run for several years, with progressively more demanding tasks: this first year, the challenge involves a search-related task and a task to automatically generate summaries, both based on transcripts of the audio. In this talk I will describe the task and dataset, outlining how the dataset is orders of magnitude larger than previous spoken document datasets, and how the tasks take us beyond previous shared tasks both in spoken document retrieval and NLP.
DOI: 10.1145/3397271.3402431 (published 2020-07-25)
Citations: 2
Beyond User Embedding Matrix: Learning to Hash for Modeling Large-Scale Users in Recommendation
Shaoyun Shi, Weizhi Ma, Min Zhang, Yongfeng Zhang, Xinxing Yu, Houzhi Shan, Yiqun Liu, Shaoping Ma
Modeling large-scale users and rare-interaction users are the two major challenges in recommender systems, and they create big gaps between research and applications. Facing millions or even billions of users, it is hard to store and leverage personalized preferences with a user embedding matrix in real scenarios. Moreover, much research focuses on users with rich histories, while users with only one or a few interactions make up the largest part of real systems. Previous studies make efforts to handle one of the above issues but rarely tackle the efficiency and cold-start problems together. In this work, a novel user preference representation called Preference Hash (PreHash) is proposed to model large-scale users, including rare-interaction ones. In PreHash, a series of buckets are generated based on users' historical interactions. Users with similar preferences are assigned to the same buckets automatically, including warm and cold ones, and representations of the buckets are learned accordingly. Thanks to the designed hash buckets, only limited parameters are stored, which saves a lot of memory and allows more efficient modeling. Furthermore, when new interactions are made by a user, his buckets and representations are dynamically updated, which enables more effective understanding and modeling of the user. It is worth mentioning that PreHash works flexibly with various recommendation algorithms by taking the place of their user embedding matrices. We combine it with multiple state-of-the-art recommendation methods and conduct various experiments. Comparative results on public datasets show that it not only improves recommendation performance but also significantly reduces the number of model parameters. To summarize, PreHash achieves significant improvements in both efficiency and effectiveness for recommender systems.
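The core idea, replacing one embedding per user with a small set of shared preference buckets, can be illustrated with a minimal sketch. This is not the paper's learned hashing (PreHash learns bucket assignments end-to-end); the nearest-bucket rule and all names here are illustrative assumptions.

```python
import numpy as np

def assign_bucket(user_item_ids, item_emb, bucket_emb):
    """Map a user to the preference bucket whose embedding best
    matches the mean embedding of the user's interacted items.
    Only the (few) bucket vectors need to be stored, instead of
    one vector per user."""
    profile = item_emb[user_item_ids].mean(axis=0)   # cheap user profile
    scores = bucket_emb @ profile                    # bucket affinities
    return int(np.argmax(scores))

# Toy setup: items form two taste clusters, one bucket per cluster.
item_emb = np.array([[1.0, 0.0], [0.9, 0.1],    # cluster-0 items
                     [0.0, 1.0], [0.1, 0.9]])   # cluster-1 items
bucket_emb = np.array([[1.0, 0.0], [0.0, 1.0]])
```

A user who interacted with items 0 and 1 lands in bucket 0, and so does every other user with a similar history, even one with a single interaction, which is how the scheme covers rare-interaction users.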
DOI: 10.1145/3397271.3401119 (published 2020-07-25)
Citations: 26
Fashion Compatibility Modeling through a Multi-modal Try-on-guided Scheme
Xue Dong, Jianlong Wu, Xuemeng Song, Hongjun Dai, Liqiang Nie
Recent years have witnessed a growing trend of fashion compatibility modeling, which scores the matching degree of a given outfit and then provides people with dressing advice. Existing methods have primarily solved this problem by analyzing the discrete interactions among multiple complementary items. However, fashion items present certain occlusion and deformation when they are worn on the body. Therefore, discrete item interactions cannot capture fashion compatibility in a combined manner, because they neglect a crucial factor: the overall try-on appearance. In light of this, we propose a multi-modal try-on-guided compatibility modeling scheme that jointly characterizes the discrete interactions and the try-on appearance of an outfit. In particular, we first propose a multi-modal try-on template generator that automatically generates a try-on template from the visual and textual information of the outfit, depicting the overall look of its constituent fashion items. Then, we introduce a new compatibility modeling scheme that integrates the outfit's try-on appearance into traditional discrete item interaction modeling. To support the proposal, we construct a large-scale real-world dataset from SSENSE, named FOTOS, consisting of 11,000 well-matched outfits and their corresponding realistic try-on images. Extensive experiments have demonstrated its superiority over state-of-the-art methods.
{"title":"Fashion Compatibility Modeling through a Multi-modal Try-on-guided Scheme","authors":"Xue Dong, Jianlong Wu, Xuemeng Song, Hongjun Dai, Liqiang Nie","doi":"10.1145/3397271.3401047","DOIUrl":"https://doi.org/10.1145/3397271.3401047","url":null,"abstract":"Recent years have witnessed a growing trend of fashion compatibility modeling, which scores the matching degree of the given outfit and then provides people with some dressing advice. Existing methods have primarily solved this problem by analyzing the discrete interaction among multiple complementary items. However, the fashion items would present certain occlusion and deformation when they are worn on the body. Therefore, the discrete item interaction cannot capture the fashion compatibility in a combined manner due to the neglect of a crucial factor: the overall try-on appearance. In light of this, we propose a multi-modal try-on-guided compatibility modeling scheme to jointly characterize the discrete interaction and try-on appearance of the outfit. In particular, we first propose a multi-modal try-on template generator to automatically generate a try-on template from the visual and textual information of the outfit, depicting the overall look of its composing fashion items. Then, we introduce a new compatibility modeling scheme which integrates the outfit try-on appearance into the traditional discrete item interaction modeling. To fulfill the proposal, we construct a large-scale real-world dataset from SSENSE, named FOTOS, consisting of 11,000 well-matched outfits and their corresponding realistic try-on images. 
Extensive experiments have demonstrated its superiority to state-of-the-arts.","PeriodicalId":252050,"journal":{"name":"Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115163237","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited: 18
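The fusion the abstract above argues for — discrete pairwise item interactions plus an overall try-on appearance term — can be illustrated with a toy scoring function. The cosine similarity, the centroid as a stand-in for the generated try-on template, and the mixing weight `alpha` are all illustrative assumptions, not the paper's trained model.

```python
import math

def cosine(a, b):
    """Cosine similarity of two equal-length vectors (0.0 for zero vectors)."""
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return num / den if den else 0.0

def outfit_score(item_vecs, tryon_vec, alpha=0.5):
    """Blend discrete item interactions with a try-on appearance term."""
    # Discrete interaction term: mean pairwise similarity of item embeddings.
    n = len(item_vecs)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    pairwise = sum(cosine(item_vecs[i], item_vecs[j]) for i, j in pairs) / len(pairs)
    # Try-on term: similarity between the outfit's centroid and the
    # (here, given) try-on appearance vector.
    dim = len(item_vecs[0])
    centroid = [sum(v[k] for v in item_vecs) / n for k in range(dim)]
    return alpha * pairwise + (1.0 - alpha) * cosine(centroid, tryon_vec)
```

Setting `alpha=1.0` recovers a purely discrete interaction model — the baseline the paper argues is insufficient once occlusion and deformation of worn items matter.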
Identifying Tasks from Mobile App Usage Patterns
Yuan Tian, K. Zhou, M. Lalmas, D. Pelleg
Mobile devices have become an increasingly ubiquitous part of our everyday life. We use mobile services to perform a broad range of tasks (e.g. booking travel or office work), leading to often lengthy interactions across distinct apps and services. Existing mobile systems handle mostly simple user needs, where a single app is taken as the unit of interaction. To understand users' expectations and to provide context-aware services, it is important to model users' interactions in the task space. In this work, we first propose and evaluate a method for the automated segmentation of users' app usage logs into task units. We focus on two problems: (i) given a sequential pair of app usage logs, identify whether there exists a task boundary between them, and (ii) given any pair of app usage logs, identify whether they belong to the same task. We model these as classification problems that use features from three aspects of app usage patterns: temporal, similarity, and log sequence. Our classifiers improve on traditional timeout segmentation, achieving over 89% performance on both problems. Secondly, we apply our best task classifier to a large-scale dataset of commercial mobile app usage logs to identify common tasks. We observe that users performed common tasks ranging from routine information checking to entertainment and booking dinner. Our proposed task identification approach provides the means to evaluate mobile services and applications with respect to task completion.
{"title":"Identifying Tasks from Mobile App Usage Patterns","authors":"Yuan Tian, K. Zhou, M. Lalmas, D. Pelleg","doi":"10.1145/3397271.3401441","DOIUrl":"https://doi.org/10.1145/3397271.3401441","url":null,"abstract":"Mobile devices have become an increasingly ubiquitous part of our everyday life. We use mobile services to perform a broad range of tasks (e.g. booking travel or office work), leading to often lengthy interactions within distinct apps and services. Existing mobile systems handle mostly simple user needs, where a single app is taken as the unit of interaction. To understand users' expectations and to provide context-aware services, it is important to model users' interactions in the task space. In this work, we first propose and evaluate a method for the automated segmentation of users' app usage logs into task units. We focus on two problems: (i) given a sequential pair of app usage logs, identify if there exists a task boundary, and (ii) given any pair of two app usage logs, identify if they belong to the same task. We model these as classification problems that use features from three aspects of app usage patterns: temporal, similarity, and log sequence. Our classifiers improve on traditional timeout segmentation, achieving over 89% performance for both problems. Secondly, we use our best task classifier on a large-scale data set of commercial mobile app usage logs to identify common tasks. We observe that users' performed common tasks ranging from regular information checking to entertainment and booking dinner. 
Our proposed task identification approach provides the means to evaluate mobile services and applications with respect to task completion.","PeriodicalId":252050,"journal":{"name":"Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133624884","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited: 13
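Problem (i) above — deciding whether a task boundary falls between two consecutive log entries — can be sketched as follows. The fixed-gap rule stands in for the trained classifier (it is essentially the timeout baseline the paper improves on, plus one similarity feature); the feature names and the 300-second threshold are assumptions for illustration.

```python
def boundary_features(prev, curr):
    """prev/curr: (app_name, unix_timestamp) log entries."""
    gap = curr[1] - prev[1]            # temporal feature: idle time between entries
    same_app = prev[0] == curr[0]      # similarity feature: did the app change?
    return {"gap_seconds": gap, "same_app": same_app}

def is_boundary(prev, curr, gap_threshold=300):
    # Timeout-style rule: declare a task boundary on a long idle gap,
    # unless the user stayed inside the same app.
    f = boundary_features(prev, curr)
    return f["gap_seconds"] > gap_threshold and not f["same_app"]

def segment_tasks(log):
    """Split a chronological usage log into task units at predicted boundaries."""
    tasks, current = [], [log[0]]
    for prev, curr in zip(log, log[1:]):
        if is_boundary(prev, curr):
            tasks.append(current)
            current = []
        current.append(curr)
    tasks.append(current)
    return tasks
```

In the paper's setting, `is_boundary` would be a learned classifier over temporal, similarity, and log-sequence features rather than this hand-set rule.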
Journal
Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval