User Simulation for Evaluating Information Access Systems

Foundations and Trends in Information Retrieval · Impact Factor 8.3 · CAS Tier 2 (Computer Science) · JCR Q1 (Computer Science, Information Systems) · Publication date: 2024-06-12 · DOI: 10.1561/1500000098
Krisztian Balog, ChengXiang Zhai
{"title":"评估信息获取系统的用户模拟","authors":"Krisztian Balog, ChengXiang Zhai","doi":"10.1561/1500000098","DOIUrl":null,"url":null,"abstract":"<p>Information access systems, such as search engines, recommender\nsystems, and conversational assistants, have become\nintegral to our daily lives as they help us satisfy our information\nneeds. However, evaluating the effectiveness of\nthese systems presents a long-standing and complex scientific\nchallenge. This challenge is rooted in the difficulty of\nassessing a system’s overall effectiveness in assisting users\nto complete tasks through interactive support, and further\nexacerbated by the substantial variation in user behaviour\nand preferences. To address this challenge, user simulation\nemerges as a promising solution.<p>This monograph focuses on providing a thorough understanding\nof user simulation techniques designed specifically\nfor evaluation purposes. We begin with a background of information\naccess system evaluation and explore the diverse\napplications of user simulation. Subsequently, we systematically\nreview the major research progress in user simulation,\ncovering both general frameworks for designing user simulators,\nutilizing user simulation for evaluation, and specific\nmodels and algorithms for simulating user interactions with\nsearch engines, recommender systems, and conversational\nassistants. Realizing that user simulation is an interdisciplinary\nresearch topic, whenever possible, we attempt to\nestablish connections with related fields, including machine\nlearning, dialogue systems, user modeling, and economics.\nWe end the monograph with a broad discussion of important\nfuture research directions, many of which extend beyond the\nevaluation of information access systems and are expected\nto have broader impact on how to evaluate interactive intelligent\nsystems in general.</p></p>","PeriodicalId":48829,"journal":{"name":"Foundations and Trends in Information Retrieval","volume":"33 1","pages":""},"PeriodicalIF":8.3000,"publicationDate":"2024-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"User Simulation for Evaluating Information Access Systems\",\"authors\":\"Krisztian Balog, ChengXiang Zhai\",\"doi\":\"10.1561/1500000098\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Information access systems, such as search engines, recommender\\nsystems, and conversational assistants, have become\\nintegral to our daily lives as they help us satisfy our information\\nneeds. However, evaluating the effectiveness of\\nthese systems presents a long-standing and complex scientific\\nchallenge. This challenge is rooted in the difficulty of\\nassessing a system’s overall effectiveness in assisting users\\nto complete tasks through interactive support, and further\\nexacerbated by the substantial variation in user behaviour\\nand preferences. To address this challenge, user simulation\\nemerges as a promising solution.<p>This monograph focuses on providing a thorough understanding\\nof user simulation techniques designed specifically\\nfor evaluation purposes. We begin with a background of information\\naccess system evaluation and explore the diverse\\napplications of user simulation. 
Subsequently, we systematically\\nreview the major research progress in user simulation,\\ncovering both general frameworks for designing user simulators,\\nutilizing user simulation for evaluation, and specific\\nmodels and algorithms for simulating user interactions with\\nsearch engines, recommender systems, and conversational\\nassistants. Realizing that user simulation is an interdisciplinary\\nresearch topic, whenever possible, we attempt to\\nestablish connections with related fields, including machine\\nlearning, dialogue systems, user modeling, and economics.\\nWe end the monograph with a broad discussion of important\\nfuture research directions, many of which extend beyond the\\nevaluation of information access systems and are expected\\nto have broader impact on how to evaluate interactive intelligent\\nsystems in general.</p></p>\",\"PeriodicalId\":48829,\"journal\":{\"name\":\"Foundations and Trends in Information Retrieval\",\"volume\":\"33 1\",\"pages\":\"\"},\"PeriodicalIF\":8.3000,\"publicationDate\":\"2024-06-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Foundations and Trends in Information Retrieval\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1561/1500000098\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Foundations and Trends in Information Retrieval","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1561/1500000098","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract


Information access systems, such as search engines, recommender systems, and conversational assistants, have become integral to our daily lives as they help us satisfy our information needs. However, evaluating the effectiveness of these systems presents a long-standing and complex scientific challenge. This challenge is rooted in the difficulty of assessing a system’s overall effectiveness in assisting users to complete tasks through interactive support, and further exacerbated by the substantial variation in user behaviour and preferences. To address this challenge, user simulation emerges as a promising solution.

This monograph focuses on providing a thorough understanding of user simulation techniques designed specifically for evaluation purposes. We begin with a background of information access system evaluation and explore the diverse applications of user simulation. Subsequently, we systematically review the major research progress in user simulation, covering general frameworks for designing user simulators and utilizing user simulation for evaluation, as well as specific models and algorithms for simulating user interactions with search engines, recommender systems, and conversational assistants. Realizing that user simulation is an interdisciplinary research topic, whenever possible, we attempt to establish connections with related fields, including machine learning, dialogue systems, user modeling, and economics. We end the monograph with a broad discussion of important future research directions, many of which extend beyond the evaluation of information access systems and are expected to have broader impact on how to evaluate interactive intelligent systems in general.
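To make the idea of simulation-based evaluation concrete, the sketch below shows one minimal way a simulated user could drive the evaluation of a search system: the user scans a ranked result list under a position-based examination model, counts examined relevant documents, stops once the information need is satisfied, and the evaluation aggregates gain against effort over many simulated sessions. This is an illustrative toy example, not a method taken from the monograph; the `SimulatedUser`, `toy_search_engine`, and `evaluate` names, the corpus, and all parameter values are hypothetical.

```python
import random

# Hypothetical toy corpus: document id -> binary relevance for one fixed information need.
RELEVANCE = {"d1": 1, "d2": 0, "d3": 1, "d4": 0, "d5": 0}


def toy_search_engine(query: str) -> list[str]:
    """Stand-in for the system under evaluation: returns a ranked list of doc ids."""
    return ["d2", "d1", "d4", "d3", "d5"]


class SimulatedUser:
    """Minimal simulated user: scans results top-down, examines each rank with
    geometrically decaying probability (a position-based examination model),
    treats every examined relevant document as found, and stops after
    accumulating `target` relevant documents."""

    def __init__(self, examine_decay: float = 0.8, target: int = 2, seed: int = 0):
        self.examine_decay = examine_decay
        self.target = target
        self.rng = random.Random(seed)

    def interact(self, ranking: list[str]) -> dict:
        found, examined = 0, 0
        for rank, doc in enumerate(ranking):
            if self.rng.random() > self.examine_decay ** rank:
                break  # user abandons the result list at this rank
            examined += 1
            if RELEVANCE.get(doc, 0) == 1:
                found += 1
                if found >= self.target:
                    break  # information need satisfied
        return {"relevant_found": found, "docs_examined": examined}


def evaluate(n_sessions: int = 1000) -> dict:
    """Run many simulated sessions and aggregate gain (relevant documents found)
    against effort (documents examined)."""
    gains, efforts = [], []
    for seed in range(n_sessions):
        user = SimulatedUser(seed=seed)
        outcome = user.interact(toy_search_engine("user simulation"))
        gains.append(outcome["relevant_found"])
        efforts.append(outcome["docs_examined"])
    return {
        "avg_relevant_found": sum(gains) / n_sessions,
        "avg_docs_examined": sum(efforts) / n_sessions,
    }


if __name__ == "__main__":
    print(evaluate())
```

In a realistic setup the ranking would come from the actual system under test, and the examination, clicking, and stopping behaviour would be calibrated on logged user interactions rather than fixed constants; the averaged gain and effort across simulated sessions would then serve as the simulation-based evaluation measure.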
