User Simulation for Evaluating Information Access Systems
Krisztian Balog, ChengXiang Zhai
Foundations and Trends in Information Retrieval (Journal Article)
DOI: 10.1561/1500000098
Published: 2024-06-12
Citations: 0
Abstract
Information access systems, such as search engines, recommender systems, and conversational assistants, have become integral to our daily lives as they help us satisfy our information needs. However, evaluating the effectiveness of these systems presents a long-standing and complex scientific challenge. This challenge is rooted in the difficulty of assessing a system's overall effectiveness in assisting users to complete tasks through interactive support, and it is further exacerbated by the substantial variation in user behaviour and preferences. To address this challenge, user simulation emerges as a promising solution.

This monograph provides a thorough understanding of user simulation techniques designed specifically for evaluation purposes. We begin with background on the evaluation of information access systems and explore the diverse applications of user simulation. Subsequently, we systematically review the major research progress in user simulation, covering general frameworks for designing user simulators, approaches to utilizing user simulation for evaluation, and specific models and algorithms for simulating user interactions with search engines, recommender systems, and conversational assistants. Recognizing that user simulation is an interdisciplinary research topic, we attempt, whenever possible, to establish connections with related fields, including machine learning, dialogue systems, user modeling, and economics. We end the monograph with a broad discussion of important future research directions, many of which extend beyond the evaluation of information access systems and are expected to have a broader impact on how interactive intelligent systems in general are evaluated.
Journal description:
The surge in research across all domains in the past decade has produced an exponential growth in published work. Navigating this extensive literature and staying current has become a time-consuming challenge: while electronic publishing provides instant access to more articles than ever, discerning the essential ones for a comprehensive understanding of any topic remains difficult. Foundations and Trends® in Information Retrieval (FnTIR) addresses this problem by publishing high-quality survey and tutorial monographs in the field.
Each issue of FnTIR features a 50-100 page monograph authored by research leaders, covering tutorial subjects, research retrospectives, and survey papers that provide state-of-the-art reviews within the scope of the journal.