Comparing Traditional and LLM-based Search for Image Geolocation
Albatool Wazzan, Stephen MacNeil, Richard Souvenir
Web search engines have long served as indispensable tools for information retrieval, and user behavior and query formulation strategies have been well studied. The introduction of search engines powered by large language models (LLMs) suggests more conversational search and new types of query strategies. In this paper, we compare traditional and LLM-based search for the task of image geolocation, i.e., determining the location where an image was captured. Our work examines user interactions, with a particular focus on query formulation strategies. In our study, 60 participants were assigned either traditional or LLM-based search engines as assistants for geolocation. Participants using traditional search predicted the location of the image more accurately than those using the LLM-based search. Distinct strategies emerged depending on the type of assistant: participants using the LLM-based search issued longer, more natural-language queries, but had shorter search sessions. When reformulating their search queries, traditional search participants tended to add more terms to their initial queries, whereas participants using the LLM-based search consistently rephrased their initial queries.
{"title":"Comparing Traditional and LLM-based Search for Image Geolocation","authors":"Albatool Wazzan, Stephen MacNeil, Richard Souvenir","doi":"10.1145/3627508.3638305","DOIUrl":"https://doi.org/10.1145/3627508.3638305","url":null,"abstract":"Web search engines have long served as indispensable tools for information retrieval; user behavior and query formulation strategies have been well studied. The introduction of search engines powered by large language models (LLMs) suggested more conversational search and new types of query strategies. In this paper, we compare traditional and LLM-based search for the task of image geolocation, i.e., determining the location where an image was captured. Our work examines user interactions, with a particular focus on query formulation strategies. In our study, 60 participants were assigned either traditional or LLM-based search engines as assistants for geolocation. Participants using traditional search more accurately predicted the location of the image compared to those using the LLM-based search. Distinct strategies emerged between users depending on the type of assistant. Participants using the LLM-based search issued longer, more natural language queries, but had shorter search sessions. When reformulating their search queries, traditional search participants tended to add more terms to their initial queries, whereas participants using the LLM-based search consistently rephrased their initial queries.","PeriodicalId":220434,"journal":{"name":"Conference on Human Information Interaction and Retrieval","volume":"1 3","pages":"291-302"},"PeriodicalIF":0.0,"publicationDate":"2024-01-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140503845","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Measuring In-Task Emotional Responses to Address Issues in Post-Task Questionnaires
Abbas Pirmoradi Bezanjani
When evaluating interactive information retrieval (IIR) interfaces, it is common to collect data using subjective measures such as satisfaction, ease of use, usefulness, and user engagement. However, as these are collected post-task, they serve as surrogate measures for what occurred in the midst of the search activities. Further, such approaches may be subject to recency effects, where the last action in the search process influences the searchers' opinions about the overall process. With recent improvements in facial emotion classification, we propose that measuring emotional responses may provide a better indication of what is happening throughout search tasks. In this research, we present an approach for collecting real-time emotional responses during a search activity using consumer-grade front-facing cameras, along with a method of aligning these responses with search interface feature use. To validate the effectiveness of the approach, we conducted a controlled laboratory study in which we manipulated the quality of the search results to determine whether we can detect expected emotional responses, whether search behaviours influence these emotional responses, and whether recency effects are present in post-task measures. The preliminary results of this study show that our approach reliably detects emotional responses when searchers experience positive and negative emotions throughout the search process, isolates which interactive elements were in use when positive and negative emotional responses occurred, and illustrates how recency effects are present in post-task measures. Our upcoming study will investigate how our approach can be used to evaluate novel search interfaces: we will develop a novel search interface and evaluate it using our approach. Finally, we will create a dashboard to monitor academic literature and, using the same approach, demonstrate that our method can extend beyond traditional search interfaces into more general interface assessment.
{"title":"Measuring In-Task Emotional Responses to Address Issues in Post-Task Questionnaires","authors":"Abbas Pirmoradi Bezanjani","doi":"10.1145/3576840.3578284","DOIUrl":"https://doi.org/10.1145/3576840.3578284","url":null,"abstract":"When evaluating interactive information retrieval (IIR) interfaces, it is common to collect data using subjective measures such as satisfaction, ease of use, usefulness, and user engagement. However, as these are collected post-task, they serve as surrogate measures for what occurred in the midst of the search activities. Further, such approaches may be subject to recency effects, where the last action in the search process influences the searchers’ opinions about the overall process. With recent improvements in facial emotion classification approaches, we propose that measuring emotional responses may provide a better indication of what is happening throughout search tasks. In this research, we present an approach for collecting real-time emotional responses during a search activity using consumer-grade front-facing cameras and a method of aligning these with search interface feature use. To validate the effectiveness of the approach, we have conducted a controlled laboratory study in which we manipulated the quality of the search results in order to determine if we can detect expected emotional responses, whether search behaviours influencing these emotional responses, and whether recency effects are present in post-task measures. The preliminary results of this study show that our approach is reliable for detecting emotional responses when searchers experience positive and negative emotions throughout the search process, isolate which interactive elements were used when positive and negative emotional responses were experienced, and illustrate how recency effects are present in post-task measures. Our upcoming study will investigate how our approach can be used to evaluate novel search interfaces. We will develop a novel search interface and evaluate it using our approach. Finally, we will create a dashboard to monitor academic literature. Using the same approach, we will demonstrate our approach can extend beyond traditional search interfaces and into more general interface assessment.","PeriodicalId":220434,"journal":{"name":"Conference on Human Information Interaction and Retrieval","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125902811","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Evolution of User Knowledge during Search-as-Learning Sessions: A Benchmark and Baseline
Dima El Zein, C. Pereira
In this paper, we present a new benchmark collection that shows how 404 users' knowledge changed over the course of a search-as-learning session. We estimate the knowledge a user gains from each visited document and monitor knowledge change on a document-by-document basis. We describe the specifics of how this collection was created and provide a use case that illustrates potential future applications.
{"title":"The Evolution of User Knowledge during Search-as-Learning Sessions: A Benchmark and Baseline","authors":"Dima El Zein, C. Pereira","doi":"10.1145/3576840.3578273","DOIUrl":"https://doi.org/10.1145/3576840.3578273","url":null,"abstract":"In this paper, we present a new benchmark collection that shows how 404 users’ knowledge changed over the course of a search-as-learning session. We estimate the knowledge a user gains from each visited document and monitor knowledge change on a document-by-document basis. We describe the specifics of how this collection was created and provide a use case that illustrates potential future applications.","PeriodicalId":220434,"journal":{"name":"Conference on Human Information Interaction and Retrieval","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117287338","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}