Johannes Kiesel, Arefeh Bahrami, Benno Stein, Avishek Anand, Matthias Hagen
Queries containing false memories (i.e., attributes the user misremembered about a searched item) represent a challenge for search systems. A query with a false memory will match inadequate results or even no results, and an automatic query correction is necessary to satisfy user expectations. For voice-based search interfaces, which aim at a natural, dialog-based search experience, a sensible answer to such unintentionally ill-posed queries is even more crucial. However, the usual solutions in display-based interfaces for queries without matches (e.g., suggesting to drop some query terms) cannot really be transferred to the voice-based setting. Based on the assumption that false memory queries can be identified (a research problem in its own right), we present the first user study on how voice-based search systems may communicate the respective corrections to a user. Our study compares user satisfaction in a voice-based search setting for three kinds of false memory clarifications and a baseline case where the system simply answers "I don't know." Our findings suggest that (1) users are more satisfied when they receive a clarification that, and how, the system corrected a false memory, (2) users even prefer failed correction attempts over no attempt at all, and (3) the tone of the clarification also has to be considered for the best possible user satisfaction.
{"title":"Clarifying False Memories in Voice-based Search","authors":"Johannes Kiesel, Arefeh Bahrami, Benno Stein, Avishek Anand, Matthias Hagen","doi":"10.1145/3295750.3298961","DOIUrl":"https://doi.org/10.1145/3295750.3298961","url":null,"abstract":"Queries containing false memories (i.e., attributes the user misremembered about a searched item) represent a challenge for search systems. A query with a false memory will match inadequate results or even no result, and an automatic query correction is necessary to satisfy the user expectations. For voice-based search interfaces, which aim at a natural, dialog-based search experience, a sensible answer to this kind of unintentionally ill-posed queries is even more crucial. However, the usual solutions in display-based interfaces for queries without matches (e.g., suggesting to drop some query terms) cannot really be transferred to the voice-based setting. Based on the assumption that false memory queries could be identified---a research problem in its own right---, we present the first user study on how voice-based search systems may communicate the respective corrections to a user. 
Our study compares the user satisfaction in a voice-based search setting for three kinds of false memory clarifications and a baseline case where the system just answers \"I don't know.'' Our findings suggest that (1)~users are more satisfied when they receive a clarification that and how the system corrected a false memory, (2)~users even prefer failed correction attempts over no such attempt, and (3)~the tone of the clarification has to be considered for the best possible user satisfaction as well.","PeriodicalId":187771,"journal":{"name":"Proceedings of the 2019 Conference on Human Information Interaction and Retrieval","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131351920","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Speech-based user interfaces and, in particular, voice-activated digital assistants are gaining popularity. Assistants provide their users with an opportunity for hands-free interaction and present an additional level of accessibility for people who are blind. According to prior research, informational searches form a noticeable fraction of user interactions with the assistants. All major commercially available assistants handle factoid questions well by providing an answer that is quick, concise, and to the point. However, for complex information-seeking intents, when deeper exploration and multi-turn interaction may be required, the assistants often do not produce the desired results. One of the main challenges in designing a voice-based web search system is the higher cognitive load of audio perception compared to visual perception. Additionally, close attention should be paid to differences in designing for different user groups, as their information seeking styles and design needs may differ. In this work, we discuss the challenges of designing systems for non-visual ad-hoc web search and exploration and outline a set of proposed experiments tackling various aspects of non-visual web search.
{"title":"Towards Non-Visual Web Search","authors":"Alexandra Vtyurina","doi":"10.1145/3295750.3298976","DOIUrl":"https://doi.org/10.1145/3295750.3298976","url":null,"abstract":"Speech-based user interfaces and, in particular, voice-activated digital assistants are gaining popularity. Assistants provide their users with an opportunity for hands-free interaction, and present an additional accessibility level for people who are blind. According to prior research, informational searches form a noticeable fraction of user interactions with the assistants. All major commercially available assistants handle factoid questions well by providing an answer that is quick, concise, and to-the-point. However, for complex information seeking intents, when a deeper exploration and multi-turn interaction may be required, the assistants often do not produce the desired results. One of the main challenges for designing a voice-based web search system is the higher cognitive load for audio perception compared to visual perception. Additionally, close attention should be paid at differences in designing for different user groups, as their information seeking styles and design needs and may differ. 
In this work, we discuss the challenges of designing systems for non-visual ad-hoc web search and exploration and outline a set of proposed experiments tackling various aspects of non-visual web search.","PeriodicalId":187771,"journal":{"name":"Proceedings of the 2019 Conference on Human Information Interaction and Retrieval","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129606283","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An important contribution in the development of interactive information retrieval as a research discipline has been the specification of information seeking models. A variety of such models have been documented, some of which apply generally to a broad set of search settings, and others which are specific to settings such as academic search. Within the domain of academic search, it is unclear to what extent searchers employ the strategies specified in such models when faced with different types of information needs (ranging from fact verification to knowledge discovery). Using an online questionnaire that presented four different academic search scenarios, we collected data on the self-reported likelihood of researchers (professors, graduate students) to use specific strategies from each of five different information seeking models. Preliminary analysis of data from a pilot study (n=10) has revealed differences in which of the strategies are employed depending on the type of search scenario as well as the level of expertise of the searcher.
{"title":"A Study of Academic Search Scenarios and Information Seeking Behaviour","authors":"O. Hoeber, Dolinkumar Patel, D. Storie","doi":"10.1145/3295750.3298943","DOIUrl":"https://doi.org/10.1145/3295750.3298943","url":null,"abstract":"An important contribution in the development of interactive information retrieval as a research discipline has been the specification of information seeking models. A variety of such models have been documented, some of which apply generally to a broad set of search settings, and others which are specific to settings such as academic search. Within the domain of academic search, it is unclear to what extent searchers employ the strategies specified in such models when faced with different types of information needs (ranging from fact verification to knowledge discovery). Using an online questionnaire that presented four different academic search scenarios, we collected data on the self-reported likelihood of researchers (professors, graduate students) to use specific strategies from each of five different information seeking models. Preliminary analysis of data from a pilot study (n=10) has revealed differences in which of the strategies are employed depending on the type of search scenario as well as the level of expertise of the searcher.","PeriodicalId":187771,"journal":{"name":"Proceedings of the 2019 Conference on Human Information Interaction and Retrieval","volume":"120 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134080083","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
People often travel in groups, and information seeking occurs throughout the whole course of travel before various decisions can be made. This qualitative study investigates, through a grounded theory approach, how small groups of young Chinese leisure tourists conduct collaborative information seeking (CIS) to support their joint decision-making when travelling to Australia. Most of the existing literature on CIS is limited to workplace contexts. In addition, previous studies often failed to include the outcome of information seeking to better understand collaboration as a process of joint decision-making. This study aims to develop new models and theories of tourist CIS, propose appropriate methods to study CIS in leisure contexts, and provide practical implications for the design of CIS tools and systems for tourists. This research contributes to existing understandings of CIS by exploring the understudied leisure context, investigating it in a broader framework of joint decision-making, and looking at a comprehensive project where CIS occurs instead of individual information seeking tasks.
{"title":"Collaborative Information Seeking in Tourism: A Study of Young Chinese Leisure Tourists Visiting Australia","authors":"Mouda Ye","doi":"10.1145/3295750.3298979","DOIUrl":"https://doi.org/10.1145/3295750.3298979","url":null,"abstract":"People often travel in groups where information seeking occurs throughout the whole course of travel before various decisions can be made. This qualitative study investigates how small groups of young Chinese leisure tourists conduct collaborative information seeking (CIS) to support their joint decision-making as travelling to Australia through a grounded theory approach. Most of existing literature in CIS are limited to workplace contexts. In addition, previous studies often failed to include the outcome of information seeking to better understand collaboration as a process of joint decision-making. This study aims to develop new models and theories of tourist CIS, propose appropriate methods to study CIS in leisure contexts and provide practical implication regarding the design of CIS tools and systems for tourists. This research contributes to existing understandings of CIS by exploring the understudied leisure context, investigating it in a broader framework of joint decision-making, and looking at a comprehensive project where CIS occurs instead of individual information seeking tasks.","PeriodicalId":187771,"journal":{"name":"Proceedings of the 2019 Conference on Human Information Interaction and Retrieval","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116263201","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
L. M. Estrada, M. Koolen, K. Beelen, Hugo C. Huurdeman, Mari Wigham, C. Martinez-Ortiz, Jaap Blom, R. Ordelman
The practices of digital humanists are evolving, highly diversified, and experimental. There is also a lack of agreement about whether or not digital humanists should have data and programming skills. Thus, their underlying needs for higher levels of flexibility and transparency may be contradicted by their explicit requests for user-friendly graphical user interfaces (GUIs), creating challenges for designing information systems in the digital humanities. This paper describes the experience of designing the Media Suite, which provides access to important Dutch audiovisual collections and is part of the Dutch infrastructure for digital humanities. We outline a solution to the conflicting needs of scholars by combining a semi-traditional GUI with Jupyter notebooks. This solution tackles the needs of both novice and advanced users of digital research methods in the humanities. This demonstration paper explains how the Media Suite and the Jupyter notebooks work together, and elaborates on the rationale behind the design choices. We also outline the implications this hybrid and extensible approach has for interface design for the information science and scholarly community.
{"title":"The CLARIAH Media Suite: a Hybrid Approach to System Design in the Humanities","authors":"L. M. Estrada, M. Koolen, K. Beelen, Hugo C. Huurdeman, Mari Wigham, C. Martinez-Ortiz, Jaap Blom, R. Ordelman","doi":"10.1145/3295750.3298918","DOIUrl":"https://doi.org/10.1145/3295750.3298918","url":null,"abstract":"The practices of digital humanists are evolving, highly diversified and experimental. There is also a lack of agreement about whether or not digital humanists should have data and programming skills. Thus, their underlying needs for higher levels of flexibility and transparency may be contradicted by their explicit requests for user-friendly graphic user interfaces (GUIs), creating challenges for designing information systems in the digital humanities. This paper describes the experience of designing the Media Suite, which provides access to important Dutch audiovisual collections and is part of the Dutch infrastructure for digital humanities. We outline a solution to the conflicting needs of scholars, by combining a semi-traditional GUI with Jupyter Notebooks. This solution tackles the needs of both novice and advanced users in digital research methods in the humanities. This demonstration paper explains how the Media Suite and the Jupyter notebooks work together, and elaborates on the rationale behind the design choices. 
We also outline the implications this hybrid and extensible approach has for interface design for the information science and scholarly community.","PeriodicalId":187771,"journal":{"name":"Proceedings of the 2019 Conference on Human Information Interaction and Retrieval","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123867764","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Haitao Yu, A. Jatowt, Roi Blanco, J. Jose, K. Zhou
Query logs contain rich feedback information from a large number of users interacting with search engines. Various click models have been developed to decode users' search behavior and to extract useful knowledge from query logs. Although the state-of-the-art neural click models have been shown to be very effective in click modeling, the input representations of queries and documents rely on either manually crafted features or on automatic methods suffering from the high-dimensionality issue. Moreover, these neural click models are still rather restrictive when coping with commonly biased user clicks. In this paper, we investigate how to effectively deploy a neural network model for decoding users' click behavior. First, we present two novel rank-biased neural network models (RBNN and RBNN*) for click modeling. The key idea is to deploy different weight matrices across different rank positions. Second, we introduce a new method (QD-DCCA) for automatically learning the vector representations of both queries and documents within the same low-dimensional space, which provides high-quality inputs for RBNN and RBNN*. Finally, a series of experiments is conducted on two different real query logs to validate the effectiveness and efficiency of the proposed neural click models. The experiments demonstrate that: (1) The proposed models achieve substantially improved performance over the state-of-the-art baseline on two datasets across multiple metrics. By incorporating rank-specific weight matrices, RBNN and RBNN* are more capable of dealing with the position-bias problem. (2) The input representations of queries, documents, and context information significantly affect the performance of neural click models. Thanks to the application of QD-DCCA, not only RBNN and RBNN* but also the baseline method exhibit enhanced performance. Furthermore, the training cost under the proposed models is greatly reduced.
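The abstract does not specify the RBNN architecture, but its key idea, a distinct weight matrix per rank position, can be sketched in a few lines. The function name, the sigmoid output, and the toy feature vectors below are our illustrative assumptions, not the paper's actual model:

```python
import math

def rank_biased_click_scores(doc_features, rank_weights):
    """Illustrative sketch of rank-biased scoring: the document shown at
    each rank position is scored with that position's own weight vector,
    so position bias can be absorbed into rank-specific parameters.
    (Simplified assumption; the actual RBNN layers are not given here.)"""
    scores = []
    for rank, features in enumerate(doc_features):
        w = rank_weights[rank]  # a distinct weight vector per rank position
        z = sum(wi * xi for wi, xi in zip(w, features))
        scores.append(1.0 / (1.0 + math.exp(-z)))  # click probability (sigmoid)
    return scores
```

With identical documents at ranks 1 and 2 but different rank weights, the predicted click probabilities differ, which is exactly the degree of freedom a single shared weight matrix would lack.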
{"title":"A Rank-biased Neural Network Model for Click Modeling","authors":"Haitao Yu, A. Jatowt, Roi Blanco, J. Jose, K. Zhou","doi":"10.1145/3295750.3298920","DOIUrl":"https://doi.org/10.1145/3295750.3298920","url":null,"abstract":"Query logs contain rich feedback information from a large number of users interacting with search engines. Various click models have been developed to decode users' search behavior and to extract useful knowledge from query logs. Although the state-of-the-art neural click models have been shown to be very effective in click modeling, the input representations of queries and documents rely on either manually crafted features or on automatic methods suffering from the high-dimensionality issue. Moreover, these neural click models are still rather restrictive when coping with commonly biased user clicks. In this paper, we investigate how to effectively deploy a neural network model for decoding users' click behavior. First, we present two novel rank-biased neural network models ($RBNN$ and $RBNN^* $) for click modeling. The key idea is to deploy different weight matrices across different rank positions. Second, we introduce a new method ($QDmymathhyphen DCCA$) for automatically learning the vector representations for both queries and documents within the same low-dimensional space, which provides high-quality inputs for $RBNN$ and $RBNN^* $. Finally, a series of experiments are conducted on two different real query logs to validate the effectiveness and efficiency of the proposed neural click models. The experiments demonstrate that: (1) The proposed models can achieve substantially improved performance over the state-of-the-art baseline on two datasets across multiple metrics. By incorporating rank-specific weight matrices, $RBNN$ and $RBNN^* $ are more capable of dealing with the position-bias problem. (2) The input representations of queries, documents and context information significantly affect the performance of neural click models. 
Thanks to the application of $QDmymathhyphen DCCA$, not only $RBNN$ and $RBNN^* $ but also the baseline method exhibit enhanced performance. Furthermore, the training cost under the proposed models is greatly reduced.","PeriodicalId":187771,"journal":{"name":"Proceedings of the 2019 Conference on Human Information Interaction and Retrieval","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115920827","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A/B testing has become the de facto standard for optimizing design, helping designers craft more effective user experiences by leveraging data. A typical A/B test involves dividing user traffic between two experimental conditions (A and B) and looking for statistically significant differences in performance indicators (e.g., conversion rates) between them. While this technique is popular, there are other, powerful data-driven methods --- complementary to A/B testing --- that can tie design choices to desired outcomes. Mining data from existing designs can expose designers to a greater space of divergent solutions than A/B testing [1,4]. Since companies cannot predict a priori whether the engineering effort for creating alternatives will be commensurate with a performance increase, they often test small changes, along gradients to local optima. With the millions of websites and mobile apps available today, it is likely that almost any UX problem a designer encounters has already been considered and solved by someone. The challenges are finding relevant existing solutions, measuring their performance, and correlating these metrics with design features. Recent systems that capture and aggregate interaction data from third-party Android apps --- with zero code integration --- open-source analytics that were previously locked away in each app, allowing designers to test and compare UI/UX patterns found in the wild [2,3]. Lightweight prototypes with tight user feedback loops, or experimentation engines, can bootstrap product design involving technologies that are actively being developed (e.g., artificial intelligence, virtual/augmented reality), where both use cases and capabilities are not well understood [5]. These systems afford staged automation: initially, "Wizard of Oz" techniques can scaffold needfinding, and eventually be replaced with automated solutions informed by the collected data.
For example, a chatbot deployed on social media can serve as an experimentation engine for automating fashion advice [7]. At first, a pool of personal stylists can power the chatbot to collect organic conversations revealing common fashion problems, effective interaction patterns for addressing them, and design considerations for automation. Once technologies are developed to scale useful interventions [8,9], the chatbot platform provides a testbed for iteratively refining them. Generative models trained on a set of effective design examples can support predictive workflows that allow designers to rapidly prototype new, performant solutions [6]. Models such as generative adversarial networks and variational autoencoders can produce designs based on high-level constraints, or complete them given partial specifications. For example, a mobile wireframing tool backed by such a model could suggest adding "username" and "password" input fields to a screen with a centrally placed "login" button.
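The significance test behind a typical A/B comparison of conversion rates is a two-proportion z-test. A minimal stdlib-only sketch (the function name and example numbers are ours, not from the talk):

```python
import math

def ab_test_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test for an A/B test on conversion counts.
    Returns the z statistic and the two-sided p-value."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)            # pooled conversion rate
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF, via math.erf
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value
```

For instance, 120/2400 conversions in condition A against 150/2400 in condition B gives z ≈ 1.88 and p ≈ 0.06: suggestive, but not significant at the conventional 0.05 level, which illustrates why small design changes often need large traffic volumes to resolve.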
{"title":"Data-Driven Design: Beyond A/B Testing","authors":"Ranjitha Kumar","doi":"10.1145/3295750.3300046","DOIUrl":"https://doi.org/10.1145/3295750.3300046","url":null,"abstract":"A/B testing has become the de facto standard for optimizing design, helping designers craft more effective user experiences by leveraging data. A typical A/B test involves dividing user traffic between two experimental conditions (A and B), and looking for statistically significant differences in performance indicators (e.g., conversion rates) between them. While this technique is popular, there are other, powerful data-driven methods --- complementary to A/B testing --- that can tie design choices to desired outcomes. Mining data from existing designs can expose designers to a greater space of divergent solutions than A/B testing [1,4] ,RICO:2017. Since companies cannot predict a priori if the engineering effort for creating alternatives will be commensurate with a performance increase, they often test small changes, along gradients to local optima. With the millions of websites and mobile apps available today, it is likely that almost any UX problem a designer encounters has already been considered and solved by someone. The challenges are finding relevant existing solutions, measuring their performance, and correlating these metrics with design features. Recent systems that capture and aggregate interaction data from third-party Android apps --- with zero code integration --- open-source analytics that were previously locked away in each app, allowing designers to test and compare UI/UX patterns found in the wild: [2,3] 2017. Lightweight prototypes with tight user feedback loops, or experimentation engines, can bootstrap product design involving technologies that are actively being developed (e.g., artificial intelligence, virtual/augmented reality), where both use cases and capabilities are not well-understood [5]. 
These systems afford staged automation: initially, \"Wizard of Oz'' techniques can scaffold needfinding, and eventually be replaced with automated solutions informed by the collected data. For example, a chatbot deployed on social media can serve as an experimentation engine for automating fashion advice [7]. At first, a pool of personal stylists can power the chatbot to collect organic conversations revealing common fashion problems, effective interaction patterns for addressing them, and design considerations for automation. Once technologies are developed to scale useful interventions [8,9], the chatbot platform provides a testbed for iteratively refining them. Generative models trained on a set of effective design examples can support predictive workflows that allow designers to rapidly prototype new, performant solutions [6]. Models such as generative adversarial networks and variational autoencoders can produce designs based on high-level constraints, or complete them given partial specifications. For example, a mobile wireframing tool backed by such a model could suggest adding \"username\" and \"password\" input fields to a screen with a centrally placed \"login\" button.","PeriodicalId":187771,"journal":{"name":"Proceedings of the 2019 Conference on Human Information Interaction and Retrieval","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123425187","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Designing user studies in the interactive information retrieval (IIR) paradigm involving people with impairments may sometimes require different methodological considerations than for other users. Consequently, there may be a tension between what the community regards as a rigorous methodology and what researchers can do ethically with their users. This paper discusses issues to consider when designing IIR studies involving people with dyslexia, such as sampling, informed consent, and data collection. The conclusion is that conducting user studies with participants with dyslexia requires special considerations at all stages of the experimental design. The purpose of this paper is to raise awareness and understanding in the research community about experimental methods involving users with dyslexia, and it addresses researchers as well as editors and reviewers. Several of the issues raised do not only apply to people with dyslexia, but have implications when researching other groups, for instance elderly people and users with learning, cognitive, sensory or motor impairments.
{"title":"Experimental Methods in IIR: The Tension between Rigour and Ethics in Studies Involving Users with Dyslexia","authors":"G. Berget, A. MacFarlane","doi":"10.1145/3295750.3298939","DOIUrl":"https://doi.org/10.1145/3295750.3298939","url":null,"abstract":"Designing user studies in the interactive information retrieval (IIR) paradigm on people with impairments may sometimes require different methodological considerations than for other users. Consequently, there may be a tension between what the community regards as being a rigorous methodology against what researchers can do ethically with their users. This paper discusses issues to consider when designing IIR studies involving people with dyslexia, such as sampling, informed consent and data collection. The conclusion is that conducting user studies on participants with dyslexia requires special considerations at all stages of the experimental design. The purpose of this paper is to raise awareness and understanding in the research community about experimental methods involving users with dyslexia, and addresses researchers, as well as editors and reviewers. Several of the issues raised do not only apply to people with dyslexia, but have implications when researching other groups, for instance elderly people and users with learning, cognitive, sensory or motor impairments.","PeriodicalId":187771,"journal":{"name":"Proceedings of the 2019 Conference on Human Information Interaction and Retrieval","volume":"5 1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128916545","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We investigate the relationship between search behavior, eye-tracking measures, and learning. We conducted a user study in which 30 participants performed searches on the web. We measured their verbal knowledge before and after each task in a content-independent manner, by assessing the semantic similarity of their entries to expert vocabulary. We hypothesize that differences in participants' change in verbal knowledge are reflected in their search behaviors and eye-gaze measures related to acquiring information and reading. Our results show that participants with a higher change in verbal knowledge read significantly less and enter more sophisticated queries compared to those with a lower change in knowledge. However, we do not find significant differences in other search interactions such as page visits and number of queries.
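The abstract does not give the exact similarity computation, but the general idea of scoring a participant's entry against expert vocabulary can be sketched as follows. This uses a simple bag-of-words cosine similarity as a stand-in for whatever text representation the authors actually used; the function names are ours:

```python
import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    """Cosine similarity between bag-of-words term-frequency vectors."""
    va, vb = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

def knowledge_change(pre_entry, post_entry, expert_vocabulary):
    """Change in similarity to expert vocabulary from pre- to post-task:
    a rough, content-independent proxy for verbal knowledge gain."""
    return (cosine_similarity(post_entry, expert_vocabulary)
            - cosine_similarity(pre_entry, expert_vocabulary))
```

A post-task entry that adopts expert terms scores higher than the pre-task entry, yielding a positive change, regardless of the specific topic studied.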
{"title":"Measuring Learning During Search: Differences in Interactions, Eye-Gaze, and Semantic Similarity to Expert Knowledge","authors":"Nilavra Bhattacharya, J. Gwizdka","doi":"10.1145/3295750.3298926","DOIUrl":"https://doi.org/10.1145/3295750.3298926","url":null,"abstract":"We investigate the relationship between search behavior, eye -tracking measures, and learning. We conducted a user study where 30 participants performed searches on the web. We measured their verbal knowledge before and after each task in a content-independent manner, by assessing the semantic similarity of their entries to expert vocabulary. We hypothesize that differences in verbal knowledge-change of participants are reflected in their search behaviors and eye-gaze measures related to acquiring information and reading. Our results show that participants with higher change in verbal knowledge differ by reading significantly less, and entering more sophisticated queries, compared to those with lower change in knowledge. However, we do not find significant differences in other search interactions like page visits, and number of queries.","PeriodicalId":187771,"journal":{"name":"Proceedings of the 2019 Conference on Human Information Interaction and Retrieval","volume":"108 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128608437","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Toine Bogers, Samuel Dodson, Luanne Freund, Maria Gäde, M. Hall, M. Koolen, Vivien Petras, N. Pharo, M. Skov
ACM Reference Format: Toine Bogers, Samuel Dodson, Luanne Freund, Maria Gäde, Mark Hall, Marijn Koolen, Vivien Petras, Nils Pharo, and Mette Skov. 2019. Workshop on Barriers to Interactive IR Resources Re-use (BIIRRR 2019). In 2019 Conference on Human Information Interaction and Retrieval (CHIIR ’19), March 10–14, 2019, Glasgow, United Kingdom. ACM, New York, NY, USA, 4 pages. https://doi.org/10.1145/3295750.3298965
{"title":"Workshop on Barriers to Interactive IR Resources Re-use (BIIRRR 2019)","authors":"Toine Bogers, Samuel Dodson, Luanne Freund, Maria Gäde, M. Hall, M. Koolen, Vivien Petras, N. Pharo, M. Skov","doi":"10.1145/3295750.3298965","DOIUrl":"https://doi.org/10.1145/3295750.3298965","url":null,"abstract":"ACM Reference Format: Toine Bogers, Samuel Dodson, Luanne Freund, Maria Gäde, Mark Hall, Marijn Koolen, Vivien Petras, Nils Pharo, and Mette Skov. 2019. Workshop on Barriers to Interactive IR Resources Re-use (BIIRRR 2019). In 2019 Conference on Human Information Interaction and Retrieval (CHIIR ’19), March 10–14, 2019, Glasgow, United Kingdom. ACM, New York, NY, USA, 4 pages. https://doi.org/10.1145/3295750.3298965","PeriodicalId":187771,"journal":{"name":"Proceedings of the 2019 Conference on Human Information Interaction and Retrieval","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125421889","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}