How Relevance Feedback is Framed Affects User Experience, but not Behaviour
Dhruv Tripathi, A. Medlar, D. Glowacka
DOI: https://doi.org/10.1145/3295750.3298957

Retrieval systems based on machine learning require both positive and negative examples to perform inference, which are usually obtained through relevance feedback. Unfortunately, explicit negative relevance feedback is thought to result in a poor user experience, so systems typically rely on implicit negative feedback instead. In this study, we confirm that, in the case of binary relevance feedback, users prefer giving positive feedback (with implicit negative feedback) over negative feedback (with implicit positive feedback). These two feedback mechanisms are functionally equivalent, capturing the same information from the user, but differ in how they are framed. Despite users' preference for positive feedback, there were no significant differences in behaviour. As users were not shown how feedback influenced search results, we hypothesise that previously reported results could, at least in part, be due to cognitive biases related to user perception of negative feedback.
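To make the framing equivalence concrete, here is a minimal Python sketch (an illustration, not the study's code): over a fixed result list, explicit positives with implicit negatives induce exactly the same labels as explicit negatives with implicit positives, so a downstream learner receives an identical training signal under either framing.

```python
# Binary relevance feedback over a fixed result list: the two framings
# differ only in which half of the partition the user marks explicitly.

def labels_from_positive_framing(results, marked_positive):
    """Explicit positives; everything unmarked is an implicit negative."""
    return {doc: (doc in marked_positive) for doc in results}

def labels_from_negative_framing(results, marked_negative):
    """Explicit negatives; everything unmarked is an implicit positive."""
    return {doc: (doc not in marked_negative) for doc in results}

results = ["d1", "d2", "d3", "d4"]
relevant = {"d1", "d3"}

# A user judging the same way under either framing yields identical labels.
assert labels_from_positive_framing(results, relevant) == \
       labels_from_negative_framing(results, set(results) - relevant)
```
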
From Delivering Facts to Generating Emotions: The Complex Relationship between Museums and Information
Daniela Petrelli
DOI: https://doi.org/10.1145/3295750.3300047

Motivation: The past 25 years have seen a constant increase in the use of information technology to deliver digital content in cultural heritage settings. Museums have experimented with multimedia PCs, PDAs and phones, table-tops, Google Glass and now VR. The aim has always been to provide more information, despite the fact that only a minority of visitors consume the information on offer. Failing to engage visitors should direct our concerns to the 'receiving' side rather than the 'delivering' side, that is to say, to the visitors' experience rather than the technology [1]. Problem statement: The problem lies in the way the interactive experience is designed: too often it is an 'add-on' to the physical exhibition rather than an integral part of the experience. The emerging Internet of Things bridges the gap between the physical and the digital and makes it possible to seamlessly integrate digital content with the material collection or the historical space. Via embedded technology it is possible to collect and exploit visitor data, opening up new possibilities for creating engaging and personalised visitor experiences onsite and online. Approach: Using a number of case studies of exhibitions and installations used by over 20,000 visitors across Europe, I will show how the interaction with information can be designed as part of multisensory exhibitions that engage the visitor at many levels and generate emotion. The approach is collaborative and requires the equal contribution of technologists, designers and content experts throughout the whole process, from early conception to final implementation. The response of visitors has gone well beyond expectations, opening up new opportunities for long-term visitor engagement.
Challenges and Supports for Accessing Open Government Datasets: Data Guide for Better Open Data Access and Uses
Fanghui Xiao, Daqing He, Yu Chi, Wei Jeng, C. Tomer
DOI: https://doi.org/10.1145/3295750.3298958

The importance of open government data is often associated with increased public trust, civic engagement, and accountable administrations. While the benefits are myriad, the existing literature suggests that many open government datasets lack accessibility and usability for diverse users. This study explores what contextual information users require when they access these datasets. Using mixed methods, we aim to identify the challenges of accessing data and the contextual information users need to overcome those challenges. As the outcome of this study, we propose a framework called "Data Guides", composed of the contextual information identified as important. In future work, we will test the effectiveness of Data Guides in helping users access and understand open government data.
Reading Protocol: Understanding what has been Read in Interactive Information Retrieval Tasks
Daniel Hienert, Dagmar Kern, M. Mitsui, C. Shah, N. Belkin
DOI: https://doi.org/10.1145/3295750.3298921

In Interactive Information Retrieval (IIR) experiments, the user's gaze motion on web pages is often recorded with eye tracking. The data is used to analyze gaze behavior or to identify Areas of Interest (AOIs) the user has looked at. So far, tools for analyzing eye-tracking data have had certain limitations in supporting the analysis of gaze behavior in IIR experiments. Experiments often involve a large number of different visited web pages, and in existing analysis tools the data can only be analyzed in videos or images, with AOIs for every single web page specified by hand in a very time-consuming process. In this work, we propose the reading protocol software, which breaks eye-tracking data down to the textual level by considering the HTML structure of the web pages. This has several advantages for the analyst. First and foremost, it is easy to identify, on a large scale, what subjects actually viewed and read on the stimulus pages. Second, the web page structure can be used to filter AOIs. Third, gaze data from multiple users can be presented on the same page. Fourth, fixation times on text can be exported and processed further in other tools. We present the software, its validation, and example use cases with data from three existing IIR experiments.
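The core idea lends itself to a short illustration. The sketch below (function names and record layouts are our assumptions, not the authors' implementation) assigns fixations to text elements via bounding boxes taken from the page's HTML structure and aggregates fixation time per element, replacing hand-drawn AOIs:

```python
# Map eye-tracker fixations onto text elements using the bounding boxes
# the browser already knows from the DOM, then sum reading time per element.
from collections import defaultdict

# (element_id, text, left, top, width, height) as exported from the DOM
elements = [
    ("p1", "First paragraph of the stimulus page...",  40, 100, 600, 40),
    ("p2", "Second paragraph of the stimulus page...", 40, 150, 600, 40),
]

# (x, y, duration_ms) fixations from the eye tracker, in page coordinates
fixations = [(120, 115, 230), (300, 128, 410), (90, 160, 180)]

def element_at(x, y):
    """Return the id of the text element whose box contains (x, y), if any."""
    for elem_id, _text, left, top, w, h in elements:
        if left <= x <= left + w and top <= y <= top + h:
            return elem_id
    return None  # fixation fell outside any text element

# Aggregate fixation time per element; multiple users can be merged the
# same way by summing over their fixation streams.
reading_time = defaultdict(int)
for x, y, dur in fixations:
    elem = element_at(x, y)
    if elem is not None:
        reading_time[elem] += dur

print(dict(reading_time))  # e.g. {'p1': 640, 'p2': 180}
```
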
Answering Comparative Questions: Better than Ten-Blue-Links?
Matthias Schildwächter, Alexander Bondarenko, Julian Zenker, Matthias Hagen, Chris Biemann, Alexander Panchenko
DOI: https://doi.org/10.1145/3295750.3298916

We present CAM (comparative argumentative machine), a novel open-domain IR system that argumentatively compares objects with respect to information extracted from the Common Crawl. In a user study, participants obtained 15% more accurate answers using CAM than with a "traditional" keyword-based search, and were 20% faster in finding answers to comparative questions.
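As a rough illustration of what comparing objects at the sentence level can look like, the toy sketch below finds sentences that mention both objects and tallies which object each sentence favors. The cue-based heuristic is ours for illustration only and is far simpler than CAM's actual pipeline:

```python
# Toy comparative retrieval: collect sentences mentioning both objects
# and attribute each one to the object it appears to favor.
COMPARATIVE_CUES = ["better than", "faster than", "superior to"]

def favors(sentence, obj_a, obj_b):
    """Crude heuristic: the object named before a comparative cue wins."""
    s = sentence.lower()
    for cue in COMPARATIVE_CUES:
        if cue in s and obj_a in s and obj_b in s:
            return obj_a if s.index(obj_a) < s.index(cue) else obj_b
    return None  # sentence is not a usable comparison

corpus = [
    "Python is better than Perl for readability.",
    "For one-liners, Perl is faster than Python to write.",
]

def compare(obj_a, obj_b, sentences):
    """Group supporting sentences by the object they favor."""
    tally = {obj_a: [], obj_b: []}
    for sent in sentences:
        winner = favors(sent, obj_a, obj_b)
        if winner:
            tally[winner].append(sent)
    return tally

print(compare("python", "perl", corpus))
```
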
User Intent Prediction in Information-seeking Conversations
Chen Qu, Liu Yang, W. Bruce Croft, Yongfeng Zhang, Johanne R. Trippas, Minghui Qiu
DOI: https://doi.org/10.1145/3295750.3298924

Conversational assistants are being progressively adopted by the general population. However, they are not capable of handling complicated information-seeking tasks that involve multiple turns of information exchange. Due to the limited communication bandwidth in conversational search, it is important for conversational assistants to accurately detect and predict user intent in information-seeking conversations. In this paper, we investigate two aspects of user intent prediction in an information-seeking setting. First, we extract features based on the content, structural, and sentiment characteristics of a given utterance, and use classic machine learning methods to perform user intent prediction. We then conduct an in-depth feature importance analysis to identify key features in this prediction task. We find that structural features contribute most to the prediction performance. Given this finding, we construct neural classifiers to incorporate context information and achieve better performance without feature engineering. Our findings can provide insights into the important factors and effective methods of user intent prediction in information-seeking conversations.
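The first stage of this pipeline can be sketched in a few lines. In the example below, the specific features, the toy dialog, and the intent labels are illustrative assumptions; the paper's actual feature set and label taxonomy are richer:

```python
# Per-utterance content/structural/sentiment-style features fed to a
# classic classifier, followed by the kind of feature-importance analysis
# the paper reports (here over toy data, so the numbers mean nothing).
from sklearn.ensemble import RandomForestClassifier

def features(utterance, position_in_dialog):
    return [
        position_in_dialog,                  # structural: turn number
        len(utterance.split()),              # structural: utterance length
        int("?" in utterance),               # content: contains a question
        int("thanks" in utterance.lower()),  # sentiment-ish: gratitude cue
    ]

dialog = [
    ("How do I reset my password?", 0, "OQ"),     # original question
    ("Did you try the settings page?", 1, "CQ"),  # clarifying question
    ("Thanks, that worked!", 2, "PF"),            # positive feedback
]

X = [features(utt, pos) for utt, pos, _ in dialog]
y = [label for _, _, label in dialog]

clf = RandomForestClassifier(n_estimators=10, random_state=0).fit(X, y)
print(clf.predict([features("Is there another way?", 3)]))
print(clf.feature_importances_)  # which features drive the prediction
```
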
Answer Interaction in Non-factoid Question Answering Systems
Chen Qu, Liu Yang, W. Bruce Croft, Falk Scholer, Yongfeng Zhang
DOI: https://doi.org/10.1145/3295750.3298946

Information retrieval systems are evolving from document retrieval to answer retrieval. Web search logs provide large amounts of data about how people interact with ranked lists of documents, but very little is known about interaction with answer texts. In this paper, we use Amazon Mechanical Turk to investigate three answer presentation and interaction approaches in a non-factoid question answering setting. We find that people perceive and react to good and bad answers very differently, and can identify good answers relatively quickly. Our results provide the basis for further investigation of effective answer interaction and feedback methods.
Understanding Mobile Search Task Relevance and User Behaviour in Context
Mohammad Aliannejadi, Morgan Harvey, Luca Costa, Matthew Pointon, F. Crestani
DOI: https://doi.org/10.1145/3295750.3298923

Improvements in mobile technologies have led to a dramatic change in how and when people access and use information, and are having a profound impact on how users address their daily information needs. Smartphones are rapidly becoming our main method of accessing information and are frequently used to perform "on-the-go" search tasks. Although research into information retrieval continues to evolve, evaluating search behaviour in context is relatively new. Previous research has studied the effects of context through either self-reported diary studies or quantitative log analysis; however, neither approach can accurately capture the context of use at the time of searching. In this study, we aim to gain a better understanding of task relevance and search behaviour via a task-based user study (n=31) employing a bespoke Android app. The app allowed us to accurately capture the user's context when completing tasks at different times of day over the period of a week. Through analysis of the collected data, we gain a better understanding of how using smartphones on the go impacts search behaviour, search performance and task relevance, and whether or not the actual context is an important factor.
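As a sketch of the kind of in-situ context record such a bespoke study app might log alongside each completed task, consider the following; the field names and values are our assumptions for illustration, not the study's actual schema:

```python
# One context record per completed search task, logged at task time so
# that context is captured in situ rather than reconstructed later.
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class SearchContext:
    task_id: str
    timestamp: float     # when the task was completed
    time_of_day: str     # e.g. "morning", "evening"
    location_kind: str   # e.g. "home", "transit", "work"
    motion_state: str    # e.g. "still", "walking"

record = SearchContext("t07", time.time(), "evening", "transit", "walking")
print(json.dumps(asdict(record)))  # one log line per completed task
```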