Daniel Hienert, Dagmar Kern, M. Mitsui, C. Shah, N. Belkin
{"title":"Reading Protocol: Understanding what has been Read in Interactive Information Retrieval Tasks","authors":"Daniel Hienert, Dagmar Kern, M. Mitsui, C. Shah, N. Belkin","doi":"10.1145/3295750.3298921","DOIUrl":null,"url":null,"abstract":"In Interactive Information Retrieval (IIR) experiments the user's gaze motion on web pages is often recorded with eye tracking. The data is used to analyze gaze behavior or to identify Areas of Interest (AOI) the user has looked at. So far, tools for analyzing eye tracking data have certain limitations in supporting the analysis of gaze behavior in IIR experiments. Experiments often consist of a huge number of different visited web pages. In existing analysis tools the data can only be analyzed in videos or images and AOIs for every single web page have to be specified by hand, in a very time consuming process. In this work, we propose the reading protocol software which breaks eye tracking data down to the textual level by considering the HTML structure of the web pages. This has a lot of advantages for the analyst. First and foremost, it can easily be identified on a large scale what has actually been viewed and read on the stimuli pages by the subjects. Second, the web page structure can be used to filter to AOIs. Third, gaze data of multiple users can be presented on the same page, and fourth, fixation times on text can be exported and further processed in other tools. 
We present the software, its validation, and example use cases with data from three existing IIR experiments.","PeriodicalId":187771,"journal":{"name":"Proceedings of the 2019 Conference on Human Information Interaction and Retrieval","volume":"36 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2019 Conference on Human Information Interaction and Retrieval","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3295750.3298921","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 8
Abstract
In Interactive Information Retrieval (IIR) experiments, the user's gaze motion on web pages is often recorded with eye tracking. The data is used to analyze gaze behavior or to identify Areas of Interest (AOIs) the user has looked at. So far, tools for analyzing eye-tracking data have certain limitations in supporting the analysis of gaze behavior in IIR experiments. Experiments often involve a huge number of different visited web pages. In existing analysis tools, the data can only be analyzed in videos or images, and AOIs for every single web page have to be specified by hand in a very time-consuming process. In this work, we propose the reading protocol software, which breaks eye-tracking data down to the textual level by considering the HTML structure of the web pages. This has many advantages for the analyst. First and foremost, it can easily be identified on a large scale what the subjects have actually viewed and read on the stimulus pages. Second, the web page structure can be used to filter AOIs. Third, gaze data of multiple users can be presented on the same page, and fourth, fixation times on text can be exported and further processed in other tools. We present the software, its validation, and example use cases with data from three existing IIR experiments.
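The core idea described above, attributing fixations to text elements via the page's HTML structure rather than hand-drawn AOIs, can be sketched as follows. This is a minimal illustration, not the authors' actual implementation: the element and fixation records, field names, and matching logic are all hypothetical assumptions, and a real pipeline would obtain element bounding boxes from the rendered DOM (e.g. via `getBoundingClientRect`).

```python
from dataclasses import dataclass

@dataclass
class Element:
    """Hypothetical stand-in for an HTML text node with its rendered bounding box."""
    selector: str  # e.g. a CSS selector identifying the node
    x: float
    y: float
    width: float
    height: float

@dataclass
class Fixation:
    """A single eye-tracking fixation in page coordinates."""
    x: float
    y: float
    duration_ms: int

def contains(el: Element, fx: Fixation) -> bool:
    """True if the fixation point falls inside the element's bounding box."""
    return (el.x <= fx.x <= el.x + el.width
            and el.y <= fx.y <= el.y + el.height)

def fixation_time_per_element(elements, fixations):
    """Sum fixation durations per element; fixations outside all boxes are ignored."""
    totals = {el.selector: 0 for el in elements}
    for fx in fixations:
        for el in elements:
            if contains(el, fx):
                totals[el.selector] += fx.duration_ms
                break  # attribute each fixation to at most one element
    return totals
```

With per-element fixation times in hand, the remaining steps the abstract lists (filtering by page structure, overlaying multiple users, exporting for other tools) reduce to grouping and serializing this dictionary, e.g. keyed additionally by user ID.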