Understanding visual search in graphical user interfaces
Aini Putkonen, Yue Jiang, Jingchun Zeng, Olli Tammilehto, Jussi P.P. Jokinen, Antti Oulasvirta
International Journal of Human-Computer Studies, Volume 199, Article 103483, March 2025
DOI: 10.1016/j.ijhcs.2025.103483
Citations: 0
Abstract
How do we find items within graphical user interfaces (GUIs)? Current understanding of this issue relies on studies using symbol matrices, natural scenes, and other non-GUI stimuli. To understand whether the effects discovered in those environments extend to mobile, desktop, and web interfaces, this paper reports on visual search performance and eye movements with 900 real-world GUIs. In an eye-tracking study, participants (N = 84) were given a cue (textual or image) describing a target to find within a GUI. The study found that the type of GUI, the absence/presence of the target, and cue type affected search time more than visual complexity did. We also compared visual search to free-viewing in GUIs, concluding that these two tasks are distinctly different. Synthesis of the results points to a Guess-Scan-Confirm pattern in visual search: in the first few fixations, gaze is frequently directed toward the top-left corner of the screen, a pattern possibly related to the top-left being a statistically likely location of the target or of information that could aid in finding it; attention then gets more selectively guided, in line with the GUI’s structure and the features of the target; and, finally, the user must confirm whether the target has been identified or, instead, that no target is visible. The VSGUI10K eye-tracking dataset (10,282 trials) is released for study and modeling of visual search.
About the journal:
The International Journal of Human-Computer Studies publishes original research across the whole spectrum of work relevant to the theory and practice of innovative interactive systems. The journal is inherently interdisciplinary, covering research in computing, artificial intelligence, psychology, linguistics, communication, design, engineering, and social organization that is relevant to the design, analysis, evaluation, and application of innovative interactive systems. Papers at the boundaries of these disciplines are especially welcome, as it is our view that interdisciplinary approaches are needed both for producing theoretical insights in this complex area and for the effective deployment of innovative technologies in concrete user communities.
Research areas relevant to the journal include, but are not limited to:
• Innovative interaction techniques
• Multimodal interaction
• Speech interaction
• Graphic interaction
• Natural language interaction
• Interaction in mobile and embedded systems
• Interface design and evaluation methodologies
• Design and evaluation of innovative interactive systems
• User interface prototyping and management systems
• Ubiquitous computing
• Wearable computers
• Pervasive computing
• Affective computing
• Empirical studies of user behaviour
• Empirical studies of programming and software engineering
• Computer supported cooperative work
• Computer mediated communication
• Virtual reality
• Mixed and augmented reality
• Intelligent user interfaces
• Presence
...