Title: Impending Success or Failure? An Investigation of Gaze-Based User Predictions During Interaction with Ontology Visualizations
Authors: Bo Fu, B. Steichen
DOI: 10.1145/3531073.3531081 (https://doi.org/10.1145/3531073.3531081)
Published in: Proceedings of the 2022 International Conference on Advanced Visual Interfaces
Publication date: 2022-06-06
Citations: 2
Abstract
Designing and developing innovative visualizations to assist humans in generating and understanding complex semantic data has become an important element in supporting effective human-ontology interaction, as visual cues are likely to provide clarity, promote insight, and amplify cognition. While recent research has indicated potential benefits of applying novel adaptive technologies, typical ontology visualization techniques have traditionally followed a one-size-fits-all approach that often ignores an individual user's preferences, abilities, and visual needs. In an effort to realize adaptive ontology visualization, this paper presents a potential solution for predicting a user's likely success or failure in real time, prior to task completion, by applying established machine learning models to eye gaze data generated during an interactive session. These predictions are envisioned to inform future adaptive ontology visualizations that could adjust their visual cues or recommend alternative visualizations in real time to improve individual user success. This paper presents findings from a series of experiments demonstrating that real-time, gaze-based success and failure prediction is feasible with a number of off-the-shelf classifiers, without the need for expert configuration, across mixed user backgrounds and task domains and two commonly used fundamental ontology visualization techniques.
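To make the general pipeline concrete, the following is a minimal, self-contained sketch of the kind of mid-task prediction the abstract describes. The gaze features (mean fixation duration, fixation count, mean saccade amplitude) and the nearest-centroid classifier are illustrative assumptions standing in for the paper's actual features and off-the-shelf classifiers, and the training data is toy data, not the study's.

```python
# Illustrative sketch only: feature names and classifier are assumptions,
# not the paper's actual configuration.

def extract_features(fixations):
    """Summarize a partial gaze record (list of (duration_ms, x, y)
    fixations) into a fixed-length vector, computable mid-task."""
    durations = [d for d, _, _ in fixations]
    mean_dur = sum(durations) / len(durations)
    # Saccade amplitude: Euclidean distance between consecutive fixations.
    amps = [((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
            for (_, x1, y1), (_, x2, y2) in zip(fixations, fixations[1:])]
    mean_amp = sum(amps) / len(amps) if amps else 0.0
    return (mean_dur, len(fixations), mean_amp)

class NearestCentroid:
    """Minimal stand-in for an off-the-shelf classifier: predicts the
    class whose mean training feature vector is closest."""
    def fit(self, X, y):
        self.centroids = {}
        for label in set(y):
            rows = [x for x, lab in zip(X, y) if lab == label]
            self.centroids[label] = tuple(sum(col) / len(rows)
                                          for col in zip(*rows))
        return self

    def predict(self, x):
        return min(self.centroids,
                   key=lambda lab: sum((a - b) ** 2
                                       for a, b in zip(x, self.centroids[lab])))

# Toy training data: gaze summaries labeled with the eventual task outcome.
X_train = [(220.0, 14, 90.0), (240.0, 16, 85.0),   # successful sessions
           (450.0, 40, 30.0), (430.0, 38, 35.0)]   # failed sessions
y_train = ["success", "success", "failure", "failure"]

clf = NearestCentroid().fit(X_train, y_train)

# Mid-task prediction from a partial fixation stream, before completion.
partial = [(440, 100, 100), (420, 110, 120), (460, 105, 130)]
print(clf.predict(extract_features(partial)))
```

In an adaptive system, the prediction would be recomputed as new fixations arrive, letting the visualization intervene (e.g. by adjusting visual cues) as soon as a likely failure is detected rather than after the task ends.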