{"title":"Computational approaches to apply the String Edit Algorithm to create accurate visual scan paths.","authors":"Ricardo Palma Fraga, Ziho Kang","doi":"10.16910/jemr.17.4.4","DOIUrl":null,"url":null,"abstract":"<p><p>Eye movement detection algorithms (e.g., I-VT) require the selection of thresholds to identify eye fixations and saccadic movements from gaze data. The choice of threshold is important, as thresholds too low or large may fail to accurately identify eye fixations and saccades. An inaccurate threshold might also affect the resulting visual scan path, the time-ordered sequence of eye fixations and saccades, carried out by the participant. Commonly used approaches to evaluate threshold accuracy can be manually laborious, or require information about the expected visual scan paths of participants, which might not be available. To address this issue, we propose two different computational approaches, labeled as \"between-participants comparisons\" and \"within-participants comparisons.\" The approaches were evaluated using the open-source Gazebase dataset, which contained a bullseyetarget tracking task, where participants were instructed to follow the movements of a bullseye-target. The predetermined path of the bullseye-target enabled us to evaluate our proposed approaches against the expected visual scan path. The approaches identified threshold values (220°/s and 210°/s) that were 83% similar to the expected visual scan path, outperforming a 30°/s benchmark threshold (41.5%). These methods might assist researchers in identifying accurate threshold values for the IVT algorithm or potentially other eye movement detection algorithms.</p>","PeriodicalId":15813,"journal":{"name":"Journal of Eye Movement Research","volume":"17 4","pages":""},"PeriodicalIF":1.3000,"publicationDate":"2024-11-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11714258/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Eye Movement Research","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.16910/jemr.17.4.4","RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/1/1 0:00:00","PubModel":"eCollection","JCR":"Q3","JCRName":"OPHTHALMOLOGY","Score":null,"Total":0}
Citations: 0
Abstract
Eye movement detection algorithms (e.g., I-VT) require the selection of thresholds to identify eye fixations and saccadic movements from gaze data. The choice of threshold is important, as thresholds that are too low or too high may fail to accurately identify eye fixations and saccades. An inaccurate threshold might also affect the resulting visual scan path, i.e., the time-ordered sequence of eye fixations and saccades carried out by the participant. Commonly used approaches to evaluating threshold accuracy can be manually laborious, or require information about participants' expected visual scan paths, which might not be available. To address this issue, we propose two computational approaches, labeled "between-participants comparisons" and "within-participants comparisons." The approaches were evaluated using the open-source GazeBase dataset, which contains a bullseye-target tracking task in which participants were instructed to follow the movements of a bullseye target. The predetermined path of the bullseye target enabled us to evaluate our proposed approaches against the expected visual scan path. The approaches identified threshold values (220°/s and 210°/s) that were 83% similar to the expected visual scan path, outperforming a 30°/s benchmark threshold (41.5%). These methods might assist researchers in identifying accurate threshold values for the I-VT algorithm, and potentially other eye movement detection algorithms.
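To make the two ingredients described above concrete, the sketch below is a minimal, self-contained Python illustration, not the authors' implementation: an I-VT-style classifier that labels gaze samples as fixations or saccades against a velocity threshold, and a string-edit (Levenshtein) similarity between an observed and an expected scan path string. The function names, the AOI labeling scheme, and the sample velocities are all hypothetical.

```python
# Sketch of I-VT classification plus string-edit scan path similarity.
# All names, labels, and sample data are hypothetical illustrations.

def ivt_classify(velocities_deg_per_s, threshold_deg_per_s):
    """Label each gaze sample 'F' (fixation) if its angular velocity is
    below the threshold, otherwise 'S' (saccade)."""
    return ['F' if v < threshold_deg_per_s else 'S' for v in velocities_deg_per_s]

def levenshtein(a, b):
    """Classic dynamic-programming string edit distance."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[m][n]

def scanpath_similarity(observed, expected):
    """Percent similarity between two scan path strings, where each
    character is an area-of-interest label visited by a fixation."""
    if not observed and not expected:
        return 100.0
    dist = levenshtein(observed, expected)
    return 100.0 * (1 - dist / max(len(observed), len(expected)))

if __name__ == "__main__":
    # Hypothetical angular velocities (deg/s) for a few gaze samples.
    velocities = [12.0, 8.5, 310.0, 295.0, 15.2, 9.8]
    print(ivt_classify(velocities, threshold_deg_per_s=30.0))
    # -> ['F', 'F', 'S', 'S', 'F', 'F']

    # Hypothetical scan paths: letters are AOI labels in time order.
    print(f"{scanpath_similarity('ABCDE', 'ABDDE'):.1f}% similar")
    # -> 80.0% similar
```

In this framing, a candidate threshold can be scored by classifying the gaze data with it, mapping the resulting fixations to AOI labels, and comparing the observed scan path string against an expected one; the paper's 83% and 41.5% figures are similarities of this kind.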
About the journal:
The Journal of Eye Movement Research is an open-access, peer-reviewed scientific periodical devoted to all aspects of oculomotor functioning, including the methodology of eye movement recording, neurophysiological and cognitive models, attention, and reading, as well as applications in neurology, ergonomics, media research, and other areas.