{"title":"Deep learning vs. manual annotation of eye movements","authors":"Mikhail Startsev, I. Agtzidis, M. Dorr","doi":"10.1145/3204493.3208346","DOIUrl":null,"url":null,"abstract":"Deep Learning models have revolutionized many research fields already. However, the raw eye movement data is still typically processed into discrete events via threshold-based algorithms or manual labelling. In this work, we describe a compact 1D CNN model, which we combined with BLSTM to achieve end-to-end sequence-to-sequence learning. We discuss the acquisition process for the ground truth that we use, as well as the performance of our approach, in comparison to various literature models and manual raters. Our deep method demonstrates superior performance, which brings us closer to human-level labelling quality.","PeriodicalId":237808,"journal":{"name":"Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications","volume":"31 4","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3204493.3208346","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Deep learning models have already revolutionized many research fields. However, raw eye movement data are still typically processed into discrete events via threshold-based algorithms or manual labelling. In this work, we describe a compact 1D CNN model, which we combine with a BLSTM to achieve end-to-end sequence-to-sequence learning. We discuss the acquisition process for the ground truth we use, as well as the performance of our approach in comparison to various literature models and manual raters. Our deep method demonstrates superior performance, bringing us closer to human-level labelling quality.
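
To illustrate the kind of architecture the abstract describes, the sketch below shows a compact 1D CNN feeding a bidirectional LSTM that assigns an event label to every gaze sample (sequence-to-sequence labelling). This is a minimal illustration, not the authors' exact model: the feature count, layer sizes, and the assumed event classes (e.g. fixation, saccade, smooth pursuit, noise) are placeholder assumptions, and PyTorch is used only as an example framework.

```python
# Minimal sketch of a 1D CNN + BLSTM sequence-to-sequence labeller for gaze data.
# NOT the paper's exact architecture: all hyperparameters and the number of
# eye movement classes below are illustrative assumptions.
import torch
import torch.nn as nn


class CNNBLSTMLabeler(nn.Module):
    def __init__(self, n_features=5, n_classes=4, conv_channels=32, lstm_units=64):
        super().__init__()
        # 1D convolutions over the time axis extract local temporal patterns
        # (e.g. speed/acceleration profiles) around each gaze sample.
        self.cnn = nn.Sequential(
            nn.Conv1d(n_features, conv_channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(conv_channels, conv_channels, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # The BLSTM integrates context in both temporal directions, so each
        # sample's label can depend on what precedes and follows it.
        self.blstm = nn.LSTM(conv_channels, lstm_units,
                             batch_first=True, bidirectional=True)
        # Per-time-step classifier: one event label per gaze sample.
        self.head = nn.Linear(2 * lstm_units, n_classes)

    def forward(self, x):
        # x: (batch, time, features); Conv1d expects (batch, channels, time).
        h = self.cnn(x.transpose(1, 2)).transpose(1, 2)
        h, _ = self.blstm(h)
        return self.head(h)  # (batch, time, n_classes) logits


if __name__ == "__main__":
    model = CNNBLSTMLabeler()
    gaze = torch.randn(2, 1000, 5)   # 2 recordings, 1000 samples, 5 features
    logits = model(gaze)
    labels = logits.argmax(dim=-1)   # per-sample event labels
    print(labels.shape)              # torch.Size([2, 1000])
```

Training such a model against manually annotated recordings would typically use a per-sample cross-entropy loss, so that the network learns the event segmentation end to end rather than relying on hand-tuned velocity or dispersion thresholds.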