The saccadic suppression effect, in which visual sensitivity is significantly reduced during saccades, has been suggested as a mechanism for masking graphic updates in a 3D virtual environment. In this study, we investigate whether the degree of saccadic suppression depends on the type of image change, particularly between different natural 3D scene transformations. The user observed 3D scenes and made a horizontal saccade in response to the displacement of a target object in the scene. During this saccade the entire scene translated or rotated. We studied six directions of transformation corresponding to the canonical directions for the six degrees of freedom. Following each trial, the user made a forced-choice indication of the direction of the scene change. Results show that during horizontal saccades, the most recognizable changes were rotations about the roll axis.
{"title":"Sensitivity to natural 3D image transformations during eye movements","authors":"Maryam Keyvanara, R. Allison","doi":"10.1145/3204493.3204583","DOIUrl":"https://doi.org/10.1145/3204493.3204583","url":null,"abstract":"The saccadic suppression effect, in which visual sensitivity is reduced significantly during saccades, has been suggested as a mechanism for masking graphic updates in a 3D virtual environment. In this study, we investigate whether the degree of saccadic suppression depends on the type of image change, particularly between different natural 3D scene transformations. The user observed 3D scenes and made a horizontal saccade in response to the displacement of a target object in the scene. During this saccade the entire scene translated or rotated. We studied six directions of transformation corresponding to the canonical directions for the six degrees of freedom. Following each trial, the user made a forced-choice indication of direction of the scene change. Results show that during horizontal saccades, the most recognizable changes were rotations along the roll axis.","PeriodicalId":237808,"journal":{"name":"Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126235375","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Although fashion consumers are rapidly adopting smartphones, their dissatisfaction with retailers' mobile apps and websites is also increasing. This suggests that understanding how mobile consumers use smartphones for shopping is important in developing digital shopping platforms that fulfil consumers' expectations. Research to date has not focused on eye tracking consumer shopping behaviour using smartphones. For this research, we employed mobile eye tracking experiments to develop unique shopping journeys for each fashion consumer, accounting for differences and similarities in their behaviour. Based on scan path visualizations and shopping journeys, we developed a precise account of the areas most fashion consumers look at when browsing and inspecting product pages. Based on the findings, we identified mobile consumers' behaviour patterns and usability issues in the mobile channel, and established what features the mobile retail channel needs in order to satisfy fashion consumers by offering pleasing user experiences.
{"title":"Mobile consumer shopping journey in fashion retail: eye tracking mobile apps and websites","authors":"Zofija Tupikovskaja-Omovie, D. Tyler","doi":"10.1145/3204493.3208335","DOIUrl":"https://doi.org/10.1145/3204493.3208335","url":null,"abstract":"Despite the rapid adoption of smartphones among fashion consumers, their dissatisfaction with retailers' mobile apps and websites also increases. This suggests that understanding how mobile consumers use smartphones for shopping is important in developing digital shopping platforms fulfilling consumers' expectations. Research to date has not focused on eye tracking consumer shopping behavior using smartphones. For this research, we employed mobile eye tracking experiments in order to develop unique shopping journeys for each fashion consumer accounting for differences and similarities in their behavior. Based on scan path visualizations and shopping journeys we developed a precise account about the areas the majority of fashion consumers look at when browsing and inspecting product pages. Based on the findings, we identified mobile consumers' behaviour patterns, usability issues of the mobile channel and established what features the mobile retail channel needs to have to satisfy fashion consumers' needs by offering pleasing customer user experiences.","PeriodicalId":237808,"journal":{"name":"Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114898412","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Appearance-based gaze estimation is promising for unconstrained real-world settings, but the large variability in head pose and user-camera distance poses significant challenges for training generic gaze estimators. Data normalization was proposed to cancel out this geometric variability by mapping input images and gaze labels to a normalized space. Although used successfully in prior work, the role and importance of data normalization remain unclear. To fill this gap, we study data normalization for the first time using principled evaluations on both simulated and real data. We propose a modification to the current data normalization formulation by removing the scaling factor and show that our new formulation performs significantly better (by between 9.5% and 32.7%) across the different evaluation settings. Using images synthesized from a 3D face model, we demonstrate the benefit of data normalization for the efficiency of model training. Experiments on real-world images confirm the advantages of data normalization in terms of gaze estimation performance.
{"title":"Revisiting data normalization for appearance-based gaze estimation","authors":"Xucong Zhang, Yusuke Sugano, A. Bulling","doi":"10.1145/3204493.3204548","DOIUrl":"https://doi.org/10.1145/3204493.3204548","url":null,"abstract":"Appearance-based gaze estimation is promising for unconstrained real-world settings, but the significant variability in head pose and user-camera distance poses significant challenges for training generic gaze estimators. Data normalization was proposed to cancel out this geometric variability by mapping input images and gaze labels to a normalized space. Although used successfully in prior works, the role and importance of data normalization remains unclear. To fill this gap, we study data normalization for the first time using principled evaluations on both simulated and real data. We propose a modification to the current data normalization formulation by removing the scaling factor and show that our new formulation performs significantly better (between 9.5% and 32.7%) in the different evaluation settings. Using images synthesized from a 3D face model, we demonstrate the benefit of data normalization for the efficiency of the model training. Experiments on real-world images confirm the advantages of data normalization in terms of gaze estimation performance.","PeriodicalId":237808,"journal":{"name":"Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116795551","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Michael Barz, Florian Daiber, Daniel Sonntag, A. Bulling
Gaze estimation error can severely hamper the usability and performance of mobile gaze-based interfaces, given that the error varies constantly across interaction positions. In this work, we explore error-aware gaze-based interfaces that estimate and adapt to gaze estimation error on the fly. We implement a sample error-aware user interface for gaze-based selection and different error compensation methods: a naïve approach that increases component size in direct proportion to the absolute error, a recent model by Feit et al. that is based on the two-dimensional error distribution, and a novel predictive model that shifts gaze by a directional error estimate. We evaluate these models in a 12-participant user study and show that our predictive model significantly outperforms the others in terms of selection rate, particularly for small gaze targets. These results underline both the feasibility and the potential of next-generation error-aware gaze-based user interfaces.
{"title":"Error-aware gaze-based interfaces for robust mobile gaze interaction","authors":"Michael Barz, Florian Daiber, Daniel Sonntag, A. Bulling","doi":"10.1145/3204493.3204536","DOIUrl":"https://doi.org/10.1145/3204493.3204536","url":null,"abstract":"Gaze estimation error can severely hamper usability and performance of mobile gaze-based interfaces given that the error varies constantly for different interaction positions. In this work, we explore error-aware gaze-based interfaces that estimate and adapt to gaze estimation error on-the-fly. We implement a sample error-aware user interface for gaze-based selection and different error compensation methods: a naïve approach that increases component size directly proportional to the absolute error, a recent model by Feit et al. that is based on the two-dimensional error distribution, and a novel predictive model that shifts gaze by a directional error estimate. We evaluate these models in a 12-participant user study and show that our predictive model significantly outperforms the others in terms of selection rate, particularly for small gaze targets. These results underline both the feasibility and potential of next generation error-aware gaze-based user interfaces.","PeriodicalId":237808,"journal":{"name":"Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117318769","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fixations are widely analysed in human vision, gaze-based interaction, and experimental psychology research. However, robust fixation detection in mobile settings is profoundly challenging given the prevalence of user and gaze target motion. These movements mimic a shift in gaze estimates within the frame of reference defined by the eye tracker's scene camera. To address this challenge, we present a novel fixation detection method for head-mounted eye trackers. Our method exploits the fact that, independent of user or gaze target motion, the appearance of the gaze target remains about the same during a fixation. It extracts image information from small regions around the current gaze position and analyses the appearance similarity of these gaze patches across video frames to detect fixations. We evaluate our method using fine-grained fixation annotations on a five-participant indoor dataset (MPIIEgoFixation) with more than 2,300 fixations in total. Our method outperforms commonly used velocity- and dispersion-based algorithms, which highlights its significant potential for analysing scene image information for eye movement detection.
{"title":"Fixation detection for head-mounted eye tracking based on visual similarity of gaze targets","authors":"Julian Steil, Michael Xuelin Huang, A. Bulling","doi":"10.1145/3204493.3204538","DOIUrl":"https://doi.org/10.1145/3204493.3204538","url":null,"abstract":"Fixations are widely analysed in human vision, gaze-based interaction, and experimental psychology research. However, robust fixation detection in mobile settings is profoundly challenging given the prevalence of user and gaze target motion. These movements feign a shift in gaze estimates in the frame of reference defined by the eye tracker's scene camera. To address this challenge, we present a novel fixation detection method for head-mounted eye trackers. Our method exploits that, independent of user or gaze target motion, target appearance remains about the same during a fixation. It extracts image information from small regions around the current gaze position and analyses the appearance similarity of these gaze patches across video frames to detect fixations. We evaluate our method using fine-grained fixation annotations on a five-participant indoor dataset (MPIIEgoFixation) with more than 2,300 fixations in total. Our method outperforms commonly used velocity- and dispersion-based algorithms, which highlights its significant potential to analyse scene image information for eye movement detection.","PeriodicalId":237808,"journal":{"name":"Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124807199","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Thomas Mattusch, Mahsa Mirzamohammad, M. Khamis, A. Bulling, Florian Alt
The idea behind gaze interaction using Pursuits is to leverage the smooth pursuit eye movements humans perform when following moving targets. However, humans can also anticipate where a moving target will reappear if it is temporarily hidden from view. In this work, we investigate how well users can select targets using Pursuits when the target's trajectory is partially invisible (HiddenPursuits): e.g., can users select a moving target that temporarily hides behind another object? Although HiddenPursuits has not previously been studied in the context of interaction, understanding how well users can perform HiddenPursuits presents numerous opportunities, particularly for small interfaces where a target's trajectory can cover areas outside the screen. We found that users can still select targets quickly via Pursuits even if the trajectory is up to 50% hidden, albeit at the expense of longer selection times when the hidden portion is larger. We discuss how gaze-based interfaces can leverage HiddenPursuits for an improved user experience.
{"title":"Hidden pursuits: evaluating gaze-selection via pursuits when the stimuli's trajectory is partially hidden","authors":"Thomas Mattusch, Mahsa Mirzamohammad, M. Khamis, A. Bulling, Florian Alt","doi":"10.1145/3204493.3204569","DOIUrl":"https://doi.org/10.1145/3204493.3204569","url":null,"abstract":"The idea behind gaze interaction using Pursuits is to leverage the human's smooth pursuit eye movements performed when following moving targets. However, humans can also anticipate where a moving target would reappear if it temporarily hides from their view. In this work, we investigate how well users can select targets using Pursuits in cases where the target's trajectory is partially invisible (HiddenPursuits): e.g., can users select a moving target that temporarily hides behind another object? Although HiddenPursuits was not studied in the context of interaction before, understanding how well users can perform HiddenPursuits presents numerous opportunities, particularly for small interfaces where a target's trajectory can cover area outside of the screen. We found that users can still select targets quickly via Pursuits even if their trajectory is up to 50% hidden, and at the expense of longer selection times when the hidden portion is larger. We discuss how gaze-based interfaces can leverage HiddenPursuits for an improved user experience.","PeriodicalId":237808,"journal":{"name":"Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128304104","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Deep learning models have already revolutionized many research fields. However, raw eye movement data is still typically processed into discrete events via threshold-based algorithms or manual labelling. In this work, we describe a compact 1D CNN model, which we combine with a bidirectional LSTM (BLSTM) to achieve end-to-end sequence-to-sequence learning. We discuss the acquisition process for the ground truth we use, as well as the performance of our approach in comparison to various models from the literature and to manual raters. Our deep method demonstrates superior performance, which brings us closer to human-level labelling quality.
{"title":"Deep learning vs. manual annotation of eye movements","authors":"Mikhail Startsev, I. Agtzidis, M. Dorr","doi":"10.1145/3204493.3208346","DOIUrl":"https://doi.org/10.1145/3204493.3208346","url":null,"abstract":"Deep Learning models have revolutionized many research fields already. However, the raw eye movement data is still typically processed into discrete events via threshold-based algorithms or manual labelling. In this work, we describe a compact 1D CNN model, which we combined with BLSTM to achieve end-to-end sequence-to-sequence learning. We discuss the acquisition process for the ground truth that we use, as well as the performance of our approach, in comparison to various literature models and manual raters. Our deep method demonstrates superior performance, which brings us closer to human-level labelling quality.","PeriodicalId":237808,"journal":{"name":"Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications","volume":"31 4","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120998896","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In every quantitative eye tracking study, researchers need to compare eye movements between subjects or conditions. For both static and dynamic tasks, there is a variety of metrics that could serve this purpose, and it is important to explore their robustness with respect to artificial noise. For dynamic tasks, where eye movement data is represented as scanpaths, there are currently no studies on the robustness of these metrics. In this study, we explored the properties of five metrics (Levenshtein distance, correlation distance, Fréchet distance, mean and median distance) used for scanpath comparison. We systematically added noise by applying three transformations to the scanpaths: translation, rotation, and scaling. For each metric, we computed the baseline similarity of two random scanpaths and explored the metric's sensitivity. Our results allow other researchers to convert results between studies.
{"title":"Robustness of metrics used for scanpath comparison","authors":"F. Děchtěrenko, J. Lukavský","doi":"10.1145/3204493.3204580","DOIUrl":"https://doi.org/10.1145/3204493.3204580","url":null,"abstract":"In every quantitative eye tracking research study, researchers need to compare eye movements between subjects or conditions. For both static and dynamic tasks, there is a variety of metrics that could serve this purpose. It is important to explore the robustness of the metrics with respect to artificial noise. For dynamic tasks, where eye movement data is represented as scanpaths, there are currently no studies regarding the robustness of the metrics. In this study, we explored properties of five metrics (Levenshtein distance, correlation distance, Fréchet distance, mean and median distance) used for comparison of scanpaths. We systematically added noise by applying three transformations to the scanpaths: translation, rotation, and scaling. For each metric, we computed baseline similarity for two random scanpaths and explored the metrics' sensitivity. Our results allow other researchers to convert results between studies.","PeriodicalId":237808,"journal":{"name":"Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications","volume":"286 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116453743","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Alexandra Papoutsaki, Aaron Gokaslan, J. Tompkin, Yuze He, Jeff Huang
We examine the relationship between eye gaze and typing, focusing on the differences between touch and non-touch typists. To enable typing-based research, we created a 51-participant benchmark dataset for user input across multiple tasks, including user input data, screen recordings, webcam video of the participant's face, and eye tracking positions. Patterns of eye movements that differ between the two types of typists, reflecting glances at the keyboard, can be used to identify touch-typed strokes with 92% accuracy. We then relate eye gaze to cursor activity, aligning both pointing and typing with eye gaze. One demonstrative application of this work is extending WebGazer, a real-time web-browser-based webcam eye tracker. We show that incorporating typing behavior as a secondary signal improves eye tracking accuracy by 16% for touch typists and 8% for non-touch typists.
{"title":"The eye of the typer: a benchmark and analysis of gaze behavior during typing","authors":"Alexandra Papoutsaki, Aaron Gokaslan, J. Tompkin, Yuze He, Jeff Huang","doi":"10.1145/3204493.3204552","DOIUrl":"https://doi.org/10.1145/3204493.3204552","url":null,"abstract":"We examine the relationship between eye gaze and typing, focusing on the differences between touch and non-touch typists. To enable typing-based research, we created a 51-participant benchmark dataset for user input across multiple tasks, including user input data, screen recordings, webcam video of the participant's face, and eye tracking positions. There are patterns of eye movements that differ between the two types of typists, representing glances at the keyboard, which can be used to identify touch-.typed strokes with 92% accuracy. Then, we relate eye gaze with cursor activity, aligning both pointing and typing to eye gaze. One demonstrative application of the work is in extending WebGazer, a real-time web-browser-based webcam eye tracker. We show that incorporating typing behavior as a secondary signal improves eye tracking accuracy by 16% for touch typists, and 8% for non-touch typists.","PeriodicalId":237808,"journal":{"name":"Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124394534","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This research project addresses the understanding of attentional biases in post-traumatic stress disorder (PTSD). This psychiatric condition is mainly characterized by symptoms of intrusion (flashbacks), avoidance, alteration of arousal and reactivity (hypervigilance), and negative mood and cognitions persisting one month after exposure to a traumatic event [American Psychiatric Association 2013]. Clinical observations as well as empirical research have highlighted hypervigilance as central to PTSD symptomatology, considering that other clinical features could be maintained by it [Ehlers and Clark 2000]. Attentional control theory has described hypervigilance in anxiety disorders as the co-occurrence of two cognitive processes: enhanced detection of threatening information followed by difficulties in inhibiting its processing [Eysenck et al. 2007]. Nevertheless, attentional control theory has never been applied to PTSD. This project aims at providing cognitive evidence of hypervigilance symptoms in PTSD using eye tracking during the completion of reliable Miyake tasks [Eysenck and Derakshan 2011]. Our first aim is therefore to model the co-occurring processes of hypervigilance using eye-tracking technology. Indeed, behavioral measures (such as reaction time) do not allow a clear representation of cognitive processes occurring subconsciously within a few milliseconds [Felmingham 2016]. Eye-tracking technology is therefore essential in our studies. Secondly, we aim to analyze the differential impact of trauma-related versus negative stimuli on PTSD patients by examining scan paths following the presentation of both types of stimuli. This research project is divided into four studies; the first is described in this doctoral symposium.
{"title":"Automatic detection and inhibition of neutral and emotional stimuli in post-traumatic stress disorder: an eye-tracking study: eye-tracking data of an original antisaccade task","authors":"Wivine Blekić, M. Rossignol","doi":"10.1145/3204493.3207419","DOIUrl":"https://doi.org/10.1145/3204493.3207419","url":null,"abstract":"This research project addresses the understanding of attentional biases post-traumatic stress disorder (PTSD). This psychiatric condition is mainly characterized by symptoms of intrusion (flashbacks), avoidance, alteration of arousal and reactivity (hypervigilance), and negative mood and cognitions persisting one month after the exposure of a traumatic event [American Psychiatric Association 2013]. Clinical observations as well as empirical research highlighted the symptom of hypervigilance as being central in the PTSD symptomatology, considering that other clinical features could be maintained by it [Ehlers and Clark 2000]. Attentional Control theory has described the hypervigilance in anxious disorders as the co-occurrence of two cognitive processes : an enhanced detection of threatening information followed by difficulties to inhibit their processing [Eysenck et al. 2007]. Nevertheless, attentional control theory has never been applied to PTSD. This project aims at providing cognitive evidence of hypervigilance symptoms in PTSD using eye-tracking during the realization of reliable Miyake tasks [Eysenck and Derakshan 2011]. Therefore, our first aim is to model the co-occurring processes of hypervigilance using eye-tracking technology. Indeed, behavioral measures (as reaction time) do not allow a clear representation of cognitive processes occurring subconsciously in a few milliseconds [Felmingham 2016]. Therefore, eye-tracking technology is essential in our studies. Secondly, we aim to analyze the differential impact of trauma-related stimulus vs negative stimuli on PTSD patients, by conducting scan paths following both of those stimuli presentation. This research project is divided into four studies. The first one will be described is this doctoral symposium.","PeriodicalId":237808,"journal":{"name":"Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126619394","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}