Frequency-spatial interaction network for gaze estimation
Yuanning Jia, Zhi Liu, Ying Lv, Xiaofeng Lu, Xuefeng Liu, Jie Chen
DOI: 10.1016/j.displa.2024.102878
Displays, Volume 86, Article 102878. Published 2024-11-21.
Citations: 0
Abstract
Gaze estimation is a fundamental task in the field of computer vision, which determines the direction in which a person is looking. With advancements in Convolutional Neural Networks (CNNs) and the availability of large-scale datasets, appearance-based models have made significant progress. Nonetheless, CNNs exhibit limitations in extracting global information from features, which constrains gaze estimation performance. Inspired by the properties of the Fourier transform in signal processing, we propose the Frequency-Spatial Interaction network for Gaze estimation (FSIGaze), which integrates residual modules and Frequency-Spatial Synergistic (FSS) modules. Specifically, the FSS module is a dual-branch structure with a spatial branch and a frequency branch. The frequency branch employs the Fast Fourier Transform to map a latent representation to the frequency domain and applies an adaptive frequency filter to achieve an image-size receptive field. The spatial branch, on the other hand, extracts local detailed features. Acknowledging the synergistic benefits of global and local information in gaze estimation, we introduce a Dual-domain Interaction Block (DIB) to enhance the capability of the model. Furthermore, we implement a multi-task learning strategy, incorporating eye region detection as an auxiliary task to refine facial features. Extensive experiments demonstrate that our model surpasses other state-of-the-art gaze estimation models on three three-dimensional (3D) datasets and delivers competitive results on two two-dimensional (2D) datasets.
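The frequency branch described above can be illustrated with a minimal NumPy sketch: transform a feature map to the frequency domain, multiply by a per-frequency filter, and transform back. Because each frequency bin mixes information from every spatial location, one pointwise multiplication in the frequency domain gives every output element an image-size receptive field. The random filter weights and tensor shapes here are illustrative assumptions, not the paper's implementation (which learns the filter during training).

```python
import numpy as np

def frequency_branch(feature_map, rng=None):
    """Sketch of FFT-based global filtering on a (C, H, W) feature map.

    The "adaptive" filter is simulated with fixed random complex weights;
    in FSIGaze it would be a learned parameter.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    C, H, W = feature_map.shape
    # Real 2D FFT over the spatial dimensions of each channel.
    spectrum = np.fft.rfft2(feature_map, axes=(-2, -1))   # (C, H, W//2 + 1)
    # One complex weight per frequency bin (stand-in for the learned filter).
    filt = (rng.standard_normal(spectrum.shape)
            + 1j * rng.standard_normal(spectrum.shape))
    filtered = spectrum * filt
    # Inverse FFT returns to the spatial domain with the original shape.
    return np.fft.irfft2(filtered, s=(H, W), axes=(-2, -1))

x = np.random.default_rng(1).standard_normal((4, 8, 8))
y = frequency_branch(x)
```

Note that changing any single pixel of `x` perturbs every element of `y`, which is the global-receptive-field property the spatial branch's local convolutions lack.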
Journal Introduction:
Displays is the international journal covering the research and development of display technology, its effective presentation and perception of information, and applications and systems including display-human interface.
Technical papers on practical developments in display technology provide an effective channel to promote greater understanding and cross-fertilization across the diverse disciplines of the Displays community. Original research papers solving ergonomics issues at the display-human interface advance the effective presentation of information. Tutorial papers covering fundamentals, intended for display technology and human factors engineers new to the field, will also occasionally be featured.