{"title":"基于多特征集成全卷积网络的增强现实辅助导航环境中动态视觉注意检测","authors":"Qiaosong Hei, Weihua Dong, Bowen Shi","doi":"10.1080/15230406.2022.2154271","DOIUrl":null,"url":null,"abstract":"ABSTRACT Visual attention detection, as an important concept for human visual behavior research, has been widely studied. However, previous studies seldom considered the feature integration mechanism to detect visual attention and rarely considered the differences due to different geographical scenes. In this paper, we use an augmented reality aided (AR-aided) navigation experimental dataset to study human visual behavior in a dynamic AR-aided environment. Then, we propose a multi-feature integration fully convolutional network (M-FCN) based on a self-adaptive environment weight (SEW) to integrate RGB-D, semantic, optical flow and spatial neighborhood features to detect human visual attention. The result shows that the M-FCN performs better than other state-of-the-art saliency models. In addition, the introduction of feature integration mechanism and the SEW can improve the accuracy and robustness of visual attention detection. Meanwhile, we find that RGB-D and semantic features perform best in different road routes and road types, but with the increase in road type complexity, the expressiveness of these two features weakens, and the expressiveness of optical flow and spatial neighborhood features increases. The research is helpful for AR-device navigation tool design and urban spatial planning.","PeriodicalId":47562,"journal":{"name":"Cartography and Geographic Information Science","volume":"50 1","pages":"63 - 78"},"PeriodicalIF":2.6000,"publicationDate":"2023-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Detecting dynamic visual attention in augmented reality aided navigation environment based on a multi-feature integration fully convolutional network\",\"authors\":\"Qiaosong Hei, Weihua Dong, Bowen Shi\",\"doi\":\"10.1080/15230406.2022.2154271\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"ABSTRACT Visual attention detection, as an important concept for human visual behavior research, has been widely studied. However, previous studies seldom considered the feature integration mechanism to detect visual attention and rarely considered the differences due to different geographical scenes. In this paper, we use an augmented reality aided (AR-aided) navigation experimental dataset to study human visual behavior in a dynamic AR-aided environment. Then, we propose a multi-feature integration fully convolutional network (M-FCN) based on a self-adaptive environment weight (SEW) to integrate RGB-D, semantic, optical flow and spatial neighborhood features to detect human visual attention. The result shows that the M-FCN performs better than other state-of-the-art saliency models. In addition, the introduction of feature integration mechanism and the SEW can improve the accuracy and robustness of visual attention detection. Meanwhile, we find that RGB-D and semantic features perform best in different road routes and road types, but with the increase in road type complexity, the expressiveness of these two features weakens, and the expressiveness of optical flow and spatial neighborhood features increases. 
The research is helpful for AR-device navigation tool design and urban spatial planning.\",\"PeriodicalId\":47562,\"journal\":{\"name\":\"Cartography and Geographic Information Science\",\"volume\":\"50 1\",\"pages\":\"63 - 78\"},\"PeriodicalIF\":2.6000,\"publicationDate\":\"2023-01-02\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Cartography and Geographic Information Science\",\"FirstCategoryId\":\"89\",\"ListUrlMain\":\"https://doi.org/10.1080/15230406.2022.2154271\",\"RegionNum\":3,\"RegionCategory\":\"地球科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"GEOGRAPHY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Cartography and Geographic Information Science","FirstCategoryId":"89","ListUrlMain":"https://doi.org/10.1080/15230406.2022.2154271","RegionNum":3,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"GEOGRAPHY","Score":null,"Total":0}
Detecting dynamic visual attention in augmented reality aided navigation environment based on a multi-feature integration fully convolutional network
ABSTRACT Visual attention detection, an important topic in human visual behavior research, has been widely studied. However, previous studies seldom used a feature integration mechanism to detect visual attention and rarely accounted for differences across geographical scenes. In this paper, we use an augmented reality-aided (AR-aided) navigation experimental dataset to study human visual behavior in a dynamic AR-aided environment. We then propose a multi-feature integration fully convolutional network (M-FCN) based on a self-adaptive environment weight (SEW), which integrates RGB-D, semantic, optical flow, and spatial neighborhood features to detect human visual attention. The results show that the M-FCN outperforms other state-of-the-art saliency models. In addition, introducing the feature integration mechanism and the SEW improves the accuracy and robustness of visual attention detection. We also find that RGB-D and semantic features perform best across different road routes and road types, but as road type complexity increases, the expressiveness of these two features weakens while that of the optical flow and spatial neighborhood features increases. This research is helpful for AR-device navigation tool design and urban spatial planning.
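To make the multi-feature fusion idea concrete, below is a minimal PyTorch sketch of a fully convolutional saliency model that fuses four feature streams (RGB-D, semantic, optical flow, spatial neighborhood) with input-dependent stream weights, loosely mirroring the self-adaptive weighting described in the abstract. This is not the authors' M-FCN implementation: the channel counts, layer sizes, encoder depth, and the weight-prediction head are all assumptions made purely for illustration.

import torch
import torch.nn as nn


class MultiFeatureFusionFCN(nn.Module):
    """Toy fully convolutional saliency model that fuses several feature
    streams with input-dependent stream weights (illustrative only)."""

    def __init__(self, in_channels=(4, 1, 2, 1), hidden=32):
        super().__init__()
        # One small convolutional encoder per feature stream
        # (RGB-D, semantic, optical flow, spatial neighborhood).
        self.encoders = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(c, hidden, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(hidden, hidden, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
            )
            for c in in_channels
        ])
        # Predict one weight per stream from globally pooled encoder outputs,
        # a stand-in for an environment-dependent ("self-adaptive") weighting.
        self.weight_head = nn.Sequential(
            nn.Linear(hidden * len(in_channels), len(in_channels)),
            nn.Softmax(dim=1),
        )
        # A 1x1 convolution decodes the fused feature map into a saliency map.
        self.decoder = nn.Conv2d(hidden, 1, kernel_size=1)

    def forward(self, streams):
        # streams: list of tensors, one per feature type, each (B, C_i, H, W).
        feats = [enc(x) for enc, x in zip(self.encoders, streams)]
        pooled = torch.cat([f.mean(dim=(2, 3)) for f in feats], dim=1)
        weights = self.weight_head(pooled)  # (B, num_streams), rows sum to 1
        fused = sum(weights[:, i, None, None, None] * f
                    for i, f in enumerate(feats))
        return torch.sigmoid(self.decoder(fused))  # (B, 1, H, W) saliency


if __name__ == "__main__":
    model = MultiFeatureFusionFCN()
    # Dummy inputs: RGB-D (4 ch), semantic labels (1 ch),
    # optical flow (2 ch), spatial neighborhood map (1 ch).
    streams = [torch.randn(2, c, 64, 64) for c in (4, 1, 2, 1)]
    print(model(streams).shape)  # torch.Size([2, 1, 64, 64])

Because the per-stream weights are predicted from the pooled features of each input, the relative contribution of, say, optical flow versus RGB-D can shift from scene to scene, which is the behavior the abstract attributes to the SEW as road complexity changes.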
Journal Introduction:
Cartography and Geographic Information Science (CaGIS) is the official publication of the Cartography and Geographic Information Society (CaGIS), a member organization of the American Congress on Surveying and Mapping (ACSM). The Cartography and Geographic Information Society supports research, education, and practices that improve the understanding, creation, analysis, and use of maps and geographic information. The society serves as a forum for the exchange of original concepts, techniques, approaches, and experiences by those who design, implement, and use geospatial technologies through the publication of authoritative articles and international papers.