A wearable virtual guide for context-aware cognitive indoor navigation
Qianli Xu, Liyuan Li, Joo-Hwee Lim, Cheston Tan, Michal Mukawa, Gang S. Wang
MobileHCI: proceedings of the ... International Conference on Human Computer Interaction with Mobile Devices and Services, pp. 111-120
Published: 2014-09-23 · DOI: 10.1145/2628363.2628390
Citations: 21
Abstract
In this paper, we explore a new way to provide context-aware assistance for indoor navigation using a wearable vision system. We investigate how to represent the cognitive knowledge of wayfinding from first-person-view videos in real time, and how to provide context-aware navigation instructions in a human-like manner. Inspired by the human cognitive process of wayfinding, we propose a novel cognitive model that represents visual concepts in a hierarchical structure, which enables efficient and robust localization based on cognitive visual concepts. We then design a prototype system that provides intelligent context-aware assistance based on this cognitive indoor navigation knowledge model. We conducted field tests and evaluated the system's efficacy by benchmarking it against traditional 2D maps and human guidance. The results show that context-awareness built on cognitive visual perception enables the system to emulate the efficacy of a human guide, leading to a positive user experience.
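To illustrate the idea of localization over a hierarchy of visual concepts, here is a minimal, hypothetical sketch. It is not the authors' implementation: the location names, concept labels, and the coarse-to-fine matching rule (filter by scene concept, then rank by landmark overlap) are all illustrative assumptions.

```python
# Hypothetical concept hierarchy: each location is described by a coarse
# scene-level concept and a set of finer landmark-level concepts.
CONCEPT_MAP = {
    "lobby":     ("open_hall", {"reception_desk", "sofa", "glass_door"}),
    "corridorA": ("corridor",  {"fire_extinguisher", "noticeboard"}),
    "corridorB": ("corridor",  {"vending_machine", "lift_lobby_sign"}),
}

def localize(scene, landmarks):
    """Return the best-matching location id, or None.

    Coarse step: keep only locations whose scene-level concept matches
    the observed scene. Fine step: rank the survivors by how many of
    the observed landmark concepts they share.
    """
    candidates = {loc: marks
                  for loc, (s, marks) in CONCEPT_MAP.items()
                  if s == scene}
    if not candidates:
        return None
    best = max(candidates, key=lambda loc: len(candidates[loc] & set(landmarks)))
    if not candidates[best] & set(landmarks):
        return None  # scene matched, but no landmark evidence supports a choice
    return best

print(localize("corridor", ["vending_machine"]))  # -> corridorB
```

The hierarchy is what makes this efficient: the cheap scene-level check prunes most locations before the more detailed landmark comparison runs, mirroring how a coarse-to-fine cognitive representation can keep recognition robust and fast.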