{"title":"Unified Visual Perception Model for context-aware wearable AR","authors":"Youngkyoon Jang, Woontack Woo","doi":"10.1109/ISMAR.2013.6671818","DOIUrl":null,"url":null,"abstract":"We propose Unified Visual Perception Model (UVPM), which imitates the human visual perception process, for the stable object recognition necessarily required for augmented reality (AR) in the field. The proposed model is designed based on the theoretical bases in the field of cognitive informatics, brain research and psychological science. The proposed model consists of Working Memory (WM) in charge of low-level processing (in a bottomup manner), Long-Term Memory (LTM) and Short-Term Memory (STM), which are in charge of high-level processing (in a top-down manner). WM and LTM/STM are mutually complementary to increase recognition accuracies. By implementing the initial prototype of each boxes of the model, we could know that the proposed model works for stable object recognition. The proposed model is available to support context-aware AR with the optical see-through HMD.","PeriodicalId":92225,"journal":{"name":"International Symposium on Mixed and Augmented Reality : (ISMAR) [proceedings]. IEEE and ACM International Symposium on Mixed and Augmented Reality","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2013-12-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Symposium on Mixed and Augmented Reality : (ISMAR) [proceedings]. IEEE and ACM International Symposium on Mixed and Augmented Reality","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISMAR.2013.6671818","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3
Abstract
We propose the Unified Visual Perception Model (UVPM), which imitates the human visual perception process, to provide the stable object recognition required for augmented reality (AR) in the field. The proposed model is designed on theoretical bases from cognitive informatics, brain research, and psychological science. It consists of Working Memory (WM), in charge of low-level processing (in a bottom-up manner), and Long-Term Memory (LTM) and Short-Term Memory (STM), which are in charge of high-level processing (in a top-down manner). WM and LTM/STM are mutually complementary, which increases recognition accuracy. By implementing an initial prototype of each component of the model, we verified that the proposed model works for stable object recognition. The proposed model can support context-aware AR with an optical see-through HMD.
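The abstract does not include implementation details, so the following is only a minimal sketch of how the described interplay might look in code: bottom-up hypotheses from a Working Memory component are re-weighted by top-down priors from Long-Term Memory and recency from Short-Term Memory. All class and function names (WorkingMemory, LongTermMemory, ShortTermMemory, recognize) are hypothetical and are not the authors' code.

```python
# Illustrative sketch only: the UVPM paper does not publish an implementation,
# so every name below is a hypothetical reading of the abstract.

from dataclasses import dataclass


@dataclass
class Hypothesis:
    """A candidate object label with a bottom-up confidence score."""
    label: str
    score: float


class WorkingMemory:
    """Bottom-up, low-level processing: turns raw features into hypotheses."""

    def propose(self, features: dict) -> list:
        # Stand-in for feature matching / classification on the current frame.
        return [Hypothesis(label, score) for label, score in features.items()]


class LongTermMemory:
    """Top-down prior knowledge: how likely each object is in this context."""

    def __init__(self, priors: dict):
        self.priors = priors

    def prior(self, label: str) -> float:
        return self.priors.get(label, 0.1)


class ShortTermMemory:
    """Top-down recency: boosts objects recognized in recent frames."""

    def __init__(self):
        self.recent = {}

    def boost(self, label: str) -> float:
        return 1.0 + self.recent.get(label, 0.0)

    def update(self, label: str):
        # Decay old entries, reinforce the newly recognized label.
        self.recent = {k: v * 0.8 for k, v in self.recent.items()}
        self.recent[label] = self.recent.get(label, 0.0) + 1.0


def recognize(features: dict, wm: WorkingMemory,
              ltm: LongTermMemory, stm: ShortTermMemory) -> str:
    """Combine bottom-up evidence (WM) with top-down expectations (LTM/STM)."""
    hypotheses = wm.propose(features)
    best = max(hypotheses,
               key=lambda h: h.score * ltm.prior(h.label) * stm.boost(h.label))
    stm.update(best.label)  # feedback loop: recognition updates short-term context
    return best.label


if __name__ == "__main__":
    wm = WorkingMemory()
    ltm = LongTermMemory({"cup": 0.6, "phone": 0.4})
    stm = ShortTermMemory()
    # Two frames with ambiguous bottom-up scores; top-down knowledge breaks the tie.
    print(recognize({"cup": 0.5, "phone": 0.5}, wm, ltm, stm))  # -> "cup" (prior)
    print(recognize({"cup": 0.4, "phone": 0.5}, wm, ltm, stm))  # -> "cup" (recency boost)
```

The point of the sketch is only the mutual complementarity the abstract describes: WM alone would mislabel the second frame, while the LTM prior and STM recency term stabilize recognition across frames.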