
2011 International Symposium on Ubiquitous Virtual Reality — Latest Publications

Time-Efficient Data Congregation Protocols on Wireless Sensor Network
Pub Date : 2011-07-01 DOI: 10.1109/ISUVR.2011.13
Islam A. K. M. Muzahidul, K. Wada, Wei Chen
This paper focuses on time-efficient data congregation protocols on a Dynamic Cluster-based Wireless Sensor Network (CBWSN). The CBWSN is self-configurable and re-configurable, and thus capable of performing two dynamic operations: node-move-in and node-move-out. In this paper, we propose two efficient congregation techniques for the Dynamic CBWSN. To facilitate efficient congregation protocols, we propose an improved cluster-based structure. In this structure, we first construct a communication highway and then improve the cluster-based structure so that the nodes of the network can perform inter- and intra-cluster communications efficiently. We also study the time complexity of the protocols.
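The two-phase congregation idea — cluster heads aggregate their members' readings, then the partial results travel along the communication highway to a sink — can be sketched as follows. This is a hypothetical toy structure, not the authors' protocol; `congregate` and the choice of sum as the aggregation function are illustrative assumptions:

```python
# Hypothetical sketch of cluster-based data congregation:
# each cluster head sums its members' readings (intra-cluster phase),
# then the partial sums are combined along a backbone ("communication
# highway") toward the sink (inter-cluster phase).

def congregate(clusters):
    """clusters: list of lists of sensor readings, one list per cluster."""
    # Intra-cluster phase: each head aggregates its own members.
    partial = [sum(members) for members in clusters]
    # Inter-cluster phase: partial results travel along the highway.
    return sum(partial)

readings = [[1, 2, 3], [4, 5], [6]]
total = congregate(readings)  # 21
```

With a tree-shaped highway, the inter-cluster phase would combine partial results level by level, which is where the time-complexity analysis mentioned in the abstract comes in.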
Citations: 0
mARGraphy: Mobile AR-based Dynamic Information Visualization
Pub Date : 2011-07-01 DOI: 10.1109/ISUVR.2011.21
Ahyoung Choi, Youngmin Park, Youngkyoon Jang, Changgu Kang, Woontack Woo
We propose mARGraphy, which visualizes information using augmented reality (AR) technology. It provides an intuitive and interactive way for users to understand dynamic 3D information in situ, with high relevance to the target. To show the effectiveness of our work, we introduce a traditional map viewer application. It recognizes regions of a traditional map with an object recognition and tracking method on a mobile platform. It then aggregates dynamic information obtained from a database, such as geographical features with temporal changes and situational contexts. To verify this work, we observed through a preliminary user study how our system improves users' understanding of information with mARGraphy.
Citations: 4
Computer Vision for 3DTV and Augmented Reality
Pub Date : 2011-07-01 DOI: 10.1109/ISUVR.2011.25
H. Saito
Recent computer vision technology has made innovative progress in the 3D visual media industry. In this paper, I introduce our approaches to making use of computer vision technology to achieve innovative application systems for 3DTV and augmented reality. First, I demonstrate the effectiveness of multiple-viewpoint videos and depth videos in 3DTV applications, in which 3D shape reconstruction and view synthesis are used as computer vision technologies. Augmented reality is a method for presenting digital information over the real world with a see-through display. For such AR applications, real-time camera tracking is one of the significant technologies, also based on the state of the art in computer vision.
Citations: 10
Collaboration between Tabletop and Mobile Device
Pub Date : 2011-07-01 DOI: 10.1109/ISUVR.2011.18
Jooyoung Lee, Ralf Doerner, Johannes Luderschmidt, Hyungseok Kim, Jee-In Kim
A tabletop is a collaborative work space providing a large touch screen capable of multi-touch interaction. To make an extended collaborative work space without restriction of time and place, one possible approach is adopting mobile devices. In this paper, we propose a way to monitor and control the tabletop using mobile devices. To monitor the large-screen tabletop with a mobile device, it is necessary to convert the image to a lower resolution for the device. We adopt a "Focus & context" image generation method for tabletop control, converting relatively large screen images into smaller ones for mobile display. With this method, users are able to have their own point of view for tabletop-based collaboration and to extend their work spaces through remote control. We verify the result by conducting several experiments.
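The "Focus & context" reduction can be sketched as follows — a minimal sketch under assumed behaviour, not the authors' exact method: the focus region keeps full resolution while the surrounding context is downsampled before being composited into the small mobile view. The `focus_context` helper and its parameters are hypothetical:

```python
import numpy as np

def focus_context(image, focus_box, context_scale=4):
    """image: HxW array; focus_box: (r0, r1, c0, c1) kept at full resolution."""
    r0, r1, c0, c1 = focus_box
    focus = image[r0:r1, c0:c1]                        # full-resolution focus
    context = image[::context_scale, ::context_scale]  # coarse context overview
    return focus, context

img = np.arange(64 * 64).reshape(64, 64)
focus, context = focus_context(img, (16, 32, 16, 32))
# focus is 16x16 at full resolution; context is 16x16 covering the whole frame
```

A real implementation would blend the two layers into one display image; the point here is only that the focus window preserves detail while the context trades resolution for coverage.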
Citations: 6
PhoneGuide: Adaptive Image Classification for Mobile Museum Guidance
Pub Date : 2011-07-01 DOI: 10.1109/ISUVR.2011.12
O. Bimber, Erich Bruns
This paper summarizes the various components of our mobile museum guidance system PhoneGuide. It explains how practically viable object recognition rates can be achieved under realistic conditions using adaptive image classification.
Citations: 5
Effect of Active and Passive Haptic Sensory Information on Memory for 2D Sequential Selection Task
Pub Date : 2011-07-01 DOI: 10.1109/ISUVR.2011.24
Hojin Lee, Gabjong Han, In Lee, Sunghoon Yim, Kyungpyo Hong, Seungmoon Choi
This paper introduces an education system for typical assembly procedures that provides various haptic sensory information, including active and passive haptic feedback. Using the system, we implemented four training methods and experimentally evaluated their performance in terms of short-term and long-term memory of the task. In the results, active haptic guidance showed beneficial effects on short-term memory. In contrast, passive guidance showed the worst performance and even degraded the efficiency of short-term memory. No training method resulted in noticeable improvements in long-term memory performance.
Citations: 0
Estimation of Illuminants for Plausible Lighting in Augmented Reality
Pub Date : 2011-07-01 DOI: 10.1109/ISUVR.2011.17
Seokjun Lee, Soon Ki Jung
This paper presents a practical method to estimate the positions of light sources in a real environment, using a mirror sphere placed on a known natural marker. For stable results under static lighting, we take multiple images around the sphere and estimate the principal light directions of the vector clusters for each light source at run time. We also estimate the moving illuminant under changes in scene illumination, and augment virtual objects onto the real image with proper shading and shadows. Experimental results show that the proposed method produces plausible AR visualization in real time.
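The geometry behind mirror-sphere light probes is standard: a specular highlight observed at a sphere point with surface normal n, seen from view direction v, implies a light direction l = 2(n·v)n − v. The following is a minimal sketch of that relation, not the authors' implementation; the `light_direction` helper is illustrative:

```python
import numpy as np

def light_direction(normal, view):
    """Mirror-reflection rule: reflect the view direction about the normal."""
    n = np.asarray(normal, dtype=float)
    v = np.asarray(view, dtype=float)
    n /= np.linalg.norm(n)
    v /= np.linalg.norm(v)
    l = 2.0 * np.dot(n, v) * n - v   # l = 2(n.v)n - v
    return l / np.linalg.norm(l)

# Highlight at the top of the sphere, camera looking straight on:
l = light_direction([0.0, 0.0, 1.0], [0.0, 0.0, 1.0])  # light along +z
```

Clustering many such per-pixel directions, as the abstract describes, then yields one principal direction per light source.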
Citations: 11
Graphical Menus Using a Mobile Phone for Wearable AR Systems
Pub Date : 2011-07-01 DOI: 10.1109/ISUVR.2011.23
Hyeongmook Lee, Dongchul Kim, Woontack Woo
In this paper, we explore the design of various types of graphical menus via a mobile phone for use in a wearable augmented reality system. For efficient system control, locating menus is vital. Based on previous relevant work, we determine display-, manipulator- and target-referenced menu placement according to focusable elements within a wearable augmented reality system. Moreover, we implement and discuss three menu techniques using a mobile phone with a stereo head-mounted display.
Citations: 9
Mirror Worlds: Experimenting with Heterogeneous AR
Pub Date : 2011-07-01 DOI: 10.1109/ISUVR.2011.28
A. Hill, Evan Barba, B. MacIntyre, Maribeth Gandy Coleman, Brian Davidson
Until recently, most content on the Internet has not been explicitly tied to specific people, places, or things. However, content is increasingly being geo-coded and semantically labeled, making explicit connections between the physical world around us and the virtual world in cyberspace. Most augmented reality systems simulate a portion of the physical world in order to render a hybrid scene around the user. We have been experimenting with approaches to terra-scale, heterogeneous augmented reality mirror worlds to unify these two worlds. Our focus has been on authoring and the user experience, for example allowing ad-hoc transitions between augmented and virtual reality interactions for multiple co-present users. This form of ubiquitous virtual reality raises several research questions involving the functional requirements, user affordances, and relevant system architectures for these mirror worlds. In this paper, we describe our experiments with two mirror world systems and some lessons learned about the limitations of deploying these systems using massively multiplayer and dedicated game engine technologies.
Citations: 7
Barcode-Assisted Planar Object Tracking Method for Mobile Augmented Reality
Pub Date : 2011-07-01 DOI: 10.1109/ISUVR.2011.20
Nohyoung Park, Wonwoo Lee, Woontack Woo
In this paper, we propose a planar target tracking method that exploits a barcode containing information about a target. Our method combines both barcode detection and natural feature tracking methods to track a planar object efficiently on mobile devices. A planar target is detected by recognizing the barcode located near the target, and the target's keypoints are tracked in video sequences. We embed the information related to a planar object into the barcode, and the information is used to limit image regions to perform keypoint matching between consecutive frames. We show how to detect a barcode robustly and what information is embedded for efficient tracking. Our detection method runs at 30 fps on modern mobile devices, and it can be used for mobile augmented reality applications using planar targets.
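The region-limiting idea can be sketched as follows — a hypothetical helper, since the paper does not specify this exact scheme: information decoded from the barcode (here, the target's size in pixels) bounds a square search window around the target's last known position, so keypoint matching in the next frame only scans that window:

```python
# Hypothetical sketch: the barcode-decoded target size limits the image
# region searched for keypoint matches between consecutive frames.

def search_region(prev_center, target_size_px, frame_shape, margin=1.25):
    """Clamp a square ROI around the target's last tracked position.

    frame_shape is (height, width); margin enlarges the window slightly
    to tolerate inter-frame motion.
    """
    h, w = frame_shape
    half = int(target_size_px * margin / 2)
    cx, cy = prev_center
    x0, x1 = max(0, cx - half), min(w, cx + half)
    y0, y1 = max(0, cy - half), min(h, cy + half)
    return x0, y0, x1, y1

roi = search_region((320, 240), 100, (480, 640))  # (258, 178, 382, 302)
```

Restricting matching to such a window is a common way to reach frame-rate tracking on mobile hardware, which is consistent with the 30 fps figure reported in the abstract.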
Citations: 14