Information-theoretic database building and querying for mobile augmented reality applications
Pub Date: 2011-10-26 | DOI: 10.1109/ISMAR.2011.6092369
P. Baheti, A. Swaminathan, Murali Chari, S. Diaz, Slawek Grzechnik
Recently, there has been tremendous interest in the area of mobile Augmented Reality (AR), with applications including navigation, social networking, gaming, and education. Current-generation mobile phones are equipped with a camera, GPS, and other sensors (e.g., magnetic compass, accelerometer, gyroscope), in addition to ever-increasing computing/graphics capabilities and memory storage. Mobile AR applications process the output of one or more sensors to augment the real-world view with useful information. This paper focuses on the camera sensor output and describes the building blocks for a vision-based AR system. We present information-theoretic techniques to build and maintain an image (feature) database based on reference images, and to query captured input images against this database. Performance results using standard image sets are provided, demonstrating superior recognition performance even with dramatic reductions in feature database size.
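The abstract does not spell out the selection criterion, so the following is a minimal, hypothetical sketch of information-theoretic database pruning: descriptors are quantized against a stand-in vocabulary and scored with an entropy-based measure of how discriminative their visual word is across reference images. All function names, parameters, and the scoring choice are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch (not the authors' algorithm): prune a feature database by
# keeping the descriptors that are most informative for distinguishing reference
# images, scored here with a simple entropy-based criterion.
import numpy as np

def prune_feature_database(descriptors, image_ids, keep_fraction=0.25, n_clusters=256, seed=0):
    """descriptors: (N, D) float array; image_ids: (N,) non-negative int array of source images."""
    rng = np.random.default_rng(seed)
    n_clusters = min(n_clusters, len(descriptors))
    # Quantize descriptors against randomly chosen cluster centers (stand-in for a
    # learned vocabulary) so we can measure how concentrated each "visual word" is.
    centers = descriptors[rng.choice(len(descriptors), n_clusters, replace=False)]
    words = np.argmin(((descriptors[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)

    scores = np.zeros(len(descriptors))
    for w in np.unique(words):
        idx = np.where(words == w)[0]
        counts = np.bincount(image_ids[idx])
        p = counts[counts > 0] / counts.sum()
        entropy = -(p * np.log2(p)).sum()      # low entropy: word concentrates on few images
        scores[idx] = -entropy                 # higher score = more discriminative
    keep = np.argsort(scores)[::-1][: int(keep_fraction * len(descriptors))]
    return descriptors[keep], image_ids[keep]
```

A query would then be matched only against the retained subset, which is where reductions in database size would pay off at recognition time.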
{"title":"Information-theoretic database building and querying for mobile augmented reality applications","authors":"P. Baheti, A. Swaminathan, Murali Chari, S. Diaz, Slawek Grzechnik","doi":"10.1109/ISMAR.2011.6092369","DOIUrl":"https://doi.org/10.1109/ISMAR.2011.6092369","url":null,"abstract":"Recently, there has been tremendous interest in the area of mobile Augmented Reality (AR) with applications including navigation, social networking, gaming and education. Current generation mobile phones are equipped with camera, GPS and other sensors, e.g., magnetic compass, accelerometer, gyro in addition to having ever increasing computing/graphics capabilities and memory storage. Mobile AR applications process the output of one or more sensors to augment the real world view with useful information. This paper's focus is on the camera sensor output, and describes the building blocks for a vision-based AR system. We present information-theoretic techniques to build and maintain an image (feature) database based on reference images, and for querying the captured input images against this database. Performance results using standard image sets are provided demonstrating superior recognition performance even with dramatic reductions in feature database size.","PeriodicalId":298757,"journal":{"name":"2011 10th IEEE International Symposium on Mixed and Augmented Reality","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127115780","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Adaptive camera-based color mapping for mixed-reality applications
Pub Date: 2011-10-26 | DOI: 10.1109/ISMAR.2011.6092382
Martin Knecht, C. Traxler, W. Purgathofer, M. Wimmer
We present a novel adaptive color mapping method for virtual objects in mixed-reality environments. In several mixed-reality applications, added virtual objects should be visually indistinguishable from real objects. Recent mixed-reality methods use global-illumination algorithms to approach this goal. However, simulating the light distribution is not enough for visually plausible images. Since the observing camera has its own transfer function from real-world radiance values to RGB colors, virtual objects look artificial simply because their rendered colors do not match those of the camera. Our approach combines an on-line camera characterization method with a heuristic to map the colors of virtual objects to the colors they would have as seen by the observing camera. Previous tone-mapping functions were not designed for use in mixed-reality systems and thus did not take camera-specific behavior into account. In contrast, our method takes the camera into account and can therefore also handle changes of its parameters at runtime. The results show that virtual objects look more visually plausible than when tone-mapping operators alone are applied.
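As a rough illustration of the idea (assumed details, not the paper's characterization method), one can fit a per-channel transfer curve from known reference colors to the colors actually observed by the camera, e.g. using a color chart, and then push rendered virtual-object colors through the same curves:

```python
# Illustrative sketch (assumed details, not the paper's method): estimate a simple
# per-channel transfer curve from corresponding color samples -- e.g. a color chart
# seen by the camera vs. its known reference values -- and apply it to rendered
# virtual-object colors so they better match the camera's response.
import numpy as np

def fit_channel_curves(reference_rgb, camera_rgb, degree=3):
    """Fit one polynomial per channel mapping reference colors (N, 3) to observed camera colors (N, 3)."""
    return [np.polyfit(reference_rgb[:, c], camera_rgb[:, c], degree) for c in range(3)]

def apply_camera_mapping(rendered_rgb, curves):
    """Map rendered colors (N, 3) in [0, 1] through the fitted per-channel curves."""
    out = np.stack([np.polyval(curves[c], rendered_rgb[:, c]) for c in range(3)], axis=1)
    return np.clip(out, 0.0, 1.0)
```

Re-fitting the curves whenever the camera parameters change would correspond to the runtime adaptivity the abstract describes.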
{"title":"Adaptive camera-based color mapping for mixed-reality applications","authors":"Martin Knecht, C. Traxler, W. Purgathofer, M. Wimmer","doi":"10.1109/ISMAR.2011.6092382","DOIUrl":"https://doi.org/10.1109/ISMAR.2011.6092382","url":null,"abstract":"We present a novel adaptive color mapping method for virtual objects in mixed-reality environments. In several mixed-reality applications, added virtual objects should be visually indistinguishable from real objects. Recent mixed-reality methods use global-illumination algorithms to approach this goal. However, simulating the light distribution is not enough for visually plausible images. Since the observing camera has its very own transfer function from real-world radiance values to RGB colors, virtual objects look artificial just because their rendered colors do not match with those of the camera. Our approach combines an on-line camera characterization method with a heuristic to map colors of virtual objects to colors as they would be seen by the observing camera. Previous tone-mapping functions were not designed for use in mixed-reality systems and thus did not take the camera-specific behavior into account. In contrast, our method takes the camera into account and thus can also handle changes of its parameters during runtime. The results show that virtual objects look visually more plausible than by just applying tone-mapping operators.","PeriodicalId":298757,"journal":{"name":"2011 10th IEEE International Symposium on Mixed and Augmented Reality","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114886367","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Using egocentric vision to achieve robust inertial body tracking under magnetic disturbances
Pub Date: 2011-10-26 | DOI: 10.1109/ISMAR.2011.6092528
G. Bleser, Gustaf Hendeby, M. Miezal
In the context of a smart user assistance system for industrial manipulation tasks, it is necessary to capture the motions of the worker's upper body and limbs in order to derive his or her interactions with the task space. While such capturing technology already exists, the novelty of the proposed work results from the strong requirements of the application context: the method should be flexible and use only on-body sensors, work accurately in industrial environments that suffer from severe magnetic disturbances, and enable consistent registration between the user body frame and the task space. Currently available systems cannot provide this. This paper suggests a novel egocentric solution for visual-inertial upper-body motion tracking based on recursive filtering and model-based sensor fusion. Visual detections of the wrists in the images of a chest-mounted camera are used as a substitute for the commonly used magnetometer measurements. The on-body sensor network, the motion capturing system, and the required calibration procedure are described, and successful operation is shown in a real industrial environment.
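A minimal sketch of the fusion step, under the assumption of a standard Kalman-style filter (the paper's exact state model and filter are not reproduced here): a 2D wrist detection from the chest-mounted camera enters the measurement update in place of a magnetometer reading.

```python
# Simplified sketch (assumptions, not the paper's filter): a generic Kalman-style
# measurement update in which 2D wrist detections from a chest-mounted camera play
# the role normally taken by magnetometer measurements.
import numpy as np

def kalman_update(x, P, z, h, H, R):
    """x: state mean, P: state covariance, z: detected wrist pixel (2,),
    h: predicted wrist pixel h(x), H: measurement Jacobian, R: detection noise covariance."""
    y = z - h                                   # innovation: detected minus predicted pixel
    S = H @ P @ H.T + R                         # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)              # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```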
{"title":"Using egocentric vision to achieve robust inertial body tracking under magnetic disturbances","authors":"G. Bleser, Gustaf Hendeby, M. Miezal","doi":"10.1109/ISMAR.2011.6092528","DOIUrl":"https://doi.org/10.1109/ISMAR.2011.6092528","url":null,"abstract":"In the context of a smart user assistance system for industrial manipulation tasks it is necessary to capture motions of the upper body and limbs of the worker in order to derive his or her interactions with the task space. While such capturing technology already exists, the novelty of the proposed work results from the strong requirements of the application context: The method should be flexible and use only on-body sensors, work accurately in industrial environments that suffer from severe magnetic disturbances, and enable consistent registration between the user body frame and the task space. Currently available systems cannot provide this. This paper suggests a novel egocentric solution for visual-inertial upper-body motion tracking based on recursive filtering and model-based sensor fusion. Visual detections of the wrists in the images of a chest-mounted camera are used as substitute for the commonly used magnetometer measurements. The on-body sensor network, the motion capturing system, and the required calibration procedure are described and successful operation is shown in a real industrial environment.","PeriodicalId":298757,"journal":{"name":"2011 10th IEEE International Symposium on Mixed and Augmented Reality","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130495613","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Encumbrance-free telepresence system with real-time 3D capture and display using commodity depth cameras
Pub Date: 2011-10-26 | DOI: 10.1109/ISMAR.2011.6092379
Andrew Maimone, H. Fuchs
This paper introduces a proof-of-concept telepresence system that offers fully dynamic, real-time 3D scene capture and continuous-viewpoint, head-tracked stereo 3D display without requiring the user to wear any tracking or viewing apparatus. We present a complete software and hardware framework for implementing the system, which is based on an array of commodity Microsoft Kinect™ color-plus-depth cameras. Novel contributions include an algorithm for merging data between multiple depth cameras and techniques for automatic color calibration and preserving stereo quality even with low rendering rates. Also presented is a solution to the problem of interference that occurs between Kinect cameras with overlapping views. Emphasis is placed on a fully GPU-accelerated data processing and rendering pipeline that can apply hole filling, smoothing, data merger, surface generation, and color correction at rates of up to 100 million triangles/sec on a single PC and graphics board. Also presented is a Kinect-based marker-less tracking system that combines 2D eye recognition with depth information to allow head-tracked stereo views to be rendered for a parallax barrier autostereoscopic display. Our system is affordable and reproducible, offering the opportunity to easily deliver 3D telepresence beyond the researcher's lab.
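For orientation, here is a minimal sketch of the first stage of such a capture pipeline: back-projecting a depth image into a 3D point cloud. The intrinsics below are common illustrative values for a Kinect-class sensor, not the system's actual calibration.

```python
# Minimal sketch (illustrative intrinsics, not the system's calibration): back-project
# a depth image into a 3D point cloud, the first step of a multi-depth-camera
# capture pipeline of the kind described above.
import numpy as np

def depth_to_points(depth_m, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    """depth_m: (H, W) depth in meters; returns (M, 3) valid points in the camera frame."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]                   # drop invalid (zero-depth) pixels
```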
{"title":"Encumbrance-free telepresence system with real-time 3D capture and display using commodity depth cameras","authors":"Andrew Maimone, H. Fuchs","doi":"10.1109/ISMAR.2011.6092379","DOIUrl":"https://doi.org/10.1109/ISMAR.2011.6092379","url":null,"abstract":"This paper introduces a proof-of-concept telepresence system that offers fully dynamic, real-time 3D scene capture and continuous-viewpoint, head-tracked stereo 3D display without requiring the user to wear any tracking or viewing apparatus. We present a complete software and hardware framework for implementing the system, which is based on an array of commodity Microsoft Kinect™color-plus-depth cameras. Novel contributions include an algorithm for merging data between multiple depth cameras and techniques for automatic color calibration and preserving stereo quality even with low rendering rates. Also presented is a solution to the problem of interference that occurs between Kinect cameras with overlapping views. Emphasis is placed on a fully GPU-accelerated data processing and rendering pipeline that can apply hole filling, smoothing, data merger, surface generation, and color correction at rates of up to 100 million triangles/sec on a single PC and graphics board. Also presented is a Kinect-based marker-less tracking system that combines 2D eye recognition with depth information to allow head-tracked stereo views to be rendered for a parallax barrier autostereoscopic display. Our system is affordable and reproducible, offering the opportunity to easily deliver 3D telepresence beyond the researcher's lab.","PeriodicalId":298757,"journal":{"name":"2011 10th IEEE International Symposium on Mixed and Augmented Reality","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134391616","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Creating hybrid user interfaces with a 2D multi-touch tabletop and a 3D see-through head-worn display
Pub Date: 2011-10-26 | DOI: 10.1109/ISMAR.2011.6092391
N. Dedual, Ohan Oda, Steven K. Feiner
How can multiple different display and interaction devices be used together to create an effective augmented reality environment? We explore the design of several prototype hybrid user interfaces that combine a 2D multi-touch tabletop display with a 3D head-tracked video-see-through display. We describe a simple modeling application and an urban visualization tool in which the information presented on the head-worn display supplements the information displayed on the tabletop, using a variety of approaches to track the head-worn display relative to the tabletop. In all cases, our goal is to allow users who can see only the tabletop to interact effectively with users wearing head-worn displays.
{"title":"Creating hybrid user interfaces with a 2D multi-touch tabletop and a 3D see-through head-worn display","authors":"N. Dedual, Ohan Oda, Steven K. Feiner","doi":"10.1109/ISMAR.2011.6092391","DOIUrl":"https://doi.org/10.1109/ISMAR.2011.6092391","url":null,"abstract":"How can multiple different display and interaction devices be used together to create an effective augmented reality environment? We explore the design of several prototype hybrid user interfaces that combine a 2D multi-touch tabletop display with a 3D head-tracked video-see-through display. We describe a simple modeling application and an urban visualization tool in which the information presented on the head-worn display supplements the information displayed on the tabletop, using a variety of approaches to track the head-worn display relative to the tabletop. In all cases, our goal is to allow users who can see only the tabletop to interact effectively with users wearing head-worn displays.","PeriodicalId":298757,"journal":{"name":"2011 10th IEEE International Symposium on Mixed and Augmented Reality","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134195749","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Building your vision with Qualcomm's Mobile Augmented Reality (AR) platform: AR on mobile devices
Pub Date: 2011-10-26 | DOI: 10.1109/ISMAR.2011.6092355
Daniel Wagner, I. Barakonyi, Istvan Siklossy, Jay Wright, R. Ashok, S. Diaz, B. MacIntyre, D. Schmalstieg
- Academic/Research Overview of Mobile Augmented Reality
- Introduction to the Qualcomm AR Platform: Features and Usage Scenarios
- Developing Mobile AR Applications using Qualcomm's Unity Extension
- Introduction to Qualcomm's QCAR SDK Native API
- Cross Platform Development with the Native QCAR SDK
{"title":"Building your vision with Qualcomm's Mobile Augmented Reality (AR) platform: AR on mobile devices","authors":"Daniel Wagner, I. Barakonyi, Istvan Siklossy, Jay Wright, R. Ashok, S. Diaz, B. MacIntyre, D. Schmalstieg","doi":"10.1109/ISMAR.2011.6092355","DOIUrl":"https://doi.org/10.1109/ISMAR.2011.6092355","url":null,"abstract":"Academic/Research Overview of Mobile Augmented Reality Introduction to the Qualcomm AR Platform — Features and Usage Scenarios Developing Mobile AR Applications using Qualcomm's Unity Extension Introduction to Qualcomm's QCAR SDK Native API Cross Platform Development with the Native QCAR SDK","PeriodicalId":298757,"journal":{"name":"2011 10th IEEE International Symposium on Mixed and Augmented Reality","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133033134","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Light factorization for mixed-frequency shadows in augmented reality
Pub Date: 2011-10-26 | DOI: 10.1109/ISMAR.2011.6092384
D. Nowrouzezahrai, S. Geiger, Kenny Mitchell, R. Sumner, Wojciech Jarosz, M. Gross
Integrating animated virtual objects with their surroundings for high-quality augmented reality requires both geometric and radiometric consistency. We focus on the latter of these problems and present an approach that captures and factorizes external lighting in a manner that allows for realistic relighting of both animated and static virtual objects. Our factorization facilitates a combination of hard and soft shadows, at high performance, in a manner consistent with the surrounding scene lighting.
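A simplified illustration of the general idea (not the authors' factorization): split captured environment lighting into a single dominant directional component, which can cast hard shadows, and a residual low-frequency term for soft shadowing and ambient light. The sampling and weighting scheme below is an assumption for illustration only.

```python
# Simplified illustration (assumed representation, not the paper's factorization):
# split sampled environment lighting into a dominant directional component
# (suitable for hard shadows) and a residual low-frequency term (soft shadows/ambient).
import numpy as np

def factorize_environment(directions, radiance):
    """directions: (N, 3) unit sample directions; radiance: (N,) sampled intensities."""
    # Dominant light direction: luminance-weighted mean of the sample directions.
    d = (radiance[:, None] * directions).sum(axis=0)
    dominant_dir = d / (np.linalg.norm(d) + 1e-9)
    # Intensity attributed to the dominant light: energy well aligned with that direction.
    aligned = np.clip(directions @ dominant_dir, 0.0, None) ** 8
    dominant_intensity = (radiance * aligned).sum() / (aligned.sum() + 1e-9)
    # Residual low-frequency term: whatever the directional light does not explain.
    residual = np.clip(radiance - dominant_intensity * aligned, 0.0, None)
    return dominant_dir, dominant_intensity, residual
```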
{"title":"Light factorization for mixed-frequency shadows in augmented reality","authors":"D. Nowrouzezahrai, S. Geiger, Kenny Mitchell, R. Sumner, Wojciech Jarosz, M. Gross","doi":"10.1109/ISMAR.2011.6092384","DOIUrl":"https://doi.org/10.1109/ISMAR.2011.6092384","url":null,"abstract":"Integrating animated virtual objects with their surroundings for high-quality augmented reality requires both geometric and radio-metric consistency. We focus on the latter of these problems and present an approach that captures and factorizes external lighting in a manner that allows for realistic relighting of both animated and static virtual objects. Our factorization facilitates a combination of hard and soft shadows, with high-performance, in a manner that is consistent with the surrounding scene lighting.","PeriodicalId":298757,"journal":{"name":"2011 10th IEEE International Symposium on Mixed and Augmented Reality","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114143907","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
RGB-D camera-based parallel tracking and meshing
Pub Date: 2011-10-26 | DOI: 10.1109/ISMAR.2011.6092380
S. Lieberknecht, Andrea Huber, Slobodan Ilic, Selim Benhimane
Compared to standard color cameras, RGB-D cameras additionally provide the depth of imaged pixels, which in turn yields a dense colored 3D point cloud representing the environment from a given viewpoint. We present a real-time tracking method that performs motion estimation of a consumer RGB-D camera with respect to an unknown environment while at the same time reconstructing this environment as a dense textured mesh. Unlike parallel tracking and mapping performed with a standard color or greyscale camera, tracking with an RGB-D camera allows correctly scaled camera motion estimation. Therefore, there is no need to measure the environment with any additional tool or to place objects of known size in it. The tracking can be started directly and does not require any preliminary known and/or constrained camera motion. The colored point clouds obtained from every RGB-D image are used to create textured meshes representing the environment from a certain camera view, and the real-time estimated camera motion is used to correctly align these meshes over time in order to combine them into a dense reconstruction of the environment. We quantitatively evaluated the proposed method using real image sequences of a challenging scenario and their corresponding ground-truth motion obtained with a mechanical measurement arm. We also compared it to a commonly used state-of-the-art method that uses only the color information. We show the superiority of the proposed tracking in terms of accuracy, robustness, and usability. We also demonstrate its usage in several Augmented Reality scenarios where the tracking allows reliable camera motion estimation and the meshing increases the realism of the augmentations by correctly handling their occlusions.
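A minimal sketch of the accumulation idea (illustrative only, not the paper's meshing pipeline): each frame's point cloud is transformed into a common world frame using the estimated camera pose and appended to the growing reconstruction.

```python
# Minimal sketch (illustrative, not the paper's pipeline): use the per-frame camera
# pose estimated by tracking to transform each RGB-D point cloud into a common world
# frame, accumulating a dense reconstruction over time.
import numpy as np

def accumulate_cloud(world_points, frame_points, R, t):
    """R: (3, 3) rotation and t: (3,) translation of the camera in the world frame;
    frame_points: (N, 3) points in the camera frame; world_points: (M, 3) or empty."""
    transformed = frame_points @ R.T + t
    return np.vstack([world_points, transformed]) if len(world_points) else transformed
```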
{"title":"RGB-D camera-based parallel tracking and meshing","authors":"S. Lieberknecht, Andrea Huber, Slobodan Ilic, Selim Benhimane","doi":"10.1109/ISMAR.2011.6092380","DOIUrl":"https://doi.org/10.1109/ISMAR.2011.6092380","url":null,"abstract":"Compared to standard color cameras, RGB-D cameras are designed to additionally provide the depth of imaged pixels which in turn results in a dense colored 3D point cloud representing the environment from a certain viewpoint. We present a real-time tracking method that performs motion estimation of a consumer RGB-D camera with respect to an unknown environment while at the same time reconstructing this environment as a dense textured mesh. Unlike parallel tracking and mapping performed with a standard color or grey scale camera, tracking with an RGB-D camera allows a correctly scaled camera motion estimation. Therefore, there is no need for measuring the environment by any additional tool or equipping the environment by placing objects in it with known sizes. The tracking can be directly started and does not require any preliminary known and/or constrained camera motion. The colored point clouds obtained from every RGB-D image are used to create textured meshes representing the environment from a certain camera view and the real-time estimated camera motion is used to correctly align these meshes over time in order to combine them into a dense reconstruction of the environment. We quantitatively evaluated the proposed method using real image sequences of a challenging scenario and their corresponding ground truth motion obtained with a mechanical measurement arm. We also compared it to a commonly used state-of-the-art method where only the color information is used. We show the superiority of the proposed tracking in terms of accuracy, robustness and usability. We also demonstrate its usage in several Augmented Reality scenarios where the tracking allows a reliable camera motion estimation and the meshing increases the realism of the augmentations by correctly handling their occlusions.","PeriodicalId":298757,"journal":{"name":"2011 10th IEEE International Symposium on Mixed and Augmented Reality","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129972024","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Toward augmenting everything: Detecting and tracking geometrical features on planar objects
Pub Date: 2011-10-26 | DOI: 10.1109/ISMAR.2011.6092366
Hideaki Uchiyama, É. Marchand
This paper presents an approach for detecting and tracking various types of planar objects with geometrical features. We combine traditional keypoint detectors with Locally Likely Arrangement Hashing (LLAH) [21] for geometrical-feature-based keypoint matching. Because the stability of keypoint extraction affects the accuracy of the keypoint matching, we base the keypoint selection criteria on keypoint response and the distance between keypoints. To achieve robustness to scale changes, we build a non-uniform image pyramid according to the keypoint distribution at each scale. In the experiments, we evaluate the applicability of traditional keypoint detectors with LLAH for the detection. We also compare our approach with SURF and finally demonstrate that it is possible to detect and track different types of textures, including colorful pictures, binary fiducial markers, and handwriting.
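A simplified illustration of geometric-feature hashing in the spirit of LLAH (not the exact formulation of [21]): each keypoint is described by quantized, affine-invariant area ratios computed from its nearest neighbors, yielding a hash key that stays stable under moderate viewpoint distortion of a planar object. The neighbor count and quantization below are illustrative assumptions.

```python
# Simplified illustration (not the exact LLAH formulation of [21]): describe each
# keypoint by affine-invariant triangle-area ratios over its nearest neighbors and
# quantize them into a hash key usable for geometric keypoint matching.
import numpy as np
from itertools import combinations

def tri_area(a, b, c):
    """Unsigned area of the triangle (a, b, c) in 2D."""
    return 0.5 * abs((b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1]))

def arrangement_hash(keypoints, index, n_neighbors=6, n_bins=8, bin_width=0.5):
    """keypoints: (N, 2) array of positions; returns a tuple hash key for keypoints[index]."""
    p = keypoints[index]
    d = np.linalg.norm(keypoints - p, axis=1)
    neighbors = keypoints[np.argsort(d)[1:n_neighbors + 1]]   # skip the point itself
    ratios = []
    for a, b, c in combinations(range(len(neighbors)), 3):
        a1 = tri_area(p, neighbors[a], neighbors[b])
        a2 = tri_area(p, neighbors[b], neighbors[c]) + 1e-9
        ratios.append(a1 / a2)                                 # affine-invariant ratio
    # Quantize each ratio so small localization noise maps to the same key.
    return tuple(min(int(r / bin_width), n_bins - 1) for r in ratios)
```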
{"title":"Toward augmenting everything: Detecting and tracking geometrical features on planar objects","authors":"Hideaki Uchiyama, É. Marchand","doi":"10.1109/ISMAR.2011.6092366","DOIUrl":"https://doi.org/10.1109/ISMAR.2011.6092366","url":null,"abstract":"This paper presents an approach for detecting and tracking various types of planar objects with geometrical features. We combine traditional keypoint detectors with Locally Likely Arrangement Hashing (LLAH) [21] for geometrical feature based keypoint matching. Because the stability of keypoint extraction affects the accuracy of the keypoint matching, we set the criteria of keypoint selection on keypoint response and the distance between keypoints. In order to produce robustness to scale changes, we build a non-uniform image pyramid according to keypoint distribution at each scale. In the experiments, we evaluate the applicability of traditional keypoint detectors with LLAH for the detection. We also compare our approach with SURF and finally demonstrate that it is possible to detect and track different types of textures including colorful pictures, binary fiducial markers and handwritings.","PeriodicalId":298757,"journal":{"name":"2011 10th IEEE International Symposium on Mixed and Augmented Reality","volume":"84 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126409862","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A user study on the Snap-To-Feature interaction method
Pub Date: 2011-10-26 | DOI: 10.1109/ISMAR.2011.6092398
Gun A. Lee, M. Billinghurst
Recent advances in mobile computing and augmented reality (AR) technology have led to the popularization of mobile AR applications. Touch screen input is common in mobile devices and is also widely used in mobile AR applications. However, due to unsteady camera view movement, it can be hard to carry out precise interactions in handheld AR environments, for tasks such as tracing physical objects. In this research, we investigate a Snap-To-Feature interaction method that helps users perform more accurate touch screen interactions by attracting user input points to image features in the AR scene. A user experiment is performed using the method to trace a physical object, which is typical for modeling real objects within the AR scene. The results show that the Snap-To-Feature method makes a significant difference in the accuracy of touch-screen-based AR interaction.
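A minimal sketch of what such a snap step could look like (illustrative, not the study's implementation): the raw touch point is replaced by the nearest detected image feature if one lies within a pixel radius, otherwise the touch position is kept unchanged.

```python
# Minimal sketch (illustrative, not the study's implementation): snap a touch point
# to the nearest detected image feature within a pixel radius.
import numpy as np

def snap_to_feature(touch_xy, feature_xy, radius_px=20.0):
    """touch_xy: (2,) touch position; feature_xy: (M, 2) detected feature positions in pixels."""
    touch = np.asarray(touch_xy, dtype=float)
    if len(feature_xy) == 0:
        return touch
    d = np.linalg.norm(np.asarray(feature_xy, dtype=float) - touch, axis=1)
    i = int(np.argmin(d))
    return np.asarray(feature_xy[i], dtype=float) if d[i] <= radius_px else touch
```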
{"title":"A user study on the Snap-To-Feature interaction method","authors":"Gun A. Lee, M. Billinghurst","doi":"10.1109/ISMAR.2011.6092398","DOIUrl":"https://doi.org/10.1109/ISMAR.2011.6092398","url":null,"abstract":"Recent advances in mobile computing and augmented reality (AR) technology have lead to popularization of mobile AR applications. Touch screen input is common in mobile devices, and also widely used in mobile AR applications. However, due to unsteady camera view movement, it can be hard to carry out precise interactions in handheld AR environments, for tasks such as tracing physical objects. In this research, we investigate a Snap-To-Feature interaction method that helps users to perform more accurate touch screen interactions by attracting user input points to image features in the AR scene. A user experiment is performed using the method to trace a physical object, which is typical for modeling real objects within the AR scene. The results shows that the Snap-To-Feature method makes a significant difference in the accuracy of touch screen based AR interaction.","PeriodicalId":298757,"journal":{"name":"2011 10th IEEE International Symposium on Mixed and Augmented Reality","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126667073","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}