Efficient Computation of Absolute Pose for Gravity-Aware Augmented Reality
Chris Sweeney, John Flynn, B. Nuernberger, M. Turk, Tobias Höllerer
doi: 10.1109/ISMAR.2015.20

We propose a novel formulation for determining the absolute pose of a single or multi-camera system given a known vertical direction. The vertical direction may be easily obtained by detecting vertical vanishing points with computer vision techniques, or with the aid of IMU sensor measurements from a smartphone. Our solver is general and able to compute absolute camera pose from two 2D-3D correspondences for single or multi-camera systems. We run several synthetic experiments that demonstrate our algorithm's improved robustness to image and IMU noise compared to the current state of the art. Additionally, we run an image localization experiment that demonstrates the accuracy of our algorithm in real-world scenarios. Finally, we show that our algorithm provides increased performance for real-time model-based tracking compared to solvers that do not utilize the vertical direction, and we demonstrate our algorithm in an augmented reality application running on a Google Tango tablet.
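The key insight is that a known vertical direction removes two of the three rotational degrees of freedom: once the camera frame is rotated so that the measured gravity vector coincides with the world's up axis, only a yaw angle and the translation remain, and two 2D-3D correspondences suffice to determine them. The sketch below is not the authors' solver; it is a minimal numpy implementation of the standard upright two-point formulation (world up along +y, remaining rotation about the y-axis), assuming unit bearing vectors already rotated into the gravity-aligned frame.

```python
import numpy as np

def skew(v):
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]], float)

def gravity_align(g_cam, up=np.array([0.0, 1.0, 0.0])):
    """Rotation mapping the camera-frame up direction (-gravity) onto world up."""
    a = -np.asarray(g_cam, float)
    a /= np.linalg.norm(a)
    v, c = np.cross(a, up), a.dot(up)
    if np.isclose(c, -1.0):              # camera exactly upside down
        return np.diag([1.0, -1.0, -1.0])
    V = skew(v)
    return np.eye(3) + V + V @ V / (1.0 + c)   # Rodrigues formula

def upright_p2p(x, X):
    """x: 2 unit bearing vectors in the gravity-aligned camera frame,
    X: the 2 corresponding 3D world points. Returns (R, t) candidates."""
    A, r = [], []
    for xi, Xi in zip(x, X):
        # R_y(theta) @ Xi + t is linear in u = (cos, sin, tx, ty, tz):
        M = np.array([[Xi[0],  Xi[2], 1, 0, 0],
                      [0,      0,     0, 1, 0],
                      [Xi[2], -Xi[0], 0, 0, 1]], float)
        b = np.array([0.0, Xi[1], 0.0])
        A.append(skew(xi) @ M)           # bearing x (R X + t) = 0
        r.append(-skew(xi) @ b)
    A, r = np.vstack(A), np.hstack(r)
    u0 = np.linalg.lstsq(A, r, rcond=None)[0]    # particular solution
    n = np.linalg.svd(A)[2][-1]                  # 1-D null-space direction
    # enforce cos^2 + sin^2 = 1 on u = u0 + alpha*n -> quadratic in alpha
    qa = n[0]**2 + n[1]**2
    qb = 2.0 * (u0[0]*n[0] + u0[1]*n[1])
    qc = u0[0]**2 + u0[1]**2 - 1.0
    disc = max(qb*qb - 4.0*qa*qc, 0.0)
    sols = []
    for sgn in (1.0, -1.0):
        c_, s_, tx, ty, tz = u0 + ((-qb + sgn*np.sqrt(disc)) / (2.0*qa)) * n
        R = np.array([[c_, 0, s_], [0, 1, 0], [-s_, 0, c_]])
        sols.append((R, np.array([tx, ty, tz])))
    return sols
```

With exact correspondences the true pose appears among the quadratic's two roots; in practice the candidates would be disambiguated and refined inside a RANSAC loop.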
{"title":"Efficient Computation of Absolute Pose for Gravity-Aware Augmented Reality","authors":"Chris Sweeney, John Flynn, B. Nuernberger, M. Turk, Tobias Höllerer","doi":"10.1109/ISMAR.2015.20","DOIUrl":"https://doi.org/10.1109/ISMAR.2015.20","url":null,"abstract":"We propose a novel formulation for determining the absolute pose of a single or multi-camera system given a known vertical direction. The vertical direction may be easily obtained by detecting the vertical vanishing points with computer vision techniques, or with the aid of IMU sensor measurements from a smartphone. Our solver is general and able to compute absolute camera pose from two 2D-3D correspondences for single or multi-camera systems. We run several synthetic experiments that demonstrate our algorithm's improved robustness to image and IMU noise compared to the current state of the art. Additionally, we run an image localization experiment that demonstrates the accuracy of our algorithm in real-world scenarios. Finally, we show that our algorithm provides increased performance for real-time model-based tracking compared to solvers that do not utilize the vertical direction and show our algorithm in use with an augmented reality application running on a Google Tango tablet.","PeriodicalId":240196,"journal":{"name":"2015 IEEE International Symposium on Mixed and Augmented Reality","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117163229","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
[POSTER] Vergence-Based AR X-ray Vision
Y. Kitajima, Sei Ikeda, Kosuke Sato
doi: 10.1109/ISMAR.2015.58

An ideal AR x-ray vision system should enable users to clearly observe and grasp not only occludees but also occluders. We propose a novel selective visualization method for both occludee and occluder layers, with dynamic opacity depending on the user's gaze depth. Using gaze depth as the trigger for selecting a layer has an essential advantage over gestures or spoken commands: it avoids conflicts between the user's intentional commands and unintentional actions. An experiment based on a visual paired-comparison task shows that our method achieves a 20% higher success rate and reduces average task completion time by 30% compared to a non-selective method that renders the occluder at constant half transparency.
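Gaze depth is recoverable from binocular eye tracking: the left and right gaze rays converge near the fixated object, so the midpoint of their closest approach gives a depth estimate that can drive per-layer opacity. A minimal sketch of that idea follows (illustrative only; the Gaussian opacity falloff and its width are assumptions, not the paper's implementation):

```python
import numpy as np

def vergence_point(p_l, d_l, p_r, d_r):
    """Midpoint of closest approach between the two gaze rays p + s*d."""
    d_l, d_r = d_l / np.linalg.norm(d_l), d_r / np.linalg.norm(d_r)
    w = p_l - p_r
    b = d_l.dot(d_r)
    dl_w, dr_w = d_l.dot(w), d_r.dot(w)
    denom = 1.0 - b * b              # near zero if the rays are parallel
    s = (b * dr_w - dl_w) / denom
    t = (dr_w - b * dl_w) / denom
    return 0.5 * ((p_l + s * d_l) + (p_r + t * d_r))

def layer_opacity(gaze_depth, layer_depth, falloff=0.5):
    """A layer becomes opaque as the vergence depth approaches its own depth."""
    return float(np.exp(-((gaze_depth - layer_depth) / falloff) ** 2))
```

The gaze depth is then the distance from the midpoint of the two eyes to the vergence point, evaluated against the known depths of the occluder and occludee layers.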
{"title":"[POSTER] Vergence-Based AR X-ray Vision","authors":"Y. Kitajima, Sei Ikeda, Kosuke Sato","doi":"10.1109/ISMAR.2015.58","DOIUrl":"https://doi.org/10.1109/ISMAR.2015.58","url":null,"abstract":"The ideal AR x-ray vision should enable users to clearly observe and grasp not only occludees, but also occluders. We propose a novel selective visualization method of both occludee and oc-cluder layers with dynamic opacity depending on the user's gaze depth. Using the gaze depth as a trigger to select the layers has a essential advantage over using other gestures or spoken commands in the sense of avoiding collision between user's intentional commands and unintentional actions. Our experiment by a visual paired-comparison task shows that our method has achieved a 20% higher success rate, and significantly reduced 30% of the average task completion time than a non-selective method using a constant and half transparency.","PeriodicalId":240196,"journal":{"name":"2015 IEEE International Symposium on Mixed and Augmented Reality","volume":"94 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117306120","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
[POSTER] Content Completion in Lower Dimensional Feature Space through Feature Reduction and Compensation
Mariko Isogawa, Dan Mikami, Kosuke Takahashi, Akira Kojima
doi: 10.1109/ISMAR.2015.45

A novel framework for image/video content completion comprising three stages is proposed. First, input images/videos are converted to a lower-dimensional feature space, which makes restoration effective even when a damaged region includes complex structures and changes in color. Second, the damaged region is restored in the converted feature space. Finally, an inverse conversion from the lower-dimensional feature space to the original space is performed to generate the completed image. This three-step solution yields two advantages. First, it enhances the possibility of applying patches that are dissimilar to the target in the original color space. Second, it enables the use of many existing restoration methods, each with its own advantages, because only the feature space in which similar patches are retrieved changes. Experiments verify the effectiveness of the proposed framework.
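The gain comes from matching patches in a space that discards the attributes (here, color) on which source and target may legitimately differ. The toy sketch below illustrates the principle, not the paper's pipeline: patches are scored on a one-dimensional luminance feature, yet full-color pixels are copied into the hole.

```python
import numpy as np

def to_feature(img):
    # reduction step: RGB -> luminance (any lower-dimensional feature works)
    return img @ np.array([0.299, 0.587, 0.114])

def complete(img, mask, p=8):
    """Naive patch-based completion: match in the reduced space, copy colour.
    img: (H, W, 3) float array; mask: (H, W) bool, True = damaged."""
    feat = to_feature(img)
    H, W = mask.shape
    # candidate source patches fully outside the damaged region
    sources = [(y, x) for y in range(0, H - p, p) for x in range(0, W - p, p)
               if not mask[y:y+p, x:x+p].any()]
    out = img.copy()
    for y in range(0, H - p, p):
        for x in range(0, W - p, p):
            hole = mask[y:y+p, x:x+p]
            if not hole.any():
                continue
            known = ~hole
            # score candidates on the *reduced* feature, known pixels only
            def cost(s):
                sy, sx = s
                d = feat[sy:sy+p, sx:sx+p] - feat[y:y+p, x:x+p]
                return ((d * known) ** 2).sum()
            sy, sx = min(sources, key=cost)
            out[y:y+p, x:x+p][hole] = img[sy:sy+p, sx:sx+p][hole]
    return out
```

Because the cost ignores color, a correctly textured but differently colored region can still serve as a source, which is exactly the first advantage claimed above.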
{"title":"[POSTER] Content Completion in Lower Dimensional Feature Space through Feature Reduction and Compensation","authors":"Mariko Isogawa, Dan Mikami, Kosuke Takahashi, Akira Kojima","doi":"10.1109/ISMAR.2015.45","DOIUrl":"https://doi.org/10.1109/ISMAR.2015.45","url":null,"abstract":"A novel framework for image/video content completion comprising three stages is proposed. First, input images/videos are converted to a lower dimensional feature space, which is done to achieve effective restoration even in cases where a damaged region includes complex structures and changes in color. Second, a damaged region is restored in the converted feature space. Finally, an inverse conversion from the lower dimensional feature space to the original feature space is performed to generate the completed image in the original feature space. This three-step solution generates two advantages. First, it enhances the possibility of applying patches dissimilar to those in the original color space. Second, it enables the use of many existing restoration methods, each having various advantages, because the feature space for retrieving the similar patches is the only extension. Experiments verify the effectiveness of the proposed framework.","PeriodicalId":240196,"journal":{"name":"2015 IEEE International Symposium on Mixed and Augmented Reality","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117337272","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
[POSTER] Manipulating Haptic Shape Perception by Visual Surface Deformation and Finger Displacement in Spatial Augmented Reality
Toshio Kanamori, D. Iwai, Kosuke Sato
doi: 10.1109/ISMAR.2015.59

Many researchers are trying to realize pseudo-haptic systems that visually manipulate a user's haptic shape perception when touching a physical object. In this paper, we focus on manipulating the perceived surface shape of a curved object touched with an index finger: using spatial augmented reality, we visually deform the object's surface and displace the visual representation of the user's index finger as if it were touching the deformed surface. Experiments were conducted with a projection system to confirm the effect of this visual feedback on the perceived shape of a curved surface. The results support the proposed concept.
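The core rendering step is small: wherever the real fingertip rests on the physical surface, the virtual fingertip is drawn on the visually deformed surface instead, i.e., offset by the local height difference between the two shapes. A minimal sketch, with both surfaces given as height functions over the touch plane (an assumption for illustration, not the paper's representation):

```python
import numpy as np

def displaced_fingertip(p_real, real_h, virt_h):
    """Shift the rendered fingertip from the physical surface onto the
    visually deformed one by the local height difference."""
    x, y, z = p_real
    return np.array([x, y, z + (virt_h(x, y) - real_h(x, y))])

# e.g. a flat physical plate with a virtual bump at the origin:
real_h = lambda x, y: 0.0
virt_h = lambda x, y: 0.02 * np.exp(-(x * x + y * y) / 0.01)
print(displaced_fingertip(np.array([0.0, 0.0, 0.0]), real_h, virt_h))
```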
{"title":"[POSTER] Manipulating Haptic Shape Perception by Visual Surface Deformation and Finger Displacement in Spatial Augmented Reality","authors":"Toshio Kanamori, D. Iwai, Kosuke Sato","doi":"10.1109/ISMAR.2015.59","DOIUrl":"https://doi.org/10.1109/ISMAR.2015.59","url":null,"abstract":"Many researchers are trying to realize a pseudo-haptic system which can visually manipulate a user's haptic shape perception when touching a physical object. In this paper, we focus on manipulating the perceived surface shape of a curved object when touching it with an index finger, by visually deforming it's surface shape and displacing the visual representation of the user's index finger as like s/he is touching the deformed surface, using spatial augmented reality. Experiments were conducted with a projection system to confirm the effect of the visual feedback for manipulating the perceived shape of curved surface. The results supported the proposed concept.","PeriodicalId":240196,"journal":{"name":"2015 IEEE International Symposium on Mixed and Augmented Reality","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126191736","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
[POSTER] Maintaining Appropriate Interpersonal Distance Using Virtual Body Size
Masaki Maeda, Nobuchika Sakata
doi: 10.1109/ISMAR.2015.57

Securing one's personal space is important for leading a comfortable social life, yet it is difficult to maintain an appropriate interpersonal distance at all times. We therefore propose an interpersonal distance control system built on a video see-through setup consisting of a head-mounted display (HMD), a depth sensor, and an RGB camera. The system controls the perceived interpersonal distance by changing the size of the other person in the HMD view. In this paper, we describe the proposed system, conduct an experiment to confirm its capability, and present and discuss the results.
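Because the visual angle a person subtends falls off roughly inversely with distance, rescaling their silhouette in the video feed shifts where they appear to stand. A sketch of that relation follows (an assumption about how such a system could map sizes, not the paper's exact function):

```python
def rescale_factor(d_real, d_desired):
    """Scale that makes a person at d_real subtend the visual angle they
    would at d_desired (small-angle approximation: angle ~ height/distance)."""
    return d_real / d_desired

# someone 1.2 m away should feel as if they stand at a comfortable 2.0 m:
print(rescale_factor(1.2, 2.0))   # -> 0.6, i.e. shrink them to 60 %
```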
{"title":"[POSTER] Maintaining Appropriate Interpersonal Distance Using Virtual Body Size","authors":"Masaki Maeda, Nobuchika Sakata","doi":"10.1109/ISMAR.2015.57","DOIUrl":"https://doi.org/10.1109/ISMAR.2015.57","url":null,"abstract":"Securing one's personal space is quite important in leading a comfortable social life. However, it is difficult to maintain an appropriate interpersonal distance all the time. Therefore, we propose an interpersonal distance control system with a video see-through system, consisting of a head-mounted display (HMD), depth sensor, and RGB camera. The proposed system controls the interpersonal distance by changing the size of the person in the HMD view. In this paper, we describe the proposed system and conduct an experiment to confirm the capability of the proposed system. Finally, we show and discuss the results of the experiment.","PeriodicalId":240196,"journal":{"name":"2015 IEEE International Symposium on Mixed and Augmented Reality","volume":"107 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132377419","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
[POSTER] Transforming Your Website to an Augmented Reality View
D. Ververidis, S. Nikolopoulos, Y. Kompatsiaris
doi: 10.1109/ISMAR.2015.33

In this paper we present FastAR, a software component capable of transforming Joomla-based websites into AR channels compatible with the most popular augmented reality browsers (i.e., Junaio, Layar, Wikitude). FastAR exploits the consistency of the data structure across sites built with the same content management system to automate the transformation of a website into an augmented reality channel. The component abstracts away all related programming tasks and significantly reduces the time required to generate and publish AR content, making the entire process manageable by non-experts. To verify the usefulness and effectiveness of FastAR, we conducted a survey soliciting the opinions of users who carried out the installation and transformation process.
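The automation rests on the fact that every Joomla site stores articles in the same database tables, so a single mapping from article rows to an AR browser's point-of-interest feed covers all such sites. A schematic sketch follows; the output loosely imitates a Layar-style getPOIs response, and the input record layout is an assumption, not taken from FastAR.

```python
import json

def articles_to_channel(articles):
    """Map CMS article records to an AR-browser POI response
    (schematic; field names loosely follow Layar's getPOIs format)."""
    hotspots = [{
        "id": a["id"],
        "text": {"title": a["title"], "description": a["intro"]},
        "anchor": {"geolocation": {"lat": a["lat"], "lon": a["lon"]}},
    } for a in articles]
    return json.dumps({"hotspots": hotspots, "errorCode": 0,
                       "errorString": "ok"})

# hypothetical article row pulled from the CMS database:
print(articles_to_channel([{"id": 1, "title": "Museum entrance",
                            "intro": "Opening hours and tickets",
                            "lat": 40.6401, "lon": 22.9444}]))
```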
{"title":"[POSTER] Transforming Your Website to an Augmented Reality View","authors":"D. Ververidis, S. Nikolopoulos, Y. Kompatsiaris","doi":"10.1109/ISMAR.2015.33","DOIUrl":"https://doi.org/10.1109/ISMAR.2015.33","url":null,"abstract":"In this paper we present FastAR, a software component capable of transforming Joomla based websites into AR-channels compatible with the most popular augmented reality browsers (i.e. Junaio, Layar, Wikitude). FastAR exploits the consistency of the data structure across multiple sites that have been developed using the same content management system, so as to automate the transformation process of an internet website to an augmented reality channel. The proposed component abstracts all related programming tasks and significantly reduces the time required to generate and publish AR-content, making the entire process manageable by non-experts. In verifying the usefulness and effectiveness of FastAR, we conducted a survey to solicit the opinion of users who carried out the installation and transformation process.","PeriodicalId":240196,"journal":{"name":"2015 IEEE International Symposium on Mixed and Augmented Reality","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134565216","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
[POSTER] Mixed-Reality Store on the Other Side of a Tablet
M. Ohta, Shunsuke Nagano, Hotaka Niwa, K. Yamashita
doi: 10.1109/ISMAR.2015.60

This paper proposes a mixed-reality shopping system for users who do not own a PC but do own a tablet. While viewing panoramic images photographed along the aisles of a real store, the user can move freely around the store; products can be selected and viewed from any angle. Furthermore, photo-based augmented reality (Photo AR) technology lets a product be displayed as if it were in the user's hands. A user evaluation showed that, even though the proposed system uses a tablet with a smaller screen, it was preferred over a conventional e-commerce site shown on a larger monitor.
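Panorama-based navigation of this kind typically renders a view window cut from an equirectangular image according to the tablet's orientation. A toy sketch of that lookup (illustrative; the paper does not specify its projection code):

```python
def pano_window(pano, yaw_deg, fov_deg=60.0):
    """Crop the tablet's current view out of an equirectangular panorama.
    pano: (H, W, 3) array covering 360 degrees horizontally."""
    H, W, _ = pano.shape
    w = int(W * fov_deg / 360.0)
    x0 = int((yaw_deg % 360.0) / 360.0 * W)
    cols = [(x0 + i) % W for i in range(w)]   # wrap around the seam
    return pano[:, cols]
```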
{"title":"[POSTER] Mixed-Reality Store on the Other Side of a Tablet","authors":"M. Ohta, Shunsuke Nagano, Hotaka Niwa, K. Yamashita","doi":"10.1109/ISMAR.2015.60","DOIUrl":"https://doi.org/10.1109/ISMAR.2015.60","url":null,"abstract":"This paper proposes a mixed-reality shopping system for users who do not own a PC but do own a tablet. In this system, while viewing panoramic images photographed along the aisles of a real store, the user can move freely around the store. Products can be selected and freely viewed from any angle. Furthermore, by utilizing a Photo-based augmented reality (Photo AR) technology the product can be displayed as if it were in the hands of the user. The results of a user evaluation showed that even though the proposed system uses a tablet with a smaller screen it was preferred over a conventional e-commerce site using a larger monitor.","PeriodicalId":240196,"journal":{"name":"2015 IEEE International Symposium on Mixed and Augmented Reality","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114941419","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
[POSTER] AR4AR: Using Augmented Reality for guidance in Augmented Reality Systems Setup
Frieder Pankratz, G. Klinker
doi: 10.1109/ISMAR.2015.41

AR systems have been developed for many years now, ranging from systems with a single sensor and output device to systems with a multitude of sensors and/or output devices. As setups grow more complex, handling the different sensors, and the calibrations and registrations between them, grows correspondingly harder. A much-needed (yet missing) class of augmented reality applications is one that supports AR system engineers in setting up and maintaining an AR system by providing visual guides and immediate feedback on the current quality of their calibration measurements. In this poster we present an approach that uses augmented reality itself to support the user in calibrating an augmented reality system.
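Immediate feedback on calibration quality can be as simple as re-projecting the current calibration onto the live view and reporting the residual. The sketch below computes an RMS reprojection error as such a score; it is a generic metric, not necessarily the one the authors display.

```python
import numpy as np

def reprojection_rmse(K, R, t, X, x):
    """RMS reprojection error of a calibration: K intrinsics (3x3),
    R, t extrinsics, X (N,3) world points, x (N,2) measured pixels."""
    P = K @ np.hstack([R, t.reshape(3, 1)])          # 3x4 projection matrix
    Xh = np.hstack([X, np.ones((len(X), 1))])        # homogeneous points
    proj = (P @ Xh.T).T
    proj = proj[:, :2] / proj[:, 2:3]                # perspective divide
    return float(np.sqrt(((proj - x) ** 2).sum(axis=1).mean()))
```

Shown live in the AR view, such a score lets the engineer see immediately whether a new calibration measurement improved or degraded the setup.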
{"title":"[POSTER] AR4AR: Using Augmented Reality for guidance in Augmented Reality Systems Setup","authors":"Frieder Pankratz, G. Klinker","doi":"10.1109/ISMAR.2015.41","DOIUrl":"https://doi.org/10.1109/ISMAR.2015.41","url":null,"abstract":"AR systems have been developed for many years now, ranging from systems consisting of a single sensor and output device to systems with a multitude of sensors and/or output devices. With the increasing complexity of the setup, the complexity of handling the different sensors as well as the necessary calibrations and registrations increases accordingly. A much needed (yet missing) area of augmented reality applications is to support AR system engineers when they set up and maintain an AR system by providing visual guides and giving immediate feedback on the current quality of their calibration measurements. In this poster we present an approach to use Augmented Reality itself to support the user in calibrating an Augmented Reality system.","PeriodicalId":240196,"journal":{"name":"2015 IEEE International Symposium on Mixed and Augmented Reality","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133915873","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
[POSTER] Affording Visual Feedback for Natural Hand Interaction in AR to Assess Upper Extremity Motor Dysfunction
Marina-Anca Cidotã, Rory M. S. Clifford, P. Dezentje, S. Lukosch, Paulina J. M. Bank
doi: 10.1109/ISMAR.2015.29

The clinical community has a great need for objective, quantitative, and valid measures of the factors contributing to motor dysfunction. Currently there are no standard protocols for assessing motor dysfunction across patient groups; each medical discipline uses subjectively scored clinical tests, qualitative video analysis, or marker-based motion capture. We investigate the potential of augmented reality (AR) combined with serious gaming and markerless hand tracking to enable efficient, cost-effective, and patient-friendly evaluation of upper extremity motor dysfunction in different patient groups. We first describe the design process of the game and the system architecture of the AR framework. For unhindered assessment of motor dysfunction, patients should operate the system in a natural way and be able to understand their actions in the virtual AR world. To test this in our system, we conducted a usability study with five healthy participants (aged 57-63) on three modalities of visual feedback for natural hand interaction with AR objects: no augmented hand, a partially augmented hand (tips of the index finger and thumb), and a full augmented hand model. The results show that a virtual representation of the fingertips or hand improves the usability of natural hand interaction.
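The three feedback conditions differ only in which tracked joints receive a virtual overlay, so a renderer can switch between them with a small selector like the sketch below (joint names are assumptions for illustration, not the framework's API):

```python
from enum import Enum

class HandFeedback(Enum):
    NONE = 0
    FINGERTIPS = 1
    FULL_HAND = 2

def joints_to_render(mode, skeleton):
    """Pick which tracked joints get a virtual overlay, matching the three
    feedback modalities compared in the study."""
    if mode is HandFeedback.NONE:
        return []
    if mode is HandFeedback.FINGERTIPS:
        return [skeleton["index_tip"], skeleton["thumb_tip"]]
    return list(skeleton.values())
```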
{"title":"[POSTER] Affording Visual Feedback for Natural Hand Interaction in AR to Assess Upper Extremity Motor Dysfunction","authors":"Marina-Anca Cidotã, Rory M. S. Clifford, P. Dezentje, S. Lukosch, Paulina J. M. Bank","doi":"10.1109/ISMAR.2015.29","DOIUrl":"https://doi.org/10.1109/ISMAR.2015.29","url":null,"abstract":"For the clinical community, there is great need for objective, quantitative and valid measures of the factors contributing to motor dysfunction. Currently, there are no standard protocols to assess motor dysfunction in various patient groups, where each medical discipline uses subjectively scored clinical tests, qualitative video analysis, or marker-based motion capturing. We investigate the potential of Augmented Reality (AR) combined with serious gaming and marker-less tracking of the hand to facilitate efficient, cost-effective and patient-friendly methods for evaluation of upper extremity motor dysfunction in different patient groups. First, the design process of the game and the system architecture of the AR framework are described. To provide unhindered assessment of motor dysfunction, patients should operate with the system in a natural way and be able to understand their actions in the virtual AR world. To test this in our system, we conducted a usability study with five healthy people (aged between 57-63) on three different modalities of visual feedback for natural hand interaction with AR objects. These modalities are: no augmented hand, partial augmented hand (tip of index finger and tip of thumb) and a full augmented hand model. The results of the study show that a virtual representation of the fingertips or hand improves the usability of natural hand interaction.","PeriodicalId":240196,"journal":{"name":"2015 IEEE International Symposium on Mixed and Augmented Reality","volume":"131 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124062187","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Augmented Reality during Cutting and Tearing of Deformable Objects
Christoph J. Paulus, Nazim Haouchine, David Cazier, S. Cotin
doi: 10.1109/ISMAR.2015.19

Current methods for non-rigid augmented reality only provide an augmented view while the topology of the tracked object remains unchanged, which is an important limitation. In this paper we address this shortcoming by introducing a method for physics-based non-rigid augmented reality. Singularities caused by topological changes are detected by analyzing the displacement field of the underlying deformable model; the detected changes are then applied to the physics-based model to approximate the real cut. All steps, from deformation to cutting simulation, run in real time, which significantly improves the coherence between the actual view and the model.
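A displacement-field singularity shows up locally as two neighbouring mesh nodes that were close at rest but whose displacements diverge sharply. The sketch below flags such edges as cut candidates; it is a crude stand-in for the paper's detection scheme, and the threshold is a made-up value.

```python
import numpy as np

def cut_candidate_edges(rest, current, edges, tau=0.5):
    """Flag mesh edges whose endpoint displacements diverge strongly
    relative to the edge's rest length -- candidates for a cut or tear.
    rest, current: (N, 3) node positions; edges: list of (i, j) pairs."""
    disp = current - rest
    candidates = []
    for i, j in edges:
        rest_len = np.linalg.norm(rest[i] - rest[j])
        stretch = np.linalg.norm(disp[i] - disp[j]) / rest_len
        if stretch > tau:          # displacement jump across the edge
            candidates.append((i, j))
    return candidates
```

Edges flagged this way would then be split in the physics-based model so the simulated topology follows the real cut.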
{"title":"Augmented Reality during Cutting and Tearing of Deformable Objects","authors":"Christoph J. Paulus, Nazim Haouchine, David Cazier, S. Cotin","doi":"10.1109/ISMAR.2015.19","DOIUrl":"https://doi.org/10.1109/ISMAR.2015.19","url":null,"abstract":"Current methods dealing with non-rigid augmented reality only provide an augmented view when the topology of the tracked object is not modified, which is an important limitation. In this paper we solve this shortcoming by introducing a method for physics-based non-rigid augmented reality. Singularities caused by topological changes are detected by analyzing the displacement field of the underlying deformable model. These topological changes are then applied to the physics-based model to approximate the real cut. All these steps, from deformation to cutting simulation, are performed in real-time. This significantly improves the coherence between the actual view and the model, and provides added value.","PeriodicalId":240196,"journal":{"name":"2015 IEEE International Symposium on Mixed and Augmented Reality","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125368684","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}