The effect of out-of-focus blur on visual discomfort when using stereo displays
Pub Date: 2010-11-22 | DOI: 10.1109/ISMAR.2010.5643544
T. Blum, M. Wieczorek, A. Aichert, Radhika Tibrewal, Nassir Navab
Visual discomfort is a major problem for head-mounted displays and other stereo displays. One effect known to reduce visual comfort is double vision, which can occur at high disparities. Previous studies suggest that adding artificial out-of-focus blur increases the fusional limits, i.e., the disparity range within which the left and right images can be fused without double vision. We investigate the effect of adding artificial out-of-focus blur on visual discomfort using two different setups. One uses a stereo monitor and an eye tracker to change the depth of focus based on the user's gaze. The other uses a video see-through head-mounted display. A study involving 18 subjects showed that viewing comfort when using blur is significantly higher in both setups for virtual scenes. However, we cannot confirm beyond doubt that the higher viewing comfort is related only to an increase of the fusional limits, as many subjects reported that double vision did not occur during the experiment. Results for additional photographed images shown to the subjects were less significant. A first prototype of an AR system that extracts a depth map from stereo images and adds artificial out-of-focus blur is presented.
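For illustration, the core of such an artificial depth-of-field effect can be sketched in a few lines: given a per-pixel depth map and the current focus depth (e.g., from the eye tracker), each pixel is blurred in proportion to its distance from the focal plane. This is a minimal sketch under those assumptions; all names and parameters (`depth_of_field`, `max_sigma`) are illustrative, not taken from the paper.

```python
import cv2
import numpy as np

def apply_defocus_blur(image, depth, focus_depth, depth_of_field=0.1, max_sigma=5.0):
    """Blend between a sharp and a blurred image based on distance from focus."""
    # Defocus amount in [0, 1]: zero inside the depth of field, growing with
    # the pixel's depth distance from the focal plane.
    defocus = np.clip(np.abs(depth - focus_depth) / depth_of_field - 1.0, 0.0, 1.0)
    blurred = cv2.GaussianBlur(image, (0, 0), sigmaX=max_sigma)
    alpha = defocus[..., None]          # broadcast over the color channels
    return (alpha * blurred + (1.0 - alpha) * image).astype(image.dtype)
```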
{"title":"The effect of out-of-focus blur on visual discomfort when using stereo displays","authors":"T. Blum, M. Wieczorek, A. Aichert, Radhika Tibrewal, Nassir Navab","doi":"10.1109/ISMAR.2010.5643544","DOIUrl":"https://doi.org/10.1109/ISMAR.2010.5643544","url":null,"abstract":"Visual discomfort is a major problem for head-mounted displays and other stereo displays. One effect that is known to reduce visual comfort is double vision, which can occur due to high disparities. Previous studies suggest that adding artificial out-of-focus blur increases the fusional limits, where the left and right image can be fused without double vision. We investigate the effect of adding artificial out-of-focus blur on visual discomfort using two different setups. One uses a stereo monitor and an eye tracker to change the depth of focus based on the gaze of the user. The other one uses a video-see through head mounted display. A study involving 18 subjects showed that the viewing comfort when using blur is significantly higher in both setups for virtual scenes. However we can not confirm without doubt that the higher viewing comfort is only related to an increase of the fusional limits, as many subjects reported that double vision did not occur during the experiment. Results for additional photographed images that have been shown to the subjects were less significant. A first prototype of an AR system extracting a depth map from stereo images and adding artificial out-of-focus blur is presented.","PeriodicalId":250608,"journal":{"name":"2010 IEEE International Symposium on Mixed and Augmented Reality","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116681406","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Experiences with an AR evaluation test bed: Presence, performance, and physiological measurement
Pub Date: 2010-11-22 | DOI: 10.1109/ISMAR.2010.5643560
Maribeth Gandy Coleman, R. Catrambone, B. MacIntyre, Chris Alvarez, Elsa Eiríksdóttir, M. Hilimire, Brian Davidson, A. McLaughlin
This paper discusses an experiment carried out in an AR test bed called “the pit”. Inspired by the well-known VR acrophobia study of Meehan et al. [18], the experimental goals were to explore whether VR presence instruments are useful in AR (and to modify them where appropriate), to compare additional measures against these well-researched techniques, and to determine whether findings from VR evaluations transfer to AR. An experimental protocol appropriate for AR was developed. The initial findings concern varying immersion factors (frame rate) and their effect on feelings of presence, user performance, and behavior. Unlike the VR study, which found that differing frame rates affect presence measures, there were few differences among the five frame rate modes in our study as measured by the qualitative and quantitative instruments, which included physiological responses, a custom presence questionnaire, task performance, and user behavior. The AR presence questionnaire indicated that users experienced a high feeling of presence in all frame rate modes. Behavior, performance, and interview results indicated that participants felt anxiety in the pit environment. However, the physiological data did not reflect this anxiety, due to factors of user experience and experiment design. Efforts to develop a useful AR test bed and to extract results from a large data set have produced a body of knowledge related to AR evaluation that can inform others seeking to create AR experiments.
{"title":"Experiences with an AR evaluation test bed: Presence, performance, and physiological measurement","authors":"Maribeth Gandy Coleman, R. Catrambone, B. MacIntyre, Chris Alvarez, Elsa Eiríksdóttir, M. Hilimire, Brian Davidson, A. McLaughlin","doi":"10.1109/ISMAR.2010.5643560","DOIUrl":"https://doi.org/10.1109/ISMAR.2010.5643560","url":null,"abstract":"This paper discusses an experiment carried out in an AR test bed called “the pit”. Inspired by the well-known VR acrophobia study of Meehan et al. [18], the experimental goals were to explore whether VR presence instruments were useful in AR (and to modify them where appropriate), to compare additional measures to these well-researched techniques, and to determine if findings from VR evaluations can be transferred to AR. An experimental protocol appropriate for AR was developed. The initial experimental findings concern varying immersion factors (frame rate) and their effect on feelings of presence, user performance and behavior. Unlike the VR study, which found differing frame rates to affect presence measures, there were few differences in the five frame rate modes in our study as measured by the qualitative and quantitative instruments, which included physiological responses, a custom presence questionnaire, task performance, and user behavior. The AR presence questionnaire indicated users experienced a high feeling of presence in all frame rate modes. Behavior, performance, and interview results indicated the participants felt anxiety in the pit environment. However, the physiological data did not reflect this anxiety due to factors of user experience and experiment design. Efforts to develop a useful AR test bed and to identify results from a large data set has produced a body of knowledge related to AR evaluation that can inform others seeking to create AR experiments.","PeriodicalId":250608,"journal":{"name":"2010 IEEE International Symposium on Mixed and Augmented Reality","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129587019","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Augmentation of check in/out model for remote collaboration with Mixed Reality
Pub Date: 2010-11-22 | DOI: 10.1109/ISMAR.2010.5643588
G. Kamei, Takeshi Matsuyama, Ken-ichi Okada
This paper proposes an augmentation of the check-in/check-out model for remote collaboration with Mixed Reality (MR). We add a 3D shared space and a private space to the real workspace using MR technology and extend the check-in/check-out model to remote collaboration. With our proposal, a user can intuitively receive a remote partner's work via virtual objects in the shared space, and can stop sharing information about an object by simply moving it into the private space. We implement a system realizing this proposal and evaluate it.
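As a rough sketch of how such a model could be represented in software (the class and method names below are assumptions for illustration, not the authors' implementation), an object simply migrates between a replicated shared store and a per-user private store:

```python
class Workspace:
    """Minimal check-in/check-out bookkeeping for one collaborative session."""

    def __init__(self):
        self.shared = {}    # object id -> state, replicated to all users
        self.private = {}   # user id -> {object id -> state}, local only

    def check_out(self, user, obj_id):
        """Stop sharing: move the object from the shared to the user's private space."""
        self.private.setdefault(user, {})[obj_id] = self.shared.pop(obj_id)

    def check_in(self, user, obj_id):
        """Resume sharing: move the object back into the shared space."""
        self.shared[obj_id] = self.private[user].pop(obj_id)
```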
{"title":"Augmentation of check in/out model for remote collaboration with Mixed Reality","authors":"G. Kamei, Takeshi Matsuyama, Ken-ichi Okada","doi":"10.1109/ISMAR.2010.5643588","DOIUrl":"https://doi.org/10.1109/ISMAR.2010.5643588","url":null,"abstract":"This paper proposes augmentation of check in/out model for remote collaboration with Mixed Reality (MR). We add a 3D shared and private space into the real workspace by MR technology, and augment the check in/out model to remote collaboration. By our proposal, user can intuitively receive remote partner's work via virtual objects in the shared space and stop sharing information about an object by just moving the object into the private space if the user doesn't want to share it. We implement a system which achieves our proposal and evaluate it.","PeriodicalId":250608,"journal":{"name":"2010 IEEE International Symposium on Mixed and Augmented Reality","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129476676","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Augmented telepresence using autopilot airship and omni-directional camera
Pub Date: 2010-11-22 | DOI: 10.1109/ISMAR.2010.5643596
Fumio Okura, M. Kanbara, N. Yokoya
This study concerns a large-scale telepresence system based on remote control of a mobile robot or aerial vehicle. The proposed system provides the user not only with a view of the remote site but also with related information rendered by AR techniques; such systems are referred to as augmented telepresence in this paper. Aerial imagery can capture a wider area at once than images captured from the ground. However, it is difficult for a user to change the position and direction of the viewpoint freely because of the difficulty of remote control and hardware limitations. To overcome these problems, the proposed system uses an autopilot airship to support changing the user's viewpoint and employs an omni-directional camera so that the viewing direction can be changed easily. This paper describes the hardware configuration for aerial imagery, an approach for overlaying virtual objects, and automatic control of the airship, as well as experimental results obtained with a prototype system.
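To illustrate the viewing-direction part, the sketch below maps a desired (yaw, pitch) gaze direction to pixel coordinates in an equirectangular omni-directional frame; it assumes a full 360×180-degree panorama, and the function name is illustrative rather than taken from the paper.

```python
import numpy as np

def direction_to_pixel(yaw, pitch, width, height):
    """Map a viewing direction in radians to equirectangular pixel coordinates.

    yaw in [-pi, pi] wraps around the horizontal axis; pitch in
    [-pi/2, pi/2] runs from the bottom to the top of the panorama.
    """
    u = int((yaw / (2 * np.pi) + 0.5) * width) % width
    v = int((0.5 - pitch / np.pi) * height)
    return u, min(max(v, 0), height - 1)
```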
{"title":"Augmented telepresence using autopilot airship and omni-directional camera","authors":"Fumio Okura, M. Kanbara, N. Yokoya","doi":"10.1109/ISMAR.2010.5643596","DOIUrl":"https://doi.org/10.1109/ISMAR.2010.5643596","url":null,"abstract":"This study is concerned with a large-scale telepresence system based on remote control of mobile robot or aerial vehicle. The proposed system provides a user with not only view of remote site but also related information by AR technique. Such systems are referred to as augmented telepresence in this paper. Aerial imagery can capture a wider area at once than image capturing from the ground. However, it is difficult for a user to change position and direction of viewpoint freely because of the difficulty in remote control and limitation of hardware. To overcome these problems, the proposed system uses an autopilot airship to support changing user's viewpoint and employs an omni-directional camera for changing viewing direction easily. This paper describes hardware configuration for aerial imagery, an approach for overlaying virtual objects, and automatic control of the airship, as well as experimental results using a prototype system.","PeriodicalId":250608,"journal":{"name":"2010 IEEE International Symposium on Mixed and Augmented Reality","volume":"120 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115407247","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Web Service Platform dedicated to building mixed reality solutions
Pub Date: 2010-11-22 | DOI: 10.1109/ISMAR.2010.5643619
P. Belimpasakis, Petri Selonen, Yu You
While many attempts have been made to create mixed reality platforms for mobile client devices, there have been no significant efforts on the server/infrastructure side. We demonstrate our Mixed Reality Web Service Platform (MRS-WS) [2], dedicated to enabling the rapid creation of mixed reality solutions, whether desktop or mobile. By focusing on common interfaces and functions across user-generated and commercial geo-content, we provide an appealing developer offering, which we are currently evaluating with a closed set of university partners. Our plan is to gradually expand developer API access to more partners before deciding whether it is ready for fully public developer access.
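A client interaction with such a platform might look like the following sketch; the endpoint URL, query parameters, and response fields are hypothetical placeholders for illustration, since the abstract does not publish the API surface.

```python
import requests

# Hypothetical geo-content query against an MRS-WS-style REST interface:
# fetch mixed reality content items near a given location.
resp = requests.get(
    "https://example.com/mrs-ws/content",   # placeholder base URL
    params={"lat": 60.17, "lon": 24.94, "radius": 500, "format": "json"},
)
resp.raise_for_status()
for item in resp.json().get("items", []):   # assumed response shape
    print(item.get("title"), item.get("geo"))
```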
{"title":"A Web Service Platform dedicated to building mixed reality solutions","authors":"P. Belimpasakis, Petri Selonen, Yu You","doi":"10.1109/ISMAR.2010.5643619","DOIUrl":"https://doi.org/10.1109/ISMAR.2010.5643619","url":null,"abstract":"While many attempts have been done towards creating mixed reality platforms for mobile client devices, there have not been any significant efforts at the server/infrastructure side. We demonstrate our Mixed Reality Web Service Platform (MRS-WS) [2] dedicated to enabling rapid creation of mixed reality solutions, those being either desktop or mobile. Focusing on common interfaces and functions across user generated and commercial geo-content, we provide an appealing developer offering, which we are currently evaluating via a closed set of university partners. Our plan is to gradually expand the developer API access to more partners, before deciding if it is ready for fully public developer access.","PeriodicalId":250608,"journal":{"name":"2010 IEEE International Symposium on Mixed and Augmented Reality","volume":"8 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123316211","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Foldable augmented maps
Pub Date: 2010-11-22 | DOI: 10.1109/ISMAR.2010.5643552
Sandy Martedi, Hideaki Uchiyama, G. Enriquez, H. Saito, Tsutomu Miyashita, Takenori Hara
This paper presents folded surface detection and tracking for augmented maps. For detection, plane detection is applied iteratively to 2D correspondences between an input image and a reference plane, because the folded surface is composed of multiple planes. To compute the exact folding line from the detected planes, the intersection line of the planes is computed from their positional relationship. After detection, each plane is tracked individually with a frame-by-frame descriptor update. For a natural augmentation of the folded surface, we overlay virtual geographic data on each detected plane. The user can interact with the geographic data by finger pointing, because the user's fingertip is also detected during tracking. As usage scenarios, several interactions on the folded surface are introduced. Experimental results show the accuracy and performance of folded surface detection, demonstrating the effectiveness of our approach.
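The iterative plane-detection step can be sketched as repeated RANSAC homography fitting: fit a homography to the remaining correspondences, peel off its inliers as one facet of the fold, and repeat. The sketch below uses OpenCV with assumed thresholds; it illustrates the strategy, not the authors' code.

```python
import numpy as np
import cv2

def detect_planes(ref_pts, img_pts, min_inliers=20, max_planes=4):
    """ref_pts, img_pts: Nx2 float32 arrays of matched keypoints."""
    planes = []
    remaining = np.ones(len(ref_pts), dtype=bool)
    for _ in range(max_planes):
        idx = np.flatnonzero(remaining)
        if len(idx) < min_inliers:
            break
        # Fit one plane (as a homography) to the not-yet-assigned matches.
        H, mask = cv2.findHomography(ref_pts[idx], img_pts[idx], cv2.RANSAC, 3.0)
        if H is None or mask.sum() < min_inliers:
            break
        planes.append(H)
        remaining[idx[mask.ravel() == 1]] = False  # remove this plane's inliers
    return planes
```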
{"title":"Foldable augmented maps","authors":"Sandy Martedi, Hideaki Uchiyama, G. Enriquez, H. Saito, Tsutomu Miyashita, Takenori Hara","doi":"10.1109/ISMAR.2010.5643552","DOIUrl":"https://doi.org/10.1109/ISMAR.2010.5643552","url":null,"abstract":"This paper presents folded surface detection and tracking for augmented maps. For the detection, plane detection is iteratively applied to 2D correspondences between an input image and a reference plane because the folded surface is composed of multiple planes. In order to compute the exact folding line from the detected planes, the intersection line of the planes is computed from their positional relationship. After the detection is done, each plane is individually tracked by frame-by-frame descriptor update. For a natural augmentation on the folded surface, we overlay virtual geographic data on each detected plane. The user can interact with the geographic data by finger pointing because the finger tip of the user is also detected during the tracking. As scenario of use, some interactions on the folded surface are introduced. Experimental results show the accuracy and performance of folded surface detection for evaluating the effectiveness of our approach.","PeriodicalId":250608,"journal":{"name":"2010 IEEE International Symposium on Mixed and Augmented Reality","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127587709","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
KHARMA: An open KML/HTML architecture for mobile augmented reality applications
Pub Date: 2010-11-22 | DOI: 10.1109/ISMAR.2010.5643583
A. Hill, B. MacIntyre, Maribeth Gandy Coleman, Brian Davidson, Hafez Rouzati
Widespread future adoption of augmented reality technology will rely on a broadly accessible standard for authoring and distributing content with, at a minimum, the flexibility and interactivity provided by current web authoring technologies. We introduce KHARMA, an open architecture based on KML for geospatial and relative referencing, combined with HTML, JavaScript, and CSS technologies for content development and delivery. This architecture uses lightweight representations that decouple infrastructure and tracking sources from authoring and content delivery. Our main contribution is a re-conceptualization of KML that turns HTML content formerly confined to balloons into first-class elements in the scene. We introduce the KARML extension, which gives authors increased control over the presentation of HTML content and its spatial relationship to other content.
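As an illustration of the idea (the element layout here is an assumption about KARML's shape, not its documented schema), the snippet below builds a KML placemark whose HTML payload sits as a child element of the placemark itself rather than inside a description balloon:

```python
import xml.etree.ElementTree as ET

kml = ET.Element("kml", xmlns="http://www.opengis.net/kml/2.2")
placemark = ET.SubElement(kml, "Placemark")
ET.SubElement(placemark, "name").text = "Campus sign"
point = ET.SubElement(placemark, "Point")
ET.SubElement(point, "coordinates").text = "-84.396,33.776,0"
# Hypothetical KARML-style element carrying styled HTML anchored in the scene.
html = ET.SubElement(placemark, "html")
html.text = '<div style="color:white">Welcome to the quad</div>'
print(ET.tostring(kml, encoding="unicode"))
```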
{"title":"KHARMA: An open KML/HTML architecture for mobile augmented reality applications","authors":"A. Hill, B. MacIntyre, Maribeth Gandy Coleman, Brian Davidson, Hafez Rouzati","doi":"10.1109/ISMAR.2010.5643583","DOIUrl":"https://doi.org/10.1109/ISMAR.2010.5643583","url":null,"abstract":"Widespread future adoption of augmented reality technology will rely on a broadly accessible standard for authoring and distributing content with, at a minimum, the flexibility and interactivity provided by current web authoring technologies. We introduce KHARMA, an open architecture based on KML for geospatial and relative referencing combined with HTML, JavaScript and CSS technologies for content development and delivery. This architecture uses lightweight representations that decouple infrastructure and tracking sources from authoring and content delivery. Our main contribution is a re-conceptualization of KML that turns HTML content formerly confined to balloons into first-class elements in the scene. We introduce the KARML extension that gives authors increase control over the presentation of HTML content and its spatial relationship to other content.","PeriodicalId":250608,"journal":{"name":"2010 IEEE International Symposium on Mixed and Augmented Reality","volume":"84 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131932175","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Accurate real-time tracking using mutual information
Pub Date: 2010-11-22 | DOI: 10.1109/ISMAR.2010.5643550
Amaury Dame, É. Marchand
In this paper we present a direct tracking approach that uses Mutual Information (MI) as an alignment metric. The proposed approach is robust and real-time, and it gives an accurate estimation of the displacement, which makes it well suited to augmented reality applications. MI is a measure of the quantity of information shared by two signals and has been widely used in medical applications. Although MI enables robust alignment under illumination changes, multi-modality, and partial occlusions, few works have applied it to object tracking in image sequences because of the optimization problems involved.
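For reference, the MI score itself is straightforward to compute from a joint intensity histogram; a tracker like the one proposed maximizes this score over warp parameters, an optimization the sketch below omits. Bin count and intensity range are illustrative choices.

```python
import numpy as np

def mutual_information(patch_a, patch_b, bins=32):
    """MI between two 8-bit grayscale patches of equal size."""
    joint, _, _ = np.histogram2d(patch_a.ravel(), patch_b.ravel(),
                                 bins=bins, range=[[0, 256], [0, 256]])
    pxy = joint / joint.sum()                  # joint intensity distribution
    px = pxy.sum(axis=1, keepdims=True)        # marginal of patch_a
    py = pxy.sum(axis=0, keepdims=True)        # marginal of patch_b
    nz = pxy > 0                               # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```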
{"title":"Accurate real-time tracking using mutual information","authors":"Amaury Dame, É. Marchand","doi":"10.1109/ISMAR.2010.5643550","DOIUrl":"https://doi.org/10.1109/ISMAR.2010.5643550","url":null,"abstract":"In this paper we present a direct tracking approach that uses Mutual Information (MI) as a metric for alignment. The proposed approach is robust, real-time and gives an accurate estimation of the displacement that makes it adapted to augmented reality applications. MI is a measure of the quantity of information shared by signals that has been widely used in medical applications. Since then, and although MI has the ability to perform robust alignment with illumination changes, multi-modality and partial occlusions, few works propose MI-based applications related to object tracking in image sequences due to some optimization problems.","PeriodicalId":250608,"journal":{"name":"2010 IEEE International Symposium on Mixed and Augmented Reality","volume":"123 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131163628","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AR Shooter: An augmented reality shooting game system
Pub Date: 2010-11-22 | DOI: 10.1109/ISMAR.2010.5643620
D. Weng, D. Li, W. Xu, Y. Liu, Y. Wang
Our system is specially designed with augmented reality technologies and has significant commercial potential. It will be installed in a theme park next year.
{"title":"AR Shooter: An augmented reality shooting game system","authors":"D. Weng, D. Li, W. Xu, Y. Liu, Y. Wang","doi":"10.1109/ISMAR.2010.5643620","DOIUrl":"https://doi.org/10.1109/ISMAR.2010.5643620","url":null,"abstract":"Our system is specially designed with augmented reality technologies and has significant commercial potential. It will be installed in a theme park in the next year.","PeriodicalId":250608,"journal":{"name":"2010 IEEE International Symposium on Mixed and Augmented Reality","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115783531","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Large area indoor tracking for industrial augmented reality
Pub Date: 2010-11-22 | DOI: 10.1109/ISMAR.2010.5643601
Fabian Scheer, S. Müller
Precise tracking with minimal setup time, minimal changes to the environment, and acceptable cost that satisfies industrial demands in large factory buildings is still a challenging task for augmented reality (AR) applications. We present a system to determine the pose for monitor-based AR systems in large indoor environments, e.g., 200 × 200 meters and more. An infrared laser detects retroreflective targets and computes a 2D position and orientation based on a preprocessed map of the targets. From this information, the 6D pose of a video camera attached to a servo motor, which is in turn mounted on a mobile cart, is obtained by identifying the transformation between the laser scanner and the several adjustable views of the camera through a calibration method. The adjustable steps of the servo motor are restricted to a discrete number to limit the calibration effort. The positional accuracy of the system is estimated by error propagation and presented.
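The pose chain described above amounts to composing the scanner's 2D map pose with a calibrated scanner-to-camera transform for the current servo step. A minimal sketch with homogeneous matrices follows; matrix and function names are illustrative, not the authors' notation.

```python
import numpy as np

def laser_pose_to_matrix(x, y, theta):
    """2D scanner pose (x, y, theta) -> 4x4 homogeneous transform in the map frame."""
    c, s = np.cos(theta), np.sin(theta)
    T = np.eye(4)
    T[:2, :2] = [[c, -s], [s, c]]   # rotation about the vertical axis
    T[:2, 3] = [x, y]               # translation in the floor plane
    return T

def camera_pose(x, y, theta, T_laser_to_cam):
    """Compose map->laser with the calibrated laser->camera transform
    for the current (discrete) servo step to obtain the 6D camera pose."""
    return laser_pose_to_matrix(x, y, theta) @ T_laser_to_cam
```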
{"title":"Large area indoor tracking for industrial augmented reality","authors":"Fabian Scheer, S. Müller","doi":"10.1109/ISMAR.2010.5643601","DOIUrl":"https://doi.org/10.1109/ISMAR.2010.5643601","url":null,"abstract":"A precise tracking with minimal setup times, minimal changes to the environment and acceptable costs, satisfying industrial demands in large factory buildings is still a challenging task for augmented reality(AR) applications. We present a system to determine the pose for monitor based AR systems in large indoor environments, e.g. 200 – 200 meters and more. An infrared laser detects retroreflective targets and computes a 2D position and orientation based on the information of a preprocessed map of the targets. Based on this information the 6D pose of a video camera attached to a servo motor, that is further mounted on a mobile cart is obtained by identifying the transformation between the laser scanner and the several adjustable views of the camera through a calibration method. The adjustable steps of the servo motor are limited to a discrete number of steps to limit the calibration effort. The positional accuracy of the system is estimated by error propagation and presented.","PeriodicalId":250608,"journal":{"name":"2010 IEEE International Symposium on Mixed and Augmented Reality","volume":"54 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122927335","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}