Haptic simulation of breast cancer palpation: A case study of haptic augmented reality
Seokhee Jeon, B. Knoerlein, M. Harders, Seungmoon Choi
Pub Date: 2010-11-22 | DOI: 10.1109/ISMAR.2010.5643585
Haptic augmented reality (AR) makes it possible to modulate the haptic properties of a real object by providing virtual haptic feedback. We previously developed a haptic AR system in which the stiffness of a real object can be augmented with the aid of a haptic interface. To demonstrate its potential, this paper presents a case study on medical training for breast cancer palpation. A real breast model made of soft silicone is augmented with a virtual tumor rendered inside it. Haptic stimuli for the virtual tumor are generated from a contact dynamics model identified via real measurements, without the need for geometric information about the breast. A subjective evaluation confirmed the realism and fidelity of our palpation system.
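The paper identifies its contact dynamics model from real force/position measurements; the exact model and parameters are not given in the abstract. As a minimal sketch, a viscoelastic (Hunt-Crossley-style) tumor response rendered on top of the real feedback could look like the following, with purely illustrative parameters:

```python
import numpy as np

def tumor_augmentation_force(tool_pos, tumor_center, tumor_radius,
                             k=800.0, n=1.5, b=2.0, tool_vel=None):
    """Extra force for a virtual tumor using a Hunt-Crossley-style
    viscoelastic contact model: F = k*x^n + b*x^n*x_dot.

    Parameters are illustrative, not identified values from the paper.
    """
    offset = tool_pos - tumor_center
    dist = np.linalg.norm(offset)
    x = tumor_radius - dist              # penetration depth into the tumor region
    if x <= 0.0:
        return np.zeros(3)               # tool is outside the augmented region
    normal = offset / max(dist, 1e-9)    # push the tool outward
    x_dot = 0.0
    if tool_vel is not None:
        x_dot = -float(np.dot(tool_vel, normal))  # penetration rate
    magnitude = k * x**n + b * x**n * x_dot
    return max(magnitude, 0.0) * normal
```

This augmentation force would simply be added to whatever force the user already feels from the real silicone model through the haptic interface.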
{"title":"Haptic simulation of breast cancer palpation: A case study of haptic augmented reality","authors":"Seokhee Jeon, B. Knoerlein, M. Harders, Seungmoon Choi","doi":"10.1109/ISMAR.2010.5643585","DOIUrl":"https://doi.org/10.1109/ISMAR.2010.5643585","url":null,"abstract":"Haptic augmented reality (AR) allows to modulate the haptic properties of a real object by providing virtual haptic feedback. We previously developed a haptic AR system wherein the stiffness of a real object can be augmented with the aid of a haptic interface. To demonstrate its potential, this paper presents a case study for medical training of breast cancer palpation. A real breast model made of soft silicone is augmented with a virtual tumor rendered inside. Haptic stimuli for the virtual tumor are generated based on a contact dynamics model identified via real measurements, without the need of geometric information on the breast. A subjective evaluation confirmed the realism and fidelity of our palpation system.","PeriodicalId":250608,"journal":{"name":"2010 IEEE International Symposium on Mixed and Augmented Reality","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126443264","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AR-based visibility evaluation for preserving landscapes of historical buildings
N. Yabuki, Kyoko Miyashita, T. Fukuda
Pub Date: 2010-11-22 | DOI: 10.1109/ISMAR.2010.5643607
Tall structures built behind an aesthetically or historically significant building can spoil the surrounding landscape. To avoid such situations, public agencies must regulate the height of buildings and other structures near the landscape target. This research proposes a new Augmented Reality (AR) method for checking the visibility of tall future structures. In this method, a number of virtual rectangular objects with a scale are placed on a grid over a 3D geographical model, and these virtual rulers are overlaid on the actual landscape from multiple viewpoints using AR. The user measures the maximum skyline-preserving height for each rectangular object at a grid point. Using the measured data, government or public agencies can establish appropriate height regulations for all areas surrounding the target structures. To verify the proposed method, a system was developed using ARToolKit and applied to a scenic building. The system's performance was checked, and the errors in the obtained data were evaluated. In conclusion, the proposed method was found to be feasible and effective.
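The paper measures the maximum skyline-preserving height interactively with AR rulers. Under the simplifying assumptions of flat terrain and a single sight line grazing the protected rooftop, the same quantity also has a closed-form geometric approximation, sketched below (function name and arguments are ours, for illustration only):

```python
import math

def max_skyline_preserving_height(viewpoint, rooftop, grid_point):
    """Height limit at grid_point so a structure there stays below the
    sight line from the viewpoint over the protected building's rooftop.

    Each argument is (x, y, z); assumes flat terrain and that grid_point
    lies behind the rooftop as seen from the viewpoint.
    """
    dx_roof = math.hypot(rooftop[0] - viewpoint[0], rooftop[1] - viewpoint[1])
    dx_grid = math.hypot(grid_point[0] - viewpoint[0], grid_point[1] - viewpoint[1])
    # Similar triangles: the sight line rises (roof_z - eye_z) per dx_roof
    # of horizontal distance; extrapolate it out to the grid point.
    slope = (rooftop[2] - viewpoint[2]) / dx_roof
    return viewpoint[2] + slope * dx_grid

# Example: eye at 1.6 m, a 20 m roof 100 m away, grid point 250 m away
# -> anything below ~47.6 m at that grid point stays hidden behind the roof.
print(max_skyline_preserving_height((0, 0, 1.6), (100, 0, 20.0), (250, 0, 0)))
```

The AR-based measurement in the paper avoids these simplifications, since real terrain and complex rooflines are handled by direct visual inspection.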
{"title":"AR-based visibility evaluation for preserving landscapes of historical buildings","authors":"N. Yabuki, Kyoko Miyashita, T. Fukuda","doi":"10.1109/ISMAR.2010.5643607","DOIUrl":"https://doi.org/10.1109/ISMAR.2010.5643607","url":null,"abstract":"Building tall structures behind an aesthetic and historical building tends to destroy the good landscape. To avoid such situations, public agencies must regulate height of buildings and other structures near the landscape target. In order to check the visibility of portions of high, future structures, in this research, a new method using Augmented Reality (AR) was proposed. In this method, a number of virtual rectangular objects with a scale are located on the grid of 3D geographical model. And then, the virtual rulers are shown in an overlapping manner with the actual landscape from multiple viewpoints using the AR technology. The user measures the maximum skyline-preserving height for each rectangular object at a grid point. Using the measured data, the government or public agencies can establish appropriate height regulations for all surrounding areas of the target structures. To verify the proposed method, a system was developed deploying AR Toolkit and was applied to a scenic building. The performance of the system was checked and then, the errors of the obtained data were evaluated. In conclusion, the proposed method was evaluated feasible and effective.","PeriodicalId":250608,"journal":{"name":"2010 IEEE International Symposium on Mixed and Augmented Reality","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131227464","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
3DOF tracking accuracy improvement for outdoor Augmented Reality
Joonsuk Park, Jun Park
Pub Date: 2010-11-22 | DOI: 10.1109/ISMAR.2010.5643598
Outdoor Augmented Reality (AR) has recently gained popularity due to its potential for location-based mobile services. However, most commercially available Global Positioning System (GPS) receivers, except for expensive high-end models, do not provide location information accurate enough to display practically meaningful location-based content. In this paper, we present a computer-vision-based method for improving the user's two-dimensional location and one-dimensional orientation, whose initial values are obtained from a GPS receiver and a digital compass. Our method utilizes the corner positions of buildings in the map and the vertical edges of the buildings in the captured images. We applied anisotropic diffusion to filter noise while preserving edges, and dual vertical-edge filters on grayscale and saturation images. Our method is suitable for mobile services in urban environments, where tall buildings degrade GPS signals. On average, our method improved accuracy by 15.0 meters in position and 2.2 degrees in orientation.
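Anisotropic diffusion is a standard edge-preserving smoother; a compact Perona-Malik-style iteration (a generic formulation, not necessarily the authors' exact parameters or discretization) looks like this:

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=15, kappa=30.0, lam=0.2):
    """Perona-Malik diffusion: smooth within regions, preserve edges.

    img: 2D float array; kappa controls edge sensitivity, lam the step size.
    """
    img = img.astype(np.float64).copy()
    for _ in range(n_iter):
        # Gradients toward the four neighbors (np.roll wraps at the
        # border, which is acceptable for a sketch).
        dN = np.roll(img, 1, axis=0) - img
        dS = np.roll(img, -1, axis=0) - img
        dE = np.roll(img, -1, axis=1) - img
        dW = np.roll(img, 1, axis=1) - img
        # Conduction coefficients: small across strong edges, so edges
        # diffuse slowly while homogeneous regions are smoothed.
        g = lambda d: np.exp(-(d / kappa) ** 2)
        img += lam * (g(dN) * dN + g(dS) * dS + g(dE) * dE + g(dW) * dW)
    return img
```

The cleaned image can then be fed to the vertical-edge filters, which benefit from noise suppression that does not blur the building edges being matched.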
{"title":"3DOF tracking accuracy improvement for outdoor Augmented Reality","authors":"Joonsuk Park, Jun Park","doi":"10.1109/ISMAR.2010.5643598","DOIUrl":"https://doi.org/10.1109/ISMAR.2010.5643598","url":null,"abstract":"Outdoor Augmented Reality (AR) gained popularity recently due to its potential for location based mobile services. However, most commercially available Global Positioning Systems (GPS), except for the expensive high-end models, do not provide accurate location information that is enough to be used for displaying practically meaningful location based information. In this paper, we present a computer vision based method for improving user's two dimensional location and one-dimensional orientation, the initial values of which are obtained from a GPS and a digital compass. Our method utilizes corner positions of buildings in the map and the vertical edges of the buildings in the captured images. We applied anisotropic diffusion in order to filter noise and preserve edges, and dual vertical edge filters on gray and saturation images. Our method is suitable for mobile services in urban environments where tall buildings degrade GPS signals. In average, our method improved 15.0 meters in position and 2.2 degrees in orientation.","PeriodicalId":250608,"journal":{"name":"2010 IEEE International Symposium on Mixed and Augmented Reality","volume":"38 8","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114022667","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The City of Sights: Design, construction, and measurement of an Augmented Reality stage set
Lukas Gruber, Steffen Gauglitz, Jonathan Ventura, S. Zollmann, Manuel J. Huber, M. Schlegel, G. Klinker, D. Schmalstieg, Tobias Höllerer
Pub Date: 2010-11-22 | DOI: 10.1109/ISMAR.2010.5643564
We describe the design and implementation of a physical and virtual model of an imaginary urban scene—the “City of Sights”—that can serve as a backdrop or “stage” for a variety of Augmented Reality (AR) research. We argue that the AR research community would benefit from such a standard model dataset, which can be used for the evaluation of AR topics such as tracking systems, modeling, spatial AR, rendering tests, collaborative AR, and user interface design. By openly sharing the digital blueprints and assembly instructions for our models, we allow the proposed set to be physically replicated by anyone and permit customization and experimental changes to the stage design, enabling comprehensive exploration of algorithms and methods. Furthermore, we provide an accompanying rich dataset consisting of video sequences under varying conditions with ground-truth camera pose. We employed three different ground-truth acquisition methods to support a broad range of use cases. The goal of our design is to enable and improve the replicability and evaluation of future augmented reality research.
{"title":"The City of Sights: Design, construction, and measurement of an Augmented Reality stage set","authors":"Lukas Gruber, Steffen Gauglitz, Jonathan Ventura, S. Zollmann, Manuel J. Huber, M. Schlegel, G. Klinker, D. Schmalstieg, Tobias Höllerer","doi":"10.1109/ISMAR.2010.5643564","DOIUrl":"https://doi.org/10.1109/ISMAR.2010.5643564","url":null,"abstract":"We describe the design and implementation of a physical and virtual model of an imaginary urban scene—the “City of Sights”— that can serve as a backdrop or “stage” for a variety of Augmented Reality (AR) research. We argue that the AR research community would benefit from such a standard model dataset which can be used for evaluation of such AR topics as tracking systems, modeling, spatial AR, rendering tests, collaborative AR and user interface design. By openly sharing the digital blueprints and assembly instructions for our models, we allow the proposed set to be physically replicable by anyone and permit customization and experimental changes to the stage design which enable comprehensive exploration of algorithms and methods. Furthermore we provide an accompanying rich dataset consisting of video sequences under varying conditions with ground truth camera pose. We employed three different ground truth acquisition methods to support a broad range of use cases. The goal of our design is to enable and improve the replicability and evaluation of future augmented reality research.","PeriodicalId":250608,"journal":{"name":"2010 IEEE International Symposium on Mixed and Augmented Reality","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132674603","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Foreground and shadow occlusion handling for outdoor augmented reality
B. Lu, Tetsuya Kakuta, Rei Kawakami, Takeshi Oishi, K. Ikeuchi
Pub Date: 2010-11-22 | DOI: 10.1109/ISMAR.2010.5643558
Occlusion handling in augmented reality (AR) applications is challenging: virtual objects must be synthesized correctly into the real scene with respect to existing foregrounds and shadows. Outdoor environments make the task even more difficult due to unpredictable illumination changes. This paper proposes novel outdoor illumination constraints for resolving the foreground occlusion problem in outdoor environments. The constraints can also be integrated into a probabilistic model of multiple cues for better segmentation of the foreground. In addition, we introduce an effective method for resolving the shadow occlusion problem by using shadow detection and recasting with a spherical vision camera. We have applied the system in our digital cultural heritage project, Virtual Asuka (VA), and verified its effectiveness.
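The abstract does not spell out the probabilistic model, but a generic way to fuse independent per-pixel cue likelihoods (hypothetical cue maps and weights, for illustration) into a foreground posterior would be:

```python
import numpy as np

def foreground_posterior(cue_fg, cue_bg, weights):
    """Fuse per-pixel likelihood maps from several cues (e.g. color,
    shadow, illumination constraints) into a foreground posterior.

    cue_fg, cue_bg: lists of HxW arrays giving P(pixel | foreground) and
    P(pixel | background) for each cue; weights: per-cue confidences.
    Cues are treated as independent, so weighted log-likelihoods add.
    """
    log_fg = sum(w * np.log(p + 1e-9) for w, p in zip(weights, cue_fg))
    log_bg = sum(w * np.log(p + 1e-9) for w, p in zip(weights, cue_bg))
    # Posterior under equal priors; threshold at 0.5 for a hard mask.
    return 1.0 / (1.0 + np.exp(log_bg - log_fg))
```

In this sketch, the paper's illumination constraints would enter as one of the cue likelihood maps, down-weighting pixels whose appearance change is explained by illumination rather than by a foreground object.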
{"title":"Foreground and shadow occlusion handling for outdoor augmented reality","authors":"B. Lu, Tetsuya Kakuta, Rei Kawakami, Takeshi Oishi, K. Ikeuchi","doi":"10.1109/ISMAR.2010.5643558","DOIUrl":"https://doi.org/10.1109/ISMAR.2010.5643558","url":null,"abstract":"Occlusion handling in augmented reality (AR) applications is challenging in synthesizing virtual objects correctly into the real scene with respect to existing foregrounds and shadows. Furthermore, outdoor environment makes the task more difficult due to the unpredictable illumination changes. This paper proposes novel outdoor illumination constraints for resolving the foreground occlusion problem in outdoor environment. The constraints can be also integrated into a probabilistic model of multiple cues for a better segmentation of the foreground. In addition, we introduce an effective method to resolve the shadow occlusion problem by using shadow detection and recasting with a spherical vision camera. We have applied the system in our digital cultural heritage project named Virtual Asuka (VA) and verified the effectiveness of the system.","PeriodicalId":250608,"journal":{"name":"2010 IEEE International Symposium on Mixed and Augmented Reality","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128975529","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Video stabilization to a global 3D frame of reference by fusing orientation sensor and image alignment data
Oscar Nestares, Yoram Gat, H. Haussecker, I. Kozintsev
Pub Date: 2010-11-22 | DOI: 10.1109/ISMAR.2010.5643595
Estimating the 3D orientation of the camera in a video sequence within a global frame of reference is useful for video stabilization when displaying the video in a virtual 3D environment, as well as for accurate navigation and other applications. This task requires input from orientation sensors attached to the camera to provide absolute 3D orientation in a geographical frame of reference. However, high-frequency noise in the sensor readings makes it impossible to achieve the accurate orientation estimates required for visually stable presentation of video sequences acquired with a camera subject to jitter, such as a handheld or vehicle-mounted camera. On the other hand, image alignment has proven successful for image stabilization, providing accurate frame-to-frame orientation estimates, but it drifts over time due to error and bias accumulation and lacks absolute orientation. In this paper we propose a practical method for generating high-accuracy estimates of the 3D orientation of the camera within a global frame of reference by fusing orientation estimates from an efficient image-based alignment method with the estimates from an orientation sensor, overcoming the limitations of the component methods.
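A common pattern for fusing drift-free but noisy absolute readings with smooth but drifting incremental estimates is a complementary filter. The sketch below is a deliberately simplified one-axis (yaw-only) illustration of that idea, not the authors' actual 3D estimator:

```python
def fuse_orientation(yaw_sensor, yaw_deltas, alpha=0.02):
    """Complementary filter on a single yaw angle (degrees).

    yaw_sensor: absolute but noisy readings (e.g. from the orientation sensor)
    yaw_deltas: accurate frame-to-frame rotations from image alignment
    alpha: how strongly each step is pulled toward the absolute reading;
           a small alpha trusts image alignment short-term and the
           sensor long-term, suppressing both jitter and drift.
    """
    yaw = yaw_sensor[0]
    fused = [yaw]
    for abs_reading, delta in zip(yaw_sensor[1:], yaw_deltas):
        yaw += delta                                         # integrate image-based motion
        yaw += alpha * ((abs_reading - yaw + 180) % 360 - 180)  # correct drift (wrapped)
        fused.append(yaw)
    return fused
```

The full 3D problem requires rotation composition (e.g. quaternions) rather than per-axis angles, but the division of labor between the two estimate sources is the same.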
{"title":"Video stabilization to a global 3D frame of reference by fusing orientation sensor and image alignment data","authors":"Oscar Nestares, Yoram Gat, H. Haussecker, I. Kozintsev","doi":"10.1109/ISMAR.2010.5643595","DOIUrl":"https://doi.org/10.1109/ISMAR.2010.5643595","url":null,"abstract":"Estimating the 3D orientation of the camera in a video sequence within a global frame of reference is useful for video stabilization when displaying the video in a virtual 3D environment, as well as for accurate navigation and other applications. This task requires the input of orientation sensors attached to the camera to provide absolute 3D orientation in a geographical frame of reference. However, high-frequency noise in the sensor readings makes it impossible to achieve accurate orientation estimates required for visually stable presentation of video sequences that were acquired with a camera subject to jitter, such as a handheld camera or a vehicle mounted camera. On the other hand, image alignment has proven successful for image stabilization, providing accurate frame-to-frame orientation estimates but drifting over time due to error and bias accumulation and lacking absolute orientation. In this paper we propose a practical method for generating high accuracy estimates of the 3D orientation of the camera within a global frame of reference by fusing orientation estimates from an efficient image-based alignment method, and the estimates from an orientation sensor, overcoming the limitations of the component methods.","PeriodicalId":250608,"journal":{"name":"2010 IEEE International Symposium on Mixed and Augmented Reality","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130270431","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Designing and comparing two-handed gestures to confirm links between user controlled objects
P. Maier, M. Tönnis, G. Klinker
Pub Date: 2010-11-22 | DOI: 10.1109/ISMAR.2010.5643592
Systems using two-handed spatial manipulation techniques also require strategies to enable system control tasks. These strategies make it possible to interact with the system comfortably while controlling two hand-held objects simultaneously.
{"title":"Designing and comparing two-handed gestures to confirm links between user controlled objects","authors":"P. Maier, M. Tönnis, G. Klinker","doi":"10.1109/ISMAR.2010.5643592","DOIUrl":"https://doi.org/10.1109/ISMAR.2010.5643592","url":null,"abstract":"Systems using two-handed spatial manipulation techniques also require strategies to enable system control tasks. These strategies make it possible to interact with the system comfortably while controlling two hand-held objects simultaneously.","PeriodicalId":250608,"journal":{"name":"2010 IEEE International Symposium on Mixed and Augmented Reality","volume":"117 16","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131913863","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Interactive modelling for AR applications
J. Bastian, Ben Ward, R. Hill, A. Hengel, A. Dick
Pub Date: 2010-11-22 | DOI: 10.1109/ISMAR.2010.5643570
We present a method for estimating the 3D shape of an object from a sequence of images captured by a hand-held device. The method is well suited to augmented reality applications in that minimal user interaction is required, and the models generated are of an appropriate form. The method proceeds by segmenting the object in every image as it is captured and using the calculated silhouette to update the current shape estimate. In contrast to previous silhouette-based modelling approaches, however, the segmentation process is informed by a 3D prior based on the previous shape estimate. A voting scheme is also introduced to compensate for the inevitable noise in the camera position estimates. The combination of the voting scheme with the closed-loop segmentation process provides a robust and flexible shape estimation method. We demonstrate the approach on a number of scenes where segmentation without a 3D prior would be challenging.
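Classic silhouette carving deletes a voxel as soon as a single silhouette misses it, which is brittle under pose noise; a voting variant in the spirit of the paper can be sketched as follows (the project() camera model is a hypothetical stand-in for calibrated projection):

```python
import numpy as np

def carve_with_votes(voxels, silhouettes, projections, min_ratio=0.9):
    """Keep voxels whose projections fall inside most silhouettes.

    voxels: Nx3 voxel centers; silhouettes: list of HxW boolean masks;
    projections: list of functions mapping Nx3 points to Nx2 (u, v)
    pixel coordinates, one per calibrated view. min_ratio < 1 tolerates
    noisy camera poses, unlike a strict intersection over all views.
    """
    votes = np.zeros(len(voxels), dtype=int)
    for mask, project in zip(silhouettes, projections):
        uv = np.round(project(voxels)).astype(int)
        h, w = mask.shape
        inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
        hit = np.zeros(len(voxels), dtype=bool)
        hit[inside] = mask[uv[inside, 1], uv[inside, 0]]
        votes += hit
    return voxels[votes >= min_ratio * len(silhouettes)]
```

The surviving voxel set then serves as the 3D prior that informs segmentation of the next captured frame, closing the loop the abstract describes.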
{"title":"Interactive modelling for AR applications","authors":"J. Bastian, Ben Ward, R. Hill, A. Hengel, A. Dick","doi":"10.1109/ISMAR.2010.5643570","DOIUrl":"https://doi.org/10.1109/ISMAR.2010.5643570","url":null,"abstract":"We present a method for estimating the 3D shape of an object from a sequence of images captured by a hand-held device. The method is well suited to augmented reality applications in that minimal user interaction is required, and the models generated are of an appropriate form. The method proceeds by segmenting the object in every image as it is captured and using the calculated silhouette to update the current shape estimate. In contrast to previous silhouettebased modelling approaches, however, the segmentation process is informed by a 3D prior based on the previous shape estimate. A voting scheme is also introduced in order to compensate for the inevitable noise in the camera position estimates. The combination of the voting scheme with the closed-loop segmentation process provides a robust and flexible shape estimation method. We demonstrate the approach on a number of scenes where segmentation without a 3D prior would be challenging.","PeriodicalId":250608,"journal":{"name":"2010 IEEE International Symposium on Mixed and Augmented Reality","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133470841","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Camera pose navigation using Augmented Reality
Jun Shingu, E. Rieffel, Don Kimber, Jim Vaughan, Pernilla Qvarfordt, K. Tuite
Pub Date: 2010-11-22 | DOI: 10.1109/ISMAR.2010.5643602
We propose an Augmented Reality (AR) system that helps users take a picture from a designated pose, such as the position and camera angle of an earlier photo. Repeat photography is frequently used to observe and document changes in an object. Our system uses AR technology to estimate camera poses in real time. When a user takes a photo, the camera pose is saved as a “view bookmark”. To support a user in taking a repeat photo, two simple graphics are rendered in an AR viewer on the camera's screen to guide the user to this bookmarked view. The system then uses image adjustment techniques to create an image based on the user's repeat photo that is even closer to the original.
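Guiding a user back to a bookmarked view amounts to comparing the current camera pose with the saved one. A minimal sketch (poses assumed to come from the AR tracker; tolerances are illustrative) is:

```python
import numpy as np

def pose_guidance(R_cur, t_cur, R_mark, t_mark,
                  pos_tol=0.05, ang_tol=2.0):
    """Compare the current camera pose to a bookmarked one.

    R_*: 3x3 rotation matrices; t_*: camera positions in world coordinates.
    Returns the translation the user still has to make, the residual
    rotation angle in degrees, and whether both are within tolerance.
    """
    move = t_mark - t_cur                       # direction to move the camera
    R_err = R_mark @ R_cur.T                    # residual rotation
    cos_a = np.clip((np.trace(R_err) - 1.0) / 2.0, -1.0, 1.0)
    angle = np.degrees(np.arccos(cos_a))        # angle of residual rotation
    ok = np.linalg.norm(move) < pos_tol and angle < ang_tol
    return move, angle, ok
```

In the system described, quantities like these would drive the two guidance graphics rendered in the AR viewer, with the final fine alignment left to image adjustment after capture.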
{"title":"Camera pose navigation using Augmented Reality","authors":"Jun Shingu, E. Rieffel, Don Kimber, Jim Vaughan, Pernilla Qvarfordt, K. Tuite","doi":"10.1109/ISMAR.2010.5643602","DOIUrl":"https://doi.org/10.1109/ISMAR.2010.5643602","url":null,"abstract":"We propose an Augmented Reality (AR) system that helps users take a picture from a designated pose, such as the position and camera angle of an earlier photo. Repeat photography is frequently used to observe and document changes in an object. Our system uses AR technology to estimate camera poses in real time. When a user takes a photo, the camera pose is saved as a “view bookmark”. To support a user in taking a repeat photo, two simple graphics are rendered in an AR viewer on the camera's screen to guide the user to this bookmarked view. The system then uses image adjustment techniques to create an image based on the user's repeat photo that is even closer to the original.","PeriodicalId":250608,"journal":{"name":"2010 IEEE International Symposium on Mixed and Augmented Reality","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133676346","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The importance of eye-contact for collaboration in AR systems
E. Prytz, Susanna Nilsson, Arne Jönsson
Pub Date: 2010-11-22 | DOI: 10.1109/ISMAR.2010.5643559
Eye contact is believed to be an important factor in normal human communication, and as a result a head-mounted display (HMD) is often seen as something intrusive and limiting. This can be especially problematic when AR is used in a collaborative setting. The study presented in this paper aims to investigate the effects an HMD-based AR system can have on eye contact behaviour between participants in a collaborative task and thus, by extension, the effects of the HMD on collaboration itself. The focus of the study is on task-oriented collaboration between professionals. The participants worked through three different scenarios, alternating between HMDs and regular paper maps, with the purpose of managing the crisis response to a simulated major forest fire. Correlations between eye contact between participants and questionnaire items concerning team- and taskwork were analysed, indicating that, for the paper map condition, a high amount of eye contact is associated with low confidence and trust in the artefacts used (i.e., paper map and symbols). The amount of eye contact in both conditions was very low, though significantly higher in the conditions without HMDs. However, confidence and trust in the artefacts was generally rated significantly higher with HMDs than without. In conclusion, the decrease in eye contact with HMDs does not seem to have a direct effect on collaboration in a professional, task-oriented context. This is contrary to popular assumptions, and the results are relevant for future design choices for AR systems using HMDs.
{"title":"The importance of eye-contact for collaboration in AR systems","authors":"E. Prytz, Susanna Nilsson, Arne Jönsson","doi":"10.1109/ISMAR.2010.5643559","DOIUrl":"https://doi.org/10.1109/ISMAR.2010.5643559","url":null,"abstract":"Eye contact is believed to be an important factor in normal human communication and as a result of this a head mounted display (HMD) is often seen as something intrusive and limiting. This can be especially problematic when AR is used in a collaborative setting. The study presented in this paper aims to investigate the effects an HMD-based AR system can have on eye contact behaviour between participants in a collaborative task and thus, in extension, the effects of the HMD on collaboration itself. The focus of the study is on task-oriented collaboration between professionals. The participants worked through three different scenarios alternating between HMDs and regular paper maps with the purpose of managing the crisis response to a simulated major forest fire. Correlations between eye contact between participants and questionnaire items concerning team- and taskwork were analysed, indicating that, for the paper map condition, a high amount of eye contact is associated with low confidence and trust in the artefacts used (i.e. paper map and symbols). The amount of eye-contact in both conditions was very low. It was significantly higher for conditions without HMDs. However, the confidence and trust in the artefacts was generally rated significantly higher with HMDs than without. In conclusion, the decrease in eye contact with HMDs does not seem to have a direct effect on the collaboration in a professional, task-oriented context. This is contrary to popular assumptions and the results are relevant for future design choices for AR systems using HMDs.","PeriodicalId":250608,"journal":{"name":"2010 IEEE International Symposium on Mixed and Augmented Reality","volume":"358 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122811610","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}