
Latest publications from the 2019 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)

Tangible and Visible 3D Object Reconstruction in Augmented Reality
Pub Date : 2019-10-01 DOI: 10.1109/ISMAR.2019.00-30
Yinchen Wu, Liwei Chan, Wen-Chieh Lin
Many crucial applications in filmmaking, game design, education, cultural preservation, and other fields involve the modeling, authoring, or editing of 3D objects and scenes. The two major methods of creating 3D models are 1) modeling, using computer software, and 2) reconstruction, generally using high-quality 3D scanners. Scanners of sufficient quality to support the latter method remain unaffordable to the general public. Since the emergence of consumer-grade RGBD cameras, there has been growing interest in 3D reconstruction systems using depth cameras. However, most such systems are not user-friendly, and require intense effort and practice to obtain good reconstruction results. In this paper, we propose to increase the accessibility of depth-camera-based 3D reconstruction by assisting its users with augmented reality (AR) technology. Specifically, the proposed approach allows users to rotate and move a target object freely with their hands and see the object overlapped with its partially reconstructed model during the reconstruction process. As well as being more intuitive than conventional reconstruction systems, our system provides useful hints toward complete 3D reconstruction of an object, including the best capturing range; reminders to move and rotate the object at a steady speed; and which model regions are complex enough to require zooming in. We evaluated our system via a user study that compared its performance against three other state-of-the-art approaches, and found that our system outperforms them. Specifically, the participants rated it highest in usability, understandability, and model satisfaction.
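The core of any depth-camera reconstruction pipeline is back-projecting each depth pixel into a 3D point using the camera intrinsics. A minimal sketch of this step (not the authors' code; the intrinsic values below are made up for illustration):

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters) to a 3D point cloud using the
    pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop invalid (zero-depth) pixels

# toy 2x2 depth map with one invalid pixel; hypothetical intrinsics
depth = np.array([[1.0, 2.0], [0.0, 1.0]])
pts = depth_to_points(depth, fx=500.0, fy=500.0, cx=0.5, cy=0.5)
print(pts.shape)  # (3, 3): three valid points
```

Running this step per frame, and registering successive clouds against the growing model, is what lets the AR overlay show the in-progress reconstruction on top of the physical object.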
Citations: 4
Annotation vs. Virtual Tutor: Comparative Analysis on the Effectiveness of Visual Instructions in Immersive Virtual Reality
Pub Date : 2019-10-01 DOI: 10.1109/ISMAR.2019.00030
Hyeopwoo Lee, Hyejin Kim, D. Monteiro, Youngnoh Goh, Daseong Han, Hai-Ning Liang, H. Yang, Jinki Jung
In this paper we present a comparative study of visual instructions in Immersive Virtual Reality (IVR): annotation (ANN), which employs 3D texts and objects for instructions, and virtual tutor (TUT), which demonstrates a task with a 3D character. The comparison is based on three tasks defined by the types of a unit instruction: maze escape (ME), stretching exercise (SE), and crane manipulation (CM). We conducted an automated evaluation of users' memory recall performance (recall time, accuracy, and error) by mapping each sequence of user behaviors and events to a string. Results revealed that the ANN group performed significantly more accurately (1.3 times) in ME and faster (1.64 times) in SE than the TUT group, while no statistically significant difference was found in CM. Interestingly, although ANN showed statistically shorter execution times, the recall-time pattern of the TUT group converged steeply after the initial trial. These results can inform designers of IVR about which types of visual instruction are best suited to different task purposes.
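Encoding behavior/event sequences as strings, as this evaluation does, allows recall error to be scored with standard string metrics. One plausible choice is Levenshtein edit distance; this is a sketch, not necessarily the paper's exact scoring function:

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two strings."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,        # deletion
                                     dp[j - 1] + 1,    # insertion
                                     prev + (ca != cb))  # substitution
    return dp[-1]

# each letter stands for one taught action; the user skipped step C
taught = "ABCD"
recalled = "ABD"
err = levenshtein(taught, recalled)
acc = 1 - err / len(taught)
print(err, acc)  # 1 0.75
```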
Citations: 17
Towards SLAM-Based Outdoor Localization using Poor GPS and 2.5D Building Models
Pub Date : 2019-10-01 DOI: 10.1109/ISMAR.2019.00016
Ruyu Liu, Jianhua Zhang, Shengyong Chen, Clemens Arth
In this paper, we address the topic of outdoor localization and tracking using monocular camera setups with poor GPS priors. We leverage 2.5D building maps, which are freely available from open-source databases such as OpenStreetMap. The main contributions of our work are a fast initialization method and a non-linear optimization scheme. The initialization upgrades a visual SLAM reconstruction with an absolute scale. The non-linear optimization uses the 2.5D building model footprint, which further improves the tracking accuracy and the scale estimation. A pose optimization step relates the vision-based camera pose estimation from SLAM to the position information received through GPS, in order to fix the common problem of drift. We evaluate our approach on a set of challenging scenarios. The experimental results show that our approach achieves improved accuracy and robustness with an advantage in run-time over previous setups.
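A common way to upgrade a monocular SLAM reconstruction with absolute scale from noisy GPS, as the initialization step above does, is a least-squares fit between corresponding displacement vectors. This is a simplified sketch of that idea only; the paper's actual method also exploits the 2.5D building footprints:

```python
import numpy as np

def estimate_scale(slam_xy, gps_xy):
    """Least-squares scale s minimizing ||s * d_slam - d_gps||^2 over
    consecutive displacement vectors (assumes frames are time-aligned)."""
    d_slam = np.diff(slam_xy, axis=0)
    d_gps = np.diff(gps_xy, axis=0)
    return float(np.sum(d_slam * d_gps) / np.sum(d_slam * d_slam))

# synthetic trajectory: SLAM positions are the metric GPS path shrunk 4x
gps = np.array([[0.0, 0.0], [4.0, 0.0], [4.0, 8.0]])
slam = gps / 4.0
print(estimate_scale(slam, gps))  # 4.0
```

In practice the GPS residuals would enter the non-linear optimization as soft constraints alongside the building-footprint term, which is what counteracts long-term drift.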
Citations: 19
Coherent Rendering of Virtual Smile Previews with Fast Neural Style Transfer
Pub Date : 2019-10-01 DOI: 10.1109/ISMAR.2019.00-25
Valentin Vasiliu, Gábor Sörös
Coherent rendering in augmented reality deals with synthesizing virtual content that seamlessly blends in with the real content. Unfortunately, capturing or modeling every real aspect in the virtual rendering process is often unfeasible or too expensive. We present a post-processing method that improves the look of rendered overlays in a dental virtual try-on application. We combine the original frame and the default rendered frame in an autoencoder neural network in order to obtain a more natural output, inspired by artistic style transfer research. Specifically, we apply the original frame as style on the rendered frame as content, repeating the process with each new pair of frames. Our method requires only a single forward pass, our shallow architecture ensures fast execution, and our internal feedback loop inherently enforces temporal consistency.
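The per-frame feedback loop described above can be sketched schematically. Here `net` is a stand-in lambda for the shallow autoencoder (purely hypothetical, to show the data flow: original frame as style, rendered frame as content, previous output fed back for temporal consistency):

```python
import numpy as np

def stylize(original, rendered, prev_out, net):
    """One step of the feedback loop: the network sees the original frame,
    the rendered overlay, and its own previous output."""
    x = np.concatenate([original, rendered, prev_out], axis=-1)
    return net(x)

# stand-in "network": a channel-group mean over the 9 stacked channels
# (the real system would be a trained shallow convolutional autoencoder)
net = lambda x: x.reshape(*x.shape[:2], 3, 3).mean(axis=2)

h, w = 4, 4
prev = np.zeros((h, w, 3))
for _ in range(3):  # process a short frame sequence
    original = np.random.rand(h, w, 3)
    rendered = np.random.rand(h, w, 3)
    prev = stylize(original, rendered, prev, net)
print(prev.shape)  # (4, 4, 3)
```

Because each output depends on the previous one, abrupt frame-to-frame changes are damped, which is the "internal feedback loop" enforcing temporal consistency.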
Citations: 3
Pointing and Selection Methods for Text Entry in Augmented Reality Head Mounted Displays
Pub Date : 2019-10-01 DOI: 10.1109/ISMAR.2019.00026
Wenge Xu, Hai-Ning Liang, Anqi He, Zifan Wang
Augmented reality (AR) is on the rise, with consumer-level head-mounted displays (HMDs) becoming available in recent years. Text entry is an essential activity for AR systems, but it is still relatively underexplored. Although it is possible to use a physical keyboard to enter text in AR systems, it is not optimal because it confines the user to a stationary position within indoor environments. Instead, a virtual keyboard seems more suitable. Text entry via virtual keyboards requires a pointing method and a selection mechanism. Although various combinations of pointing and selection mechanisms exist, it is not well understood how well each combination supports fast text entry with low error rates and positive usability (regarding workload, user experience, motion sickness, and immersion). In this research, we performed an empirical study to investigate user preference and text entry performance for four pointing methods (Controller, Head, Hand, and Hybrid) in combination with two input mechanisms (Swype and Tap). Our research represents a first systematic investigation of these eight possible combinations. Our results show that Controller outperforms all the device-free methods in both text entry performance and user experience. However, device-free pointing methods can be usable depending on task requirements and users' preferences and physical condition.
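Text entry speed in studies like this is conventionally reported in words per minute (WPM), where one "word" is defined as five characters including spaces; a quick sketch of the standard formula (the paper's exact metric definitions are not given in the abstract):

```python
def wpm(transcribed, seconds):
    """Words per minute for a transcribed phrase: (chars - 1) keystrokes
    per second, scaled to minutes, with 5 characters per 'word'."""
    return (len(transcribed) - 1) / seconds * 60 / 5

# a 19-character phrase entered in 12 seconds
print(wpm("the quick brown fox", 12.0))  # 18.0
```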
Citations: 39
Is Any Room Really OK? The Effect of Room Size and Furniture on Presence, Narrative Engagement, and Usability During a Space-Adaptive Augmented Reality Game
Pub Date : 2019-10-01 DOI: 10.1109/ISMAR.2019.00-11
Jae-eun Shin, Hayun Kim, Callum Parker, Hyung-il Kim, Seoyoung Oh, Woontack Woo
One of the main challenges in creating narrative-driven Augmented Reality (AR) content for Head Mounted Displays (HMDs) is to make it equally accessible and enjoyable in different types of indoor environments. However, little has been studied regarding whether such content can indeed provide similar, if not the same, levels of experience across different spaces. To gain more understanding of this issue, we examine the effect of room size and furniture on the player experience of Fragments, a space-adaptive, indoor AR crime-solving game created for the Microsoft HoloLens. The study compares factors of player experience in four types of spatial conditions: (1) Large Room - Fully Furnished; (2) Large Room - Scarcely Furnished; (3) Small Room - Fully Furnished; and (4) Small Room - Scarcely Furnished. Our results show that while large spaces facilitate a higher sense of presence and narrative engagement, fully furnished rooms raise perceived workload. Based on our findings, we propose design suggestions that can support narrative-driven, space-adaptive indoor HMD-based AR content in delivering optimal experiences for various types of rooms.
Citations: 12
Investigating Cyclical Stereoscopy Effects Over Visual Discomfort and Fatigue in Virtual Reality While Learning
Pub Date : 2019-10-01 DOI: 10.1109/ISMAR.2019.00031
Alexis D. Souchet, Stéphanie Philippe, Floriane Ober, Aurélien Léveque, Laure Leroy
Purpose: It is hypothesized that cyclical stereoscopy (displaying stereoscopy or 2D cyclically) has an effect on visual fatigue, learning curves, and quality of experience, and that these effects differ from those of regular stereoscopy. Materials and Methods: 59 participants played a serious game simulating a job interview on a Samsung Gear VR head-mounted display (HMD). Participants were randomly assigned to three groups: HMD with regular stereoscopy (S3D) and HMD with cyclical stereoscopy (cycles of 1 or 3 minutes). Participants played the game three times (the third try on a PC one month later). Visual discomfort, flow, and presence were measured with questionnaires. Visual fatigue was assessed pre- and post-exposure with optometric measures. Learning traces were obtained in-game. Results: Visual discomfort and flow are lower with cyclical S3D than with S3D, but presence is not. Cyclical stereoscopy every 1 minute is more tiring than stereoscopy, and cyclical stereoscopy every 3 minutes tends to be more tiring than stereoscopy. The cyclical stereoscopy groups improved during short-term learning. None of the statistical tests showed a difference between groups in either short-term or long-term learning curves. Conclusion: Cyclical stereoscopy had a positive impact on visual comfort and flow, but not on presence. It affects oculomotor functions in an HMD while learning with a serious game with low disparities and easy visual tasks. Other visual tasks should be tested, and eye tracking should be considered to assess visual fatigue during exposure. Results in ecological conditions seem to support models suggesting that cyclically activating stereopsis in an HMD is more tiring than maintaining it.
Citations: 9
VR Props: An End-to-End Pipeline for Transporting Real Objects Into Virtual and Augmented Environments
Pub Date : 2019-10-01 DOI: 10.1109/ISMAR.2019.00-22
Catherine Taylor, Chris Mullany, Robin McNicholas, D. Cosker
Improvements in both software and hardware, as well as an increase in consumer suitable equipment, have resulted in great advances in the fields of virtual and augmented reality. Typically, systems use controllers or hand gestures to interact with virtual objects. However, these motions are often unnatural and diminish the immersion of the experience. Moreover, these approaches offer limited tactile feedback. There does not currently exist a platform to bring an arbitrary physical object into the virtual world without additional peripherals or the use of expensive motion capture systems. Such a system could be used for immersive experiences within the entertainment industry as well as being applied to VR or AR training experiences, in the fields of health and engineering. We propose an end-to-end pipeline for creating an interactive virtual prop from rigid and non-rigid physical objects. This includes a novel method for tracking the deformations of rigid and non-rigid objects at interactive rates using a single RGBD camera. We scan our physical object and process the point cloud to produce a triangular mesh. A range of possible deformations can be obtained by using a finite element method simulation and these are reduced to a low dimensional basis using principal component analysis. Machine learning approaches, in particular neural networks, have become key tools in computer vision and have been used on a range of tasks. Moreover, there has been an increased trend in training networks on synthetic data. To this end, we use a convolutional neural network, trained on synthetic data, to track the movement and potential deformations of an object in unlabelled RGB images from a single RGBD camera. We demonstrate our results for several objects with different sizes and appearances.
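The dimensionality-reduction step described above, compressing FEM deformation samples to a low-dimensional linear basis via principal component analysis, can be sketched with a plain SVD (illustrative only, on synthetic rank-1 data; not the authors' implementation):

```python
import numpy as np

def deformation_basis(samples, k):
    """PCA: reduce deformation samples (n_samples x n_dofs) to a
    k-dimensional linear basis via SVD of the centered data."""
    mean = samples.mean(axis=0)
    _, _, vt = np.linalg.svd(samples - mean, full_matrices=False)
    return mean, vt[:k]  # rows of vt[:k] are the principal modes

def reconstruct(mean, basis, coeffs):
    return mean + coeffs @ basis

# synthetic: a one-parameter family of deformations of a 6-DOF mesh
rng = np.random.default_rng(0)
mode = rng.standard_normal(6)
samples = np.outer(rng.standard_normal(20), mode)

mean, basis = deformation_basis(samples, k=1)
coeffs = (samples - mean) @ basis.T
err = np.abs(reconstruct(mean, basis, coeffs) - samples).max()
print(err < 1e-9)  # rank-1 data is captured exactly by a single mode
```

At runtime, a tracker only has to estimate the few basis coefficients per frame instead of every mesh vertex, which is what makes interactive-rate deformation tracking feasible.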
Citations: 7
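The abstract above describes reducing a set of FEM-simulated deformations to a low-dimensional basis with principal component analysis. The following is a minimal NumPy sketch of that PCA step only — it is not the authors' implementation, and the toy data, function names, and flattened-displacement representation are illustrative assumptions.

```python
import numpy as np

def build_deformation_basis(displacements, k):
    """Reduce simulated mesh displacement fields (one flattened vector
    per sample) to a k-dimensional linear basis via PCA (SVD of the
    mean-centered sample matrix)."""
    X = np.asarray(displacements, dtype=float)   # shape (n_samples, 3*n_vertices)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]                          # rows of Vt[:k] span the deformation space

def project(disp, mean, basis):
    """Coefficients of one displacement field in the reduced basis."""
    return basis @ (disp - mean)

def reconstruct(coeffs, mean, basis):
    """Map low-dimensional coefficients back to a full displacement field."""
    return mean + coeffs @ basis

# Toy example: 50 samples that are random mixtures of 2 underlying modes
# over 30 vertices (90 scalar components), so a 2D basis recovers them exactly.
rng = np.random.default_rng(0)
modes = rng.normal(size=(2, 90))
samples = rng.normal(size=(50, 2)) @ modes
mean, basis = build_deformation_basis(samples, k=2)
coeffs = project(samples[0], mean, basis)
err = np.linalg.norm(reconstruct(coeffs, mean, basis) - samples[0])
```

Because the toy samples lie exactly in a two-dimensional subspace, a rank-2 basis reconstructs them to floating-point precision; on real FEM data the choice of k trades accuracy against the size of the search space the tracking network must cover.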
Measuring Cognitive Load and Insight: A Methodology Exemplified in a Virtual Reality Learning Context
Pub Date : 2019-10-01 DOI: 10.1109/ISMAR.2019.00033
J. Collins, H. Regenbrecht, T. Langlotz, Y. Can, Cem Ersoy, Russell Butson
Recent improvements of Virtual Reality (VR) technology have enabled researchers to investigate the benefits VR may provide for various domains such as health, entertainment, training, and education. A significant proportion of VR system evaluations rely on perception-based measures such as user pre- and post-questionnaires and interviews. While these self-reports provide valuable insights into users' perceptions of VR environments, recent developments in digital sensors and data collection techniques afford researchers access to measures of physiological response. This work explores the merits of physiological measures in the evaluation of emotional responses in virtual environments (ERVE). We include and place at the center of our ERVE methodology emotional response data by way of electrodermal activity and heart-rate detection, which are analyzed in conjunction with event-driven data to derive further measures. In this paper, we present our ERVE methodology together with a case study within the context of VR-based learning in which we derive measures of cognitive load and moments of insight. We discuss our methodology, and its potential for use in many other application and research domains to provide more in-depth and objective analyses of experiences within VR.
{"title":"Measuring Cognitive Load and Insight: A Methodology Exemplified in a Virtual Reality Learning Context","authors":"J. Collins, H. Regenbrecht, T. Langlotz, Y. Can, Cem Ersoy, Russell Butson","doi":"10.1109/ISMAR.2019.00033","DOIUrl":"https://doi.org/10.1109/ISMAR.2019.00033","url":null,"abstract":"Recent improvements of Virtual Reality (VR) technology have enabled researchers to investigate the benefits VR may provide for various domains such as health, entertainment, training, and education. A significant proportion of VR system evaluations rely on perception-based measures such as user pre-and post-questionnaires and interviews. While these self-reports provide valuable insights into users' perceptions of VR environments, recent developments in digital sensors and data collection techniques afford researchers access to measures of physiological response. This work explores the merits of physiological measures in the evaluation of emotional responses in virtual environments (ERVE). We include and place at the center of our ERVE methodology emotional response data by way of electrodermal activity and heart-rate detection which are analyzed in conjunction with event-driven data to derive further measures. In this paper, we present our ERVE methodology together with a case study within the context of VR-based learning in which we derive measures of cognitive load and moments of insight. 
We discuss our methodology, and its potential for use in many other application and research domains to provide more in-depth and objective analyses of experiences within VR.","PeriodicalId":348216,"journal":{"name":"2019 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)","volume":"245 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116150951","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 29
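The methodology above builds on electrodermal activity (EDA) as an arousal signal. A common preprocessing step — splitting the raw skin-conductance trace into a slow tonic level and a fast phasic residual, then counting phasic responses — can be sketched as follows. This is a generic illustration under assumed parameters (window length, threshold), not the paper's analysis pipeline.

```python
import numpy as np

def phasic_component(eda, fs, win_s=4.0):
    """Split a skin-conductance trace into a slow tonic level (moving
    average over win_s seconds) and the fast phasic residual that is
    commonly used as an arousal marker.  The signal is edge-padded so
    the moving average does not distort the borders."""
    eda = np.asarray(eda, dtype=float)
    win = max(1, int(win_s * fs))
    pad = win // 2
    padded = np.pad(eda, pad, mode="edge")
    tonic = np.convolve(padded, np.ones(win) / win, mode="valid")[: len(eda)]
    return eda - tonic

def response_count(phasic, threshold=0.05):
    """Crude skin-conductance-response count: number of upward
    crossings of the phasic signal through the threshold."""
    above = np.asarray(phasic) > threshold
    return int(np.count_nonzero(above[1:] & ~above[:-1]))

# Toy trace sampled at 10 Hz: flat baseline with two brief conductance rises.
fs = 10
eda = np.ones(600)
eda[100:110] += 0.5
eda[300:310] += 0.5
n_scr = response_count(phasic_component(eda, fs))
```

On the toy trace the two injected rises stand out from the moving-average baseline, so the crossing counter reports two responses; real EDA analysis typically adds filtering and per-subject calibration before event-locked statistics are derived.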
Augmented Environment Mapping for Appearance Editing of Glossy Surfaces
Pub Date : 2019-10-01 DOI: 10.1109/ISMAR.2019.00-26
Takumi Kaminokado, D. Iwai, Kosuke Sato
We propose a novel spatial augmented reality (SAR) framework to edit the appearance of physical glossy surfaces. The key idea is utilizing the specular reflection, which was a major distractor in conventional SAR systems. Namely, we spatially manipulate the appearance of an environmental surface, which is observed through the specular reflection. We use a stereoscopic display to present two appearances with disparity on the environmental surface, by which the depth of the specularly reflected visual information corresponds to the glossy surface. We refer to this method as augmented environment mapping (AEM). The paper describes its principle, followed by three different implementation approaches inspired by typical virtual and augmented reality approaches. We confirmed the feasibility of AEM through both quantitative and qualitative experiments using prototype systems.
{"title":"Augmented Environment Mapping for Appearance Editing of Glossy Surfaces","authors":"Takumi Kaminokado, D. Iwai, Kosuke Sato","doi":"10.1109/ISMAR.2019.00-26","DOIUrl":"https://doi.org/10.1109/ISMAR.2019.00-26","url":null,"abstract":"We propose a novel spatial augmented reality (SAR) framework to edit the appearance of physical glossy surfaces. The key idea is utilizing the specular reflection, which was a major distractor in conventional SAR systems. Namely, we spatially manipulate the appearance of an environmental surface, which is observed through the specular reflection. We use a stereoscopic display to present two appearances with disparity on the environmental surface, by which the depth of the specularly reflected visual information corresponds to the glossy surface. We refer to this method as augmented environment mapping (AEM). The paper describes its principle, followed by three different implementation approaches inspired by typical virtual and augmented reality approaches. We confirmed the feasibility of AEM through both quantitative and qualitative experiments using prototype systems.","PeriodicalId":348216,"journal":{"name":"2019 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116161796","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
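Environment mapping, which the AEM framework above builds on, determines which point of the surrounding environment a viewer sees in a specular surface by mirror-reflecting the viewing direction about the surface normal. A minimal sketch of that standard lookup computation (not the authors' stereoscopic system) is:

```python
import numpy as np

def reflect(view_dir, normal):
    """Mirror-reflection direction R = D - 2 (D.N) N: the ray used to
    look up the environment seen at a perfectly specular surface point
    with incident direction D and unit surface normal N."""
    d = np.asarray(view_dir, dtype=float)
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)          # guard against unnormalized input
    return d - 2.0 * np.dot(d, n) * n

# A ray hitting a horizontal mirror head-on bounces straight back.
r = reflect([0.0, 0.0, -1.0], [0.0, 0.0, 1.0])
```

Manipulating what this lookup returns — i.e., spatially editing the environmental surface the reflection samples — is the lever the AEM approach uses, with a stereoscopic display supplying the disparity that places the reflected content at the depth of the glossy surface.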