Latest publications: 2015 IEEE Virtual Reality (VR)
Human-avatar interaction and recognition memory according to interaction types and methods
Pub Date : 2015-03-23 DOI: 10.1109/VR.2015.7223369
Mingyu Kim, Woncheol Jang, K. Kim
For several decades, researchers have studied human-avatar interactions using virtual reality (VR). However, the relationship between a human's recognition memory and interaction types/methods has not been sufficiently examined. In the current study, we designed a VR interaction paradigm with two types of human-avatar interaction, initiating and responding, and two interaction methods, head-gazing and hand-pointing. The results indicated significant differences in recognition memory between the initiating and responding interactions. However, we found no significant effects of interaction method in the current study. These results suggest that human-avatar interaction may show patterns similar to human-human interaction in recognition memory, and that methodological advances are also required.
Citations: 1
Scalability limits of large immersive high-resolution displays
Pub Date : 2015-03-23 DOI: 10.1109/VR.2015.7223318
C. Papadopoulos, Seyedkoosha Mirhosseini, Ievgeniia Gutenko, Kaloian Petkov, A. Kaufman, B. Laha
We present the results of a variable information space experiment, targeted at exploring the scalability limits of immersive high-resolution tiled-display walls under physical navigation. Our work is motivated by a lack of evidence supporting the extension of previously established benefits to substantially large, room-shaped displays. Using the Reality Deck, a gigapixel-resolution immersive display, as its apparatus, our study spans four display form factors, starting at 100 megapixels arranged planarly and going up to one gigapixel in a horizontally immersive setting. We focus on four core tasks: visual search, attribute search, comparisons, and pattern finding. We present a quantitative analysis of per-task user performance across the various display conditions. Our results demonstrate improvements in user performance as the display form factor grows to 600 megapixels. At the 600-megapixel to 1-gigapixel transition, we observe no tangible performance improvements, and the visual search task regressed substantially. Additionally, our analysis of subjective mental effort questionnaire responses indicates that subjective user effort grows as display size increases, validating previous studies on smaller displays. Our analysis of the participants' physical navigation during the study sessions shows an increase in user movement as the display grew. Finally, by visualizing the participants' movement within the display apparatus space, we discover two main approaches (termed "overview" and "detail") through which users chose to tackle the various data exploration tasks. The results of our study can inform the design of immersive high-resolution display systems and provide insight into how users navigate within these room-sized visualization spaces.
Citations: 6
EVE: Exercise in Virtual Environments
Pub Date : 2015-03-23 DOI: 10.1109/VR.2015.7223408
A. Solignac, Sebastien Kuntz
EVE (Exercise in Virtual Environments) is an operational VR system designed for space, polar, and submarine crews. The system allows crewmembers, living and working in artificial habitats, to explore immersive natural landscapes during their daily physical exercise and to experience presence in a variety of alternate environments. Using recent hardware and software, this innovative psychological countermeasure aims to reduce the adverse effects of confinement and monotony in long-duration missions while maintaining motivation for physical exercise. Initial testing with a proof-of-concept prototype was conducted near the south magnetic pole, as well as in transient microgravity.
Citations: 5
Using augmented reality to support situated analytics
Pub Date : 2015-03-23 DOI: 10.1109/VR.2015.7223352
Neven A. M. ElSayed, B. Thomas, Ross T. Smith, K. Marriott, J. Piantadosi
We draw from the domains of Visual Analytics and Augmented Reality to support a new form of in-situ interactive visual analysis. We present a Situated Analytics model, a novel interaction, and a visualization concept for reasoning support. Situated Analytics has four primary elements: situated information, abstract information, augmented reality interaction, and analytical interaction.
Citations: 33
Robust high-speed tracking against illumination changes for dynamic projection mapping
Pub Date : 2015-03-23 DOI: 10.1109/VR.2015.7223330
Tomohiro Sueishi, H. Oku, M. Ishikawa
Dynamic Projection Mapping, projection-based AR that keeps projections aligned on a moving object using a high-speed optical-axis controller built from rotational mirrors, involves a trade-off between the stability of high-speed tracking and high visibility for a variety of projection content. In this paper, we realize a system that provides robust high-speed tracking, without any markers on the objects, that is resilient to illumination changes, including the projected images themselves, by introducing a retroreflective background together with the optical-axis controller for Dynamic Projection Mapping. Low-intensity episcopic light is projected alongside the Projection Mapping content; the light reflected from the background is sufficient for high-speed cameras but is nearly invisible to observers. In addition, we introduce adaptive windows and peripherally weighted erosion to maintain accurate high-speed tracking. Under low-light conditions, we examined the visual performance of diffuse reflection and retroreflection from both the camera and observer viewpoints, and we evaluated stability with respect to illumination and to disturbance caused by non-target objects. Our proposed system realizes Dynamic Projection Mapping with partially well-lit content in a low-intensity light environment.
Citations: 47
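The segmentation idea behind the retroreflective background can be sketched briefly: the background returns the episcopic light to the camera and appears bright, so a target object shows up as a dark silhouette regardless of what is being projected onto it. The minimal sketch below assumes a toy 5x5 intensity frame and an illustrative threshold; it is not the paper's pipeline (which adds adaptive windows and weighted erosion).

```python
# Hypothetical silhouette-tracking sketch: against a bright retroreflective
# background, the target is simply the set of dark pixels; its centroid
# gives a tracking estimate. Threshold and frame values are illustrative.

def silhouette_centroid(frame, threshold=128):
    """Return the (row, col) centroid of pixels darker than threshold."""
    dark = [(r, c)
            for r, row in enumerate(frame)
            for c, v in enumerate(row)
            if v < threshold]
    if not dark:
        return None
    n = len(dark)
    return (sum(r for r, _ in dark) / n, sum(c for _, c in dark) / n)

# Bright retroreflective background (250) with a dark 2x2 object
# occupying rows 1-2, cols 2-3.
frame = [[250] * 5 for _ in range(5)]
for r in (1, 2):
    for c in (2, 3):
        frame[r][c] = 30

print(silhouette_centroid(frame))  # (1.5, 2.5)
```

Because the background stays bright under any projected content, the threshold does not need to adapt to the Projection Mapping imagery, which is what makes the tracking robust to illumination changes.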
Underwater integral photography
Pub Date : 2015-03-23 DOI: 10.1109/VR.2015.7223436
Nahomi Maki, K. Yanaka
A novel integral photography (IP) system in which the amount of pop-out is more than three times larger than usual is demonstrated in this study. If autostereoscopic display is introduced into virtual reality, IP is an ideal candidate because not only horizontal but also vertical parallax can be obtained. However, the amount of pop-out obtained by IP is generally far less than that obtained by a head-mounted display, because the ray density decreases as the viewer moves away from the fly's-eye lens. Although one solution is to extend the focal length of the fly's-eye lens, such a lens is difficult to manufacture. We address this problem by simply immersing the fly's-eye lens in water to extend the effective focal length.
Citations: 1
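A back-of-the-envelope check shows why immersion in water extends the focal length. For a thin lens, the lensmaker's equation gives 1/f = (n_lens/n_medium - 1)(1/R1 - 1/R2), so moving the surrounding medium from air to water scales f by (n_lens - 1)/(n_lens/n_water - 1). The refractive indices below are typical assumed values, not taken from the paper.

```python
# Thin-lens estimate of the focal-length extension from immersion in water.
# n_lens = 1.5 (common optical glass/plastic) and n_water = 1.333 are
# assumptions for illustration.

n_lens = 1.5
n_water = 1.333

scale = (n_lens - 1.0) / (n_lens / n_water - 1.0)
print(f"focal length in water is {scale:.2f}x its value in air")  # ~3.99x
```

The resulting factor of roughly 4 is consistent with the abstract's claim of pop-out "more than three times larger than usual."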
Social presence with virtual glass
Pub Date : 2015-03-23 DOI: 10.1109/VR.2015.7223399
H. Regenbrecht, Mansoor Alghamdi, S. Hoermann, T. Langlotz, Mike Goodwin, Colin Aldridge
Collaborative Virtual Environments (CVEs) with co-located or remote video communication functionality require a continuous experience of social presence. If, at any stage during the experience, the communication interrupts presence, then the CVE experience as a whole is affected: spatial presence is decoupled from social presence. We present a solution to this problem by introducing Virtual Glass, a virtualized version of Google Glass™. Virtual Glass is integrated into the CVE as a real-world metaphor for a communication device, one particularly suited for collaborative instructor-performer systems. In a study with 65 participants, we demonstrated that the Virtual Glass concept is effective, that it supports a high level of social presence, and that this social presence is rated higher than a standard picture-in-picture videoconferencing approach for certain tasks.
Citations: 4
Light field projection for lighting reproduction
Pub Date : 2015-03-23 DOI: 10.1109/VR.2015.7223335
Zhong Zhou, Tao Yu, Xiaofeng Qiu, Ruigang Yang, Qinping Zhao
We propose a novel approach to generating a 4D light field in the physical world for lighting reproduction. The light field is generated by projecting lighting images onto a lens array, which turns the projected images into a controlled anisotropic point-light-source array that can simulate the light field of a real scene. For acquisition, we capture an array of light-probe images from a real scene, from which an incident light field is generated. The lens array and the projectors are geometrically and photometrically calibrated, and an efficient resampling algorithm turns the incident light field into the images projected onto the lens array. The reproduced illumination, which allows per-ray lighting control, can produce realistic lighting results on real objects, avoiding the complex process of geometric and material modeling. We demonstrate the effectiveness of our approach with a prototype setup.
Citations: 6
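The resampling step can be illustrated in one dimension: to emit a desired ray (position, direction) from a lens array acting as controlled anisotropic point sources, pick the nearest lenslet and solve for the projector-pixel offset behind it. The pinhole-lenslet model and the pitch/focal-length values below are simplifying assumptions, not the paper's calibrated geometric/photometric model.

```python
# Hypothetical 1D ray-to-pixel mapping for a lens array. A pixel at
# lateral offset u in the focal plane behind a lenslet produces a ray
# with slope s = -u / f through the lenslet center, so u = -s * f.

def ray_to_pixel(x, s, pitch=1.0, f=2.0):
    """Map a ray (position x, slope s) to (lenslet index, pixel offset)."""
    i = round(x / pitch)  # nearest lenslet center
    u = -s * f            # pixel offset at the focal plane
    return i, u

i, u = ray_to_pixel(x=3.2, s=0.5)
print(i, u)  # 3 -1.0
```

The real system performs this mapping densely over the captured incident light field and clamps rays that fall outside a lenslet's aperture, which the sketch omits.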
AR-SSVEP for brain-machine interface: Estimating user's gaze in head-mounted display with USB camera
Pub Date : 2015-03-23 DOI: 10.1109/VR.2015.7223361
S. Horii, S. Nakauchi, M. Kitazaki
We aim to develop a brain-machine interface (BMI) system that estimates the user's gaze or attention on an object in order to pick it up in the real world. In Experiments 1 and 2, we measured steady-state visual evoked potentials (SSVEPs) using luminance- and/or contrast-modulated flickers of photographic scenes presented on a head-mounted display (HMD). We applied a multiclass SVM to estimate gaze locations from each 2-s time window and obtained significantly above-chance classification of gaze locations under leave-one-session-out cross-validation. In Experiment 3, we measured SSVEPs using luminance- and contrast-modulated flickers of real scenes captured online by a USB camera and presented on the HMD. We put AR markers on real objects and made their locations flicker on the HMD. We obtained the best gaze-classification performance with the highest luminance and contrast modulation (73-91% accuracy at a chance level of 33%) and significantly above-chance classification with low (25% of the highest) luminance and contrast modulation (42-50% accuracy). These results suggest that luminance-modulated flickers of real scenes captured through a USB camera can be applied to BMI using augmented reality technology.
Citations: 7
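The frequency-tagging principle underlying SSVEP gaze estimation can be sketched independently of the paper's SVM pipeline: each flickering target is tagged with a distinct frequency, and the attended target is the one whose frequency carries the most power in a short EEG window. The sketch below uses the Goertzel algorithm on a 2-s window (matching the paper's window length); the flicker frequencies and the synthetic signal are illustrative, not the paper's stimuli or classifier.

```python
import math

def goertzel_power(samples, fs, freq):
    """Signal power at one frequency via the Goertzel algorithm."""
    n = len(samples)
    k = round(n * freq / fs)          # nearest DFT bin
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

fs, window = 256, 2.0          # 2-s analysis window, as in the paper
flickers = [10.0, 12.0, 15.0]  # illustrative tag frequencies
t = [i / fs for i in range(int(fs * window))]
eeg = [math.sin(2 * math.pi * 12.0 * ti) for ti in t]  # toy SSVEP at 12 Hz

powers = {f: goertzel_power(eeg, fs, f) for f in flickers}
print(max(powers, key=powers.get))  # 12.0
```

A practical system would feed such per-frequency powers (or richer spectral features) into a classifier such as the multiclass SVM used in the paper, rather than taking a bare argmax.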
Using astigmatism in wide angle HMDs to improve rendering
Pub Date : 2015-03-23 DOI: 10.1109/VR.2015.7223396
Daniel Pohl, Timo Bolkart, Stefan Nickels, O. Grau
Lenses in modern consumer HMDs introduce distortions such as astigmatism: only the center area of the displayed content is perceived as sharp, while the image goes increasingly out of focus with distance from the center. We show with three new approaches that this undesired side effect can be exploited to save computation in blurry areas. For example, using sampling maps to lower the detail in areas that astigmatism blurs anyway increases performance by a factor of 2 to 3. Furthermore, we introduce a new calibration of user-specific viewing parameters that increases performance by about 20-75%.
Citations: 9
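The sampling-map idea can be sketched as a per-pixel density map: render at full density only where the HMD lens keeps the image sharp (the center) and reduce sample density toward the periphery, where astigmatism blurs the image anyway. The radial falloff profile, the sharp-region radius, and the 32x32 grid below are illustrative choices, not the paper's measured lens profile.

```python
import math

def sampling_map(size=32, sharp_radius=0.35):
    """Per-pixel sample density in [0.25, 1.0], decreasing radially."""
    m = []
    for y in range(size):
        row = []
        for x in range(size):
            # normalized distance from the image center
            r = math.hypot(x - (size - 1) / 2,
                           y - (size - 1) / 2) / (size / 2)
            row.append(1.0 if r <= sharp_radius else max(0.25, 1.0 - r))
        m.append(row)
    return m

m = sampling_map()
avg = sum(sum(row) for row in m) / (32 * 32)
print(f"estimated speedup over full-density rendering: {1 / avg:.2f}x")
```

Even this crude profile yields an average density well below 1, i.e. a speedup in the same ballpark as the factor of 2 to 3 the abstract reports for sampling maps.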