
Proceedings of the 25th ACM Symposium on Virtual Reality Software and Technology: Latest Publications

Indian Virtual Reality Affective Database with Self-report Measures and EDA
Surya Soujanya Kodavalla, M. Goel, Priyanka Srivastava
The current work assesses physiological and psychological responses to 360° emotional videos selected from the Stanford virtual reality (VR) affective database [Li et al., 2017], presented using a VR head-mounted display (HMD). Participants were asked to report their valence and arousal levels after watching each video. Electro-dermal activity (EDA) was recorded while they watched the videos. The current pilot study shows no significant difference in skin-conductance response (SCR) between the high- and low-arousal experiences. Similar trends were observed for high and low valence. The self-report pilot data on valence and arousal show no statistically significant difference between the Stanford VR affective responses and the corresponding psychological responses of the Indian population. Despite this positive result of no significant cross-cultural difference in self-report, the small sample size limits the generalizability of the finding.
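The pilot's statistical comparison could look something like the sketch below: a paired test on per-participant SCR amplitudes between the high- and low-arousal conditions. The data here are synthetic and all names are illustrative; the paper does not specify its analysis pipeline.

```python
# Hypothetical sketch of comparing skin-conductance responses (SCR)
# between high- and low-arousal video conditions. Synthetic data;
# variable names are illustrative, not from the paper.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Peak SCR amplitude (microsiemens) per participant, per condition.
scr_high_arousal = rng.normal(loc=0.42, scale=0.15, size=12)
scr_low_arousal = rng.normal(loc=0.40, scale=0.15, size=12)

# Paired t-test: each participant watched both conditions.
t_stat, p_value = stats.ttest_rel(scr_high_arousal, scr_low_arousal)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# With a sample this small, a non-significant p-value (p > 0.05)
# mirrors the pilot finding but says little about the population effect.
```

A non-parametric alternative (e.g. a Wilcoxon signed-rank test) would be a reasonable substitute given the small sample.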
DOI: 10.1145/3359996.3364698 (published 2019-11-12)
Citations: 1
AHMED: Toolset for Ad-Hoc Mixed-reality Exhibition Design
Krzysztof Pietroszek, Carl Moore
We present “AHMED”, a mixed-reality toolset that allows visitors to experience mixed-reality museum or art exhibitions created ad hoc at locations such as event venues, private parties, or a living room. The system democratizes access to exhibitions for populations that cannot visit them in person for reasons of disability, time constraints, travel restrictions, or socio-economic status.
DOI: 10.1145/3359996.3364729 (published 2019-11-12)
Citations: 3
Visualizing Convolutional Neural Networks with Virtual Reality
Annika Wohlan, N. Hochgeschwender, Nadine Meissler
Software systems and components are increasingly based on machine learning methods such as Convolutional Neural Networks (CNNs). Thus, there is a growing need for general programmers and machine learning newcomers to understand the general functioning of these algorithms. However, as neural networks are complex in nature, novel means of presentation are required to enable rapid access to their functionality. To that end, this paper examines how CNNs can be visualized in Virtual Reality. A first exploratory study has confirmed that our visualization approach is both intuitive to use and conducive to learning.
DOI: 10.1145/3359996.3364817 (published 2019-11-12)
Citations: 3
SlingDrone: Mixed Reality System for Pointing and Interaction Using a Single Drone
Evgeny V. Tsykunov, R. Ibrahimov, Derek Vasquez, D. Tsetserukou
We propose SlingDrone, a novel Mixed Reality interaction paradigm that uses a micro-quadrotor as both a pointing controller and an interactive robot with slingshot-style motion. The drone attempts to hover at a given position while the user pulls it in the desired direction using a hand grip attached to a leash. A virtual trajectory is defined based on this displacement. To allow intuitive and simple control, we use virtual reality (VR) technology to trace the path of the drone based on the displacement input. The user receives force feedback propagated through the leash. Force feedback from SlingDrone, coupled with the visualized trajectory in VR, creates an intuitive and user-friendly pointing device. When the drone is released, it follows the trajectory that was shown in VR. An onboard payload (e.g. a magnetic gripper) can support various scenarios of real interaction with the surroundings, such as manipulation or sensing. Unlike the HTC Vive controller, SlingDrone does not require a handheld device, so it can be used as a standalone pointing technology in VR.
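One way to picture the slingshot mapping described above: the leash displacement from the hover point determines a launch velocity, from which a ballistic preview trajectory can be traced. The gain `k` and the simple ballistic model are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch (not the paper's implementation): mapping the
# leash displacement of a hovering drone to a "slingshot" trajectory.
import numpy as np

def sling_trajectory(hover_pos, pull_pos, k=4.0, g=9.81, dt=0.02, steps=60):
    """Predict a launch path from the pull displacement.

    hover_pos: target hover position of the drone (m)
    pull_pos:  position the user has pulled the drone to (m)
    k:         assumed gain converting displacement to launch speed (1/s)
    """
    hover_pos = np.asarray(hover_pos, dtype=float)
    pull_pos = np.asarray(pull_pos, dtype=float)
    # Launch velocity points from the pulled position back toward the
    # hover point, like a slingshot released under tension.
    v = k * (hover_pos - pull_pos)
    p = pull_pos.copy()
    path = []
    for _ in range(steps):
        path.append(p.copy())
        p = p + v * dt
        v = v + np.array([0.0, 0.0, -g]) * dt  # gravity acts on z
    return np.array(path)

# Drone hovering at 1.5 m, pulled 30 cm sideways and 30 cm down.
path = sling_trajectory([0.0, 0.0, 1.5], [0.3, 0.0, 1.2])
print(path.shape)
```

In a VR preview, `path` would be rendered as the visualized trajectory before the user releases the drone.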
DOI: 10.1145/3359996.3364271 (published 2019-11-12)
Citations: 6
Augmented Reality Visualisation Concepts to Support Intraoperative Distance Estimation
F. Heinrich, G. Schmidt, F. Jungmann, C. Hansen
Estimating distances and spatial relations between surgical instruments and surrounding anatomical structures is a challenging task for clinicians in image-guided surgery. Using augmented reality (AR), navigation aids can be displayed directly at the intervention site to support the assessment of distances and reduce the risk of damage to healthy tissue. To this end, four distance-encoding visualisation concepts were developed using a head-mounted optical see-through AR setup and evaluated in a comparison study. Results suggest a general advantage of the proposed methods over a blank visualisation providing no additional information. A Distance Sensor concept signalling the proximity of nearby structures resulted in the least time spent with the instrument within 5 mm of surrounding risk structures and the fewest collisions with them.
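The proximity signal behind the Distance Sensor concept can be sketched as a nearest-point query against the risk structures, with an alert below the 5 mm threshold mentioned above. The point-cloud representation and the function shape are assumptions for illustration.

```python
# Hedged sketch of a proximity alert: flag when the instrument tip
# comes within 5 mm of a risk structure, here modeled as a point cloud.
import numpy as np

def proximity_alert(tip, risk_points, threshold_mm=5.0):
    """Return (minimum distance in mm, alert flag)."""
    tip = np.asarray(tip, dtype=float)
    risk_points = np.asarray(risk_points, dtype=float)
    # Euclidean distance from the tip to every risk point, in meters.
    d_mm = np.min(np.linalg.norm(risk_points - tip, axis=1)) * 1000.0
    return d_mm, d_mm < threshold_mm

# Tip 3 mm away from the nearest of two risk-structure points.
risk = np.array([[0.0, 0.0, 0.0], [0.01, 0.0, 0.0]])
dist, alert = proximity_alert([0.003, 0.0, 0.0], risk)
print(f"{dist:.1f} mm, alert={alert}")  # 3.0 mm, alert=True
```

A real system would query a surface mesh (or a signed distance field) rather than raw points, but the thresholding logic is the same.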
DOI: 10.1145/3359996.3364818 (published 2019-11-12)
Citations: 4
ALS-SimVR: Advanced Life Support Virtual Reality Training Application
Nathan Moore, Soojeong Yoo, N. Ahmadpour, Russel Tommy, Martin Brown, P. Poronnik
Delivering ongoing training and support to Advanced Life Support (ALS) teams poses significant resourcing and logistical challenges. Reduced exposure to cardiac arrests and mandated re-accreditation pose further challenges for educators to overcome. This work presents the ALS-SimVR (Advanced Life Support Simulation in VR) application, intended as a supplementary training and refresher asset for ALS team leaders. Its purpose is to allow critical care clinicians to rehearse the role of ALS team leader at a time and location of their own choosing. The application was developed for the Oculus Go and ported to the Oculus Quest; desktop and server-based streaming releases are also supported.
DOI: 10.1145/3359996.3365051 (published 2019-11-12)
Citations: 8
Predicting the Torso Direction from HMD Movements for Walk-in-Place Navigation through Deep Learning
Juyoung Lee, Andréas Pastor, Jae-In Hwang, G. Kim
In this paper, we propose using deep learning to estimate and predict the torso direction from head movements alone. The prediction makes it possible to implement a walk-in-place navigation interface without additional sensing of the torso direction, improving convenience and usability. We created a small dataset and tested our idea by training an LSTM model, obtaining a 3-class prediction accuracy of about 90%, higher than that of other conventional machine learning techniques. While preliminary, the results show a possible inter-dependence between the viewing and torso directions, and with a richer dataset and more parameters, a more accurate level of prediction seems possible.
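The model family described above can be sketched as an LSTM run over a window of head-motion features with a linear readout into three torso-direction classes. The feature set, window length, and layer sizes below are assumptions, and the weights are random and untrained; this only shows the shape of the computation, not the authors' configuration.

```python
# NumPy sketch of an LSTM classifier: head-motion sequence in,
# one of three torso-direction classes out. All sizes are assumed.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step; gates stacked as [input, forget, cell, output]."""
    H = h.size
    z = W @ x + U @ h + b
    i, f = sigmoid(z[:H]), sigmoid(z[H:2*H])
    g, o = np.tanh(z[2*H:3*H]), sigmoid(z[3*H:])
    c = f * c + i * g
    return o * np.tanh(c), c

rng = np.random.default_rng(0)
F, H, C = 6, 16, 3                    # features/frame, hidden size, classes
W = rng.normal(0, 0.1, (4*H, F))      # input weights for all four gates
U = rng.normal(0, 0.1, (4*H, H))      # recurrent weights
b = np.zeros(4*H)
W_out = rng.normal(0, 0.1, (C, H))    # linear readout to class logits

# 30 frames of head-pose features (e.g. yaw/pitch/roll and their deltas).
seq = rng.normal(size=(30, F))
h, c = np.zeros(H), np.zeros(H)
for x in seq:
    h, c = lstm_step(x, h, c, W, U, b)

logits = W_out @ h
pred_class = int(np.argmax(logits))   # 0, 1, or 2: torso-direction bin
print(logits.shape, pred_class)
```

In practice one would train such a model with cross-entropy loss in a framework like PyTorch rather than hand-rolling the cell; the sketch just makes the sequence-to-class mapping concrete.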
DOI: 10.1145/3359996.3364709 (published 2019-11-12)
Citations: 6
Depth Perception in Projective Augmented Reality: An Evaluation of Advanced Visualization Techniques
F. Heinrich, Kai Bornemann, K. Lawonn, C. Hansen
Augmented reality (AR) is a promising tool to convey useful information at the place where it is needed. However, perceptual issues with augmented reality visualizations affect the estimation of distances and depth and can thus lead to critically wrong assumptions. These issues have been successfully investigated for video see-through modalities. Moreover, advanced visualization methods that encode depth information by displaying additional depth cues have been developed. In this work, state-of-the-art visualization concepts were adopted for a projective AR setup. We conducted a user study to assess the concepts' suitability for conveying depth information. Participants were asked to sort virtual cubes using the provided depth cues. The investigated visualization concepts consisted of conventional Phong shading, a virtual mirror, depth-encoding silhouettes, pseudo-chromadepth rendering, and an illustrative visualization using supporting-line depth cues. Besides the different concepts, we alternated between a monoscopic and a stereoscopic display mode to examine the effects of stereopsis. Consistent results across variables show a clear ranking of the examined concepts: the supporting-lines approach and pseudo-chromadepth rendering performed best. Stereopsis was shown to provide significant advantages for depth perception, while the visualization technique itself had little effect on the investigated measures in this condition. However, similar results were achieved using the supporting-lines and pseudo-chromadepth concepts in a monoscopic setup. Our study showed the suitability of advanced visualization concepts for rendering virtual content in projective AR. The specific depth estimation results contribute to the future design and development of applications for these systems.
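The pseudo-chromadepth rendering evaluated above exploits the chromostereoptic tendency to perceive red as nearer than blue, so hue doubles as a depth cue. A common simplification is a linear red-to-blue ramp over the depth range; the exact ramp used in the study is not specified, so the one below is an assumption.

```python
# Sketch of pseudo-chromadepth shading: nearer surfaces tinted red,
# farther ones blue. The linear red-to-blue ramp is an assumed
# simplification, not necessarily the study's exact transfer function.
import numpy as np

def pseudo_chromadepth(depth, d_near, d_far):
    """Map depth values to RGB colors; red = near, blue = far."""
    t = np.clip((np.asarray(depth, float) - d_near) / (d_far - d_near),
                0.0, 1.0)
    r = 1.0 - t                 # red fades out with depth
    g = np.zeros_like(t)
    b = t                       # blue fades in with depth
    return np.stack([r, g, b], axis=-1)

colors = pseudo_chromadepth([0.1, 0.5, 0.9], d_near=0.1, d_far=0.9)
print(colors)  # [[1,0,0], [0.5,0,0.5], [0,0,1]]
```

In a renderer this mapping would typically run per-fragment in a shader, modulating the surface color by the depth-derived hue.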
DOI: 10.1145/3359996.3364245 (published 2019-11-12)
Citations: 15
POL360: A Universal Mobile VR Motion Controller using Polarized Light
Hyouk Jang, Juheon Choi, Gunhee Kim
We introduce POL360, the first universal VR motion controller that leverages the principle of light polarization. POL360 enables a user who holds it and wears a VR headset to see their hand motion in a virtual world via accurate 6-DOF position tracking. Compared to other VR positioning techniques, POL360 has several advantages. (1) Mobile compatibility: neither additional computing resources such as a PC/console nor any complicated pre-installation is required in the environment. The only necessary device is a VR headset with an IR LED module as a light source, to which a thin-film linear polarizer is attached. (2) On-device computing: POL360's positioning computation is completed on the microprocessor in the device, so it does not require additional computing resources from the VR headset. (3) Competitive accuracy and update rate: despite its superior mobile compatibility and affordability, POL360 attains competitive accuracy and fast update rates, achieving sub-centimeter positioning accuracy and a tracking rate higher than 60 Hz. In this paper, we derive the mathematical formulation of 6-DOF positioning using light polarization for the first time and implement a POL360 prototype that can operate directly with any commercial VR headset system. To demonstrate POL360's performance and usability, we carry out a thorough quantitative evaluation and a user study, and develop three game demos as use cases.
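The physical principle such a controller builds on is Malus's law, I = I0 · cos²θ, relating the intensity transmitted through a linear polarizer to the angle θ between polarizer axes. The inversion below, recovering θ from a measured intensity, is only an illustration of this one relation, not the paper's full 6-DOF formulation.

```python
# Sketch of Malus's law and its inversion; illustrative only, not the
# paper's 6-DOF derivation.
import numpy as np

def malus_intensity(i0, theta_rad):
    """Transmitted intensity through a linear polarizer at angle theta."""
    return i0 * np.cos(theta_rad) ** 2

def estimate_angle(i, i0):
    """Invert Malus's law. Note the result is ambiguous up to sign and
    180-degree symmetry, a limitation any real system must resolve."""
    return np.arccos(np.sqrt(np.clip(i / i0, 0.0, 1.0)))

theta_true = np.deg2rad(30.0)
i = malus_intensity(1.0, theta_true)           # 0.75 of the source intensity
theta_est = estimate_angle(i, 1.0)
print(np.rad2deg(theta_est))  # ~30.0
```

A full 6-DOF solution would combine several such angle measurements (and geometric constraints from the LED light source) to disambiguate and localize the controller.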
DOI: 10.1145/3359996.3364262 (published 2019-11-12)
Citations: 2
Blind Navigation of Web Pages through Vibro-tactile Feedbacks
Waseem Safi, Fabrice Maurel, J. Routoure, P. Beust, Michèle Molina, Coralie Sann, Jessica Guilbert
We present the results of an empirical study examining the performance of sighted and blind individuals in discriminating the structure of web pages through vibro-tactile feedback.
DOI: 10.1145/3359996.3364758 (published 2019-11-12)
Citations: 1