
Latest publications from 2015 IEEE Virtual Reality (VR)

Continuous automatic calibration for optical see-through displays
Pub Date : 2015-03-23 DOI: 10.1109/VR.2015.7223385
Kenneth R. Moser, Yuta Itoh, J. Swan
The recent advent of consumer-level optical see-through (OST) head-mounted displays (HMDs) has greatly broadened the accessibility of Augmented Reality (AR), not only to researchers but also to the general public. This larger user base heightens the need for robust automatic calibration mechanisms suited to nontechnical users. We are developing a fully automated calibration system for two stereo OST HMDs, a consumer-level model and a prototype, based on the recently introduced interaction-free display calibration (INDICA) method. Our current efforts also focus on developing an evaluation process to assess the performance of the system during use by non-expert subjects.
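Calibration of an OST HMD is commonly modeled as estimating a 3x4 pinhole projection from the tracked 3D world to display pixels; interaction-free methods such as INDICA update this projection from the tracked eye position rather than from manual point alignments. A minimal sketch of that underlying model (hypothetical parameter names, not the authors' implementation):

```python
def project(P, X):
    """Apply a 3x4 projection matrix P to a 3D point X (pinhole model)."""
    x, y, z = X
    u = P[0][0]*x + P[0][1]*y + P[0][2]*z + P[0][3]
    v = P[1][0]*x + P[1][1]*y + P[1][2]*z + P[1][3]
    w = P[2][0]*x + P[2][1]*y + P[2][2]*z + P[2][3]
    return (u / w, v / w)

def projection_from_eye(fx, fy, cx, cy, eye):
    """Build P = K [I | -eye] for an eye at `eye` in the display frame
    (identity rotation), with K = [[fx,0,cx],[0,fy,cy],[0,0,1]]."""
    ex, ey, ez = eye
    return [
        [fx, 0.0, cx, -(fx*ex + cx*ez)],
        [0.0, fy, cy, -(fy*ey + cy*ez)],
        [0.0, 0.0, 1.0, -ez],
    ]
```

With the eye at the display-frame origin, a point on the optical axis projects to the principal point (cx, cy); moving the eye shifts the projection, which is precisely the effect an automatic, eye-position-driven calibration must track continuously.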
Citations: 2
3D position measurement of planar photo detector using gradient patterns
Pub Date : 2015-03-23 DOI: 10.1109/VR.2015.7223370
Tatsuya Kodera, M. Sugimoto, Ross T. Smith, B. Thomas
We propose a three-dimensional position measurement method employing planar photo detectors to calibrate a Spatial Augmented Reality system of unknown geometry. In Spatial Augmented Reality, projectors overlay images onto objects in the physical environment, which requires aligning the images with the physical objects. Traditional camera-based 3D position tracking systems, such as multi-camera motion capture systems, detect the positions of optical markers in the two-dimensional image plane of each camera, so they require multiple cameras at known locations to obtain the 3D positions of the markers. We introduce a method that detects the 3D position of a planar photo detector by projecting gradient patterns. The main contribution of our method is that it aligns the projected images with the physical objects while simultaneously measuring the geometry of the objects for Spatial Augmented Reality applications.
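The abstract does not detail the decoding step, but the core idea of gradient-pattern position measurement can be sketched as follows: project a uniform white frame plus horizontal and vertical intensity ramps, and normalize the detector's readings against the white frame to recover its (u, v) coordinate in projector space. Function and parameter names below are illustrative, not taken from the paper:

```python
def decode_position(i_white, i_xgrad, i_ygrad, width, height):
    """Recover projector-space pixel coordinates from three detector
    readings: a full-white frame, a left-to-right intensity ramp (0..1),
    and a top-to-bottom intensity ramp (0..1)."""
    u = (i_xgrad / i_white) * (width - 1)   # fraction along the horizontal ramp
    v = (i_ygrad / i_white) * (height - 1)  # fraction along the vertical ramp
    return (u, v)
```

Normalizing by the white frame cancels the detector's unknown gain and the local projector brightness, which is what makes a single planar photo detector sufficient per measurement.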
Citations: 1
A multi-projector display system of arbitrary shape, size and resolution
Pub Date : 2015-03-23 DOI: 10.1145/2782782.2792500
A. Majumder, Duy-Quoc Lai, M. A. Tehrani
In this demo we will demonstrate the integration of general content delivery from a Windows desktop to a multi-projector display of arbitrary shape, size and resolution, automatically calibrated using our calibration methods. We have developed these sophisticated, completely automatic geometric and color registration techniques in our lab for deploying seamless multi-projector displays on popular non-planar surfaces (e.g. cylinders, domes, truncated domes). This work has received significant attention in both VR and visualization venues over the past 5 years, and this will be the first time such calibration is integrated with content delivery.
Citations: 8
Does virtual reality affect visual perception of egocentric distance?
Pub Date : 2015-03-23 DOI: 10.1109/VR.2015.7223403
Thomas Rousset, C. Bourdin, Cedric Goulon, Jocelyn Monnoyer, J. Vercher
Virtual reality (driving simulators) is increasingly used to study human behavior in mobility. It is thus crucial to ensure that the perception of space and motion is little or not at all affected by the virtual environment (VE). The aim of this study was to determine a metric of distance perception in VEs and whether this metric depends on interactive factors: stereoscopy and motion parallax. After a training session, participants were asked, while driving, to estimate the relative location (5 to 80 m) of a car on the same road. The overall results suggest that distance perception in this range does not depend on interactive factors. On average, as generally reported, subjects underestimated the distances whatever the vision conditions. However, the study revealed a large interpersonal variability: two profiles of participants were defined, those who perceived distances in VR quite accurately and those who underestimated distances as usually reported. Overall, this classification was correlated with the participants' level of performance during the training phase. Furthermore, learning performance is predictive of participants' behavior.
Citations: 4
Mobile user interfaces for efficient verification of holograms
Pub Date : 2015-03-23 DOI: 10.1109/VR.2015.7223333
Andreas Hartl, Jens Grubert, Christian Reinbacher, Clemens Arth, D. Schmalstieg
Paper documents such as passports, visas and banknotes are frequently checked by inspecting their security elements. View-dependent elements such as holograms are particularly interesting, but the expertise of the individuals performing the task varies greatly. Augmented Reality systems can provide all relevant information on standard mobile devices. However, hologram verification still takes a long time and places considerable load on the user. We aim to address this drawback by first presenting a workflow for recording and automatically matching hologram patches. We then present several user interfaces for hologram verification, aiming to noticeably reduce verification time. We evaluate the most promising interfaces in a user study with prototype applications running on off-the-shelf hardware. Our results indicate that there is a significant difference in capture time between interfaces, but that users do not prefer the fastest interface.
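The abstract does not specify the matching step; one standard way to score a captured hologram patch against a reference recording is normalized cross-correlation over pixel intensities, sketched below on flattened intensity lists (an assumed illustration, not the paper's algorithm):

```python
def ncc(a, b):
    """Normalized cross-correlation of two equal-length intensity lists.
    Returns 1.0 for identical appearance up to brightness/contrast,
    values near 0 for unrelated patches."""
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    num = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    den = (sum((x - mean_a) ** 2 for x in a)
           * sum((y - mean_b) ** 2 for y in b)) ** 0.5
    return num / den
```

Because NCC is invariant to global brightness and contrast changes, it tolerates the capture-to-capture lighting variation that makes view-dependent elements like holograms hard to compare directly.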
Citations: 4
MRI overlay system using optical see-through for marking assistance
Pub Date : 2015-03-23 DOI: 10.1109/VR.2015.7223384
Jun Morita, S. Shimamura, Motoko Kanegae, Yuji Uema, Maiko Takahashi, M. Inami, T. Hayashida, M. Sugimoto
In this paper we propose an augmented reality system that superimposes MRI data onto a patient model. We use a half-silvered mirror and a handheld device to superimpose the MRI onto the patient model. By tracking the coordinates of the patient model and the handheld device using optical markers, we are able to transform the images to the corresponding position. Voxel data from the MRI are generated so that the user can view the MRI from many different angles.
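Registering MRI data to a tracked patient model is, at its core, a composition of rigid transforms between the tracker, patient, and image coordinate frames. A minimal homogeneous-coordinates sketch of that composition (hypothetical helper names, not the authors' code):

```python
def mat_mul(A, B):
    """Compose two 4x4 homogeneous transforms (A applied after B)."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def apply_transform(T, p):
    """Apply a 4x4 homogeneous transform to a 3D point (w = 1)."""
    x, y, z = p
    return tuple(T[i][0]*x + T[i][1]*y + T[i][2]*z + T[i][3] for i in range(3))

def translation(tx, ty, tz):
    """Pure translation as a 4x4 homogeneous matrix."""
    return [[1, 0, 0, tx],
            [0, 1, 0, ty],
            [0, 0, 1, tz],
            [0, 0, 0, 1]]
```

In a marker-tracked setup, the overlay transform would be chained from the individually tracked pieces, e.g. display-from-tracker composed with tracker-from-patient composed with patient-from-MRI, with each factor updated per frame from the optical markers.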
Citations: 5
Shark punch: A virtual reality game for aquatic rehabilitation
Pub Date : 2015-03-23 DOI: 10.1109/VR.2015.7223397
J. Quarles
We present a novel underwater VR game - Shark Punch - in which the user must fend off a virtual Great White shark with real punches in a real underwater environment. This poster presents our underwater VR system and our iterative design process through field tests with a user with disabilities. We conclude with proposed usability, accessibility, and system design guidelines for future underwater VR rehabilitation games.
Citations: 22
Marioneta: Virtual puppeteer experience
Pub Date : 2015-03-23 DOI: 10.1109/VR.2015.7223448
H. Byun, Emily Chang, Maria Alejandra Montenegro, Alexander Moser, Christina Tarn, Shirley J. Saldamarco, R. Comley
Marioneta is an installation for the Children's Museum of Pittsburgh which uses the Microsoft Kinect v2 to allow guests to embody a collection of antique puppets in a virtual environment. The final installation in the museum is shown in Fig 1. The focus is on creating an experience wherein elements in the world react to the users' actions through these puppets [1]. The puppet models in the experience are based on a collection of puppets donated to the museum by Margo Lovelace. The original puppets are carefully exhibited in a large display case on a wall of the museum. Many of the puppets in the museum's collection are antiques, fragile or valuable, and not suited to hands-on play by the museum's young visitors. Marioneta uses technology to make museum puppets available for imaginative and interesting play [2]. The experience is composed of auto-rotating seasonal stages and season-related interactive objects with visual and auditory feedback. Users can throw a pumpkin in fall, pick up an ice ball in winter, play with cowbells in spring, and break a lantern filled with fireflies in summer. One of the stage scenes is shown in Fig 2. Marioneta is an updated version of Virpets, which began in 2001 and remained in the museum for over 10 years [3].
Citations: 1
Touch sensing on non-parametric rear-projection surfaces: A physical-virtual head for hands-on healthcare training
Pub Date : 2015-03-23 DOI: 10.1109/VR.2015.7223326
Jason Hochreiter, Salam Daher, A. Nagendran, Laura González, G. Welch
We demonstrate a generalizable method for unified multitouch detection and response on a human head-shaped surface with a rear-projection animated 3D face. The method helps achieve hands-on touch-sensitive training with dynamic physical-virtual patient behavior. The method, which is generalizable to other non-parametric rear-projection surfaces, requires one or more infrared (IR) cameras, one or more projectors, IR light sources, and a rear-projection surface. IR light reflected off of human fingers is captured by cameras with matched IR pass filters, allowing for the localization of multiple finger touch events. These events are tightly coupled with the rendering system to produce auditory and visual responses on the animated face displayed using the projector(s), resulting in a responsive, interactive experience. We illustrate the applicability of our physical prototype in a medical training scenario.
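The localization step described above, where fingers reflecting IR light appear as bright blobs to the filtered cameras, can be sketched as thresholding plus connected-component centroids. This is an assumed simplification of the actual pipeline, on a toy 2D intensity grid:

```python
def find_touches(image, threshold):
    """Return (x, y) centroids of 4-connected bright regions in a 2D
    intensity grid; each region corresponds to one finger touch."""
    h, w = len(image), len(image[0])
    seen = [[False] * w for _ in range(h)]
    touches = []
    for y in range(h):
        for x in range(w):
            if image[y][x] >= threshold and not seen[y][x]:
                # Flood-fill this bright region to collect its pixels.
                stack, pixels = [(y, x)], []
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    pixels.append((cy, cx))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = cy + dy, cx + dx
                        if 0 <= ny < h and 0 <= nx < w and \
                           image[ny][nx] >= threshold and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                mean_y = sum(p[0] for p in pixels) / len(pixels)
                mean_x = sum(p[1] for p in pixels) / len(pixels)
                touches.append((mean_x, mean_y))
    return touches
```

In the actual system these 2D camera-space events would still need to be mapped onto the non-parametric head surface, which is where the calibration between cameras, projectors, and surface geometry comes in.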
Citations: 16
Non-obscuring binocular eye tracking for wide field-of-view head-mounted-displays
Pub Date : 2015-03-23 DOI: 10.1109/VR.2015.7223443
Michael Stengel, S. Grogorick, M. Eisemann, E. Eisemann, M. Magnor
We present a complete hardware and software solution for integrating binocular eye tracking into current state-of-the-art lens-based head-mounted displays (HMDs) without affecting the user's wide field of view of the display. The system uses robust and efficient new algorithms for calibration and pupil tracking and allows real-time eye tracking and gaze estimation. Estimating the relative gaze direction of the user opens the door to a much wider spectrum of virtual reality applications and games when using HMDs. We show a 3D-printed prototype of a low-cost HMD with eye tracking that is simple to fabricate, and discuss a variety of VR applications utilizing gaze estimation.
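Gaze estimation on top of pupil tracking typically involves a calibration mapping from pupil coordinates in the eye camera to coordinates on the display. The simplest such mapping, a per-axis linear least-squares fit from a few calibration targets, can be sketched as follows (an illustrative sketch, not the system's actual algorithm):

```python
def fit_axis(pupil_vals, screen_vals):
    """Least-squares fit of screen = a * pupil + b for one axis,
    from paired calibration samples; returns (a, b)."""
    n = len(pupil_vals)
    mean_p = sum(pupil_vals) / n
    mean_s = sum(screen_vals) / n
    var = sum((p - mean_p) ** 2 for p in pupil_vals)
    cov = sum((p - mean_p) * (s - mean_s)
              for p, s in zip(pupil_vals, screen_vals))
    a = cov / var
    return a, mean_s - a * mean_p

def gaze(map_x, map_y, pupil):
    """Map a pupil center (px, py) to display coordinates using the
    fitted per-axis mappings."""
    ax, bx = map_x
    ay, by = map_y
    return (ax * pupil[0] + bx, ay * pupil[1] + by)
```

Real systems usually use higher-order polynomial or model-based mappings to handle lens distortion and eye rotation, but the calibration-then-map structure is the same.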
Citations: 3