
Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology: Latest Publications

MR video fusion: interactive 3D modeling and stitching on wide-baseline videos
Yi Zhou, Mingjun Cao, Jingdi You, Ming Meng, Yuehua Wang, Zhong Zhou
A major challenge facing camera networks today is how to effectively organize and visualize videos in the presence of complicated network connections and an overwhelming, ever-increasing amount of data. Previous works focus on 2D stitching or dynamic projection onto 3D models, such as panoramas and the Augmented Virtual Environment (AVE), and have not provided an ideal solution. We present a novel method of multiple-video fusion in a 3D environment, which produces highly comprehensive imagery and yields a spatio-temporally consistent scene. Users initially interact with a newly designed background model, named the video model, to register and stitch the videos' background frames offline. The method then fuses the offline results to render the videos in real time. We demonstrate our system on three real scenes, each of which contains dozens of wide-baseline videos. The experimental results show that our 3D modeling interface, developed with the presented model and method, can efficiently assist users in seamlessly integrating videos, with lower operating complexity and a more accurate 3D environment than commercial off-the-shelf software. The proposed stitching method is much more robust to differences in position, orientation, and attributes among videos than state-of-the-art methods. More importantly, this study sheds light on how to use 3D techniques to solve 2D problems in realistic settings, and we validate its feasibility.
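The abstract above builds on dynamic projection of video onto 3D models (AVE-style rendering). As generic background only, and not the authors' implementation, the sketch below shows the standard pinhole projection such systems rely on to map a point of a background model into a video frame; the intrinsics `K` and pose `(R, t)` are assumed inputs from an offline registration step like the one described.

```python
# Minimal sketch (not the paper's pipeline): standard pinhole projection used when
# draping a video frame onto a 3D background model, as in AVE-style systems.
import numpy as np

def project_point(X, K, R, t):
    """Project a 3D world point X into pixel coordinates of one video camera.

    X : (3,) world point on the background model
    K : (3, 3) camera intrinsics
    R : (3, 3) world-to-camera rotation
    t : (3,) world-to-camera translation
    Returns (u, v) pixel coordinates, or None if the point lies behind the camera.
    """
    Xc = R @ X + t                # transform into the camera frame
    if Xc[2] <= 0:                # behind the image plane -> not textured by this video
        return None
    uvw = K @ Xc                  # perspective projection
    return uvw[0] / uvw[2], uvw[1] / uvw[2]

# Example: a camera looking down +Z with a 1000 px focal length.
K = np.array([[1000.0, 0.0, 960.0],
              [0.0, 1000.0, 540.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
print(project_point(np.array([0.5, 0.2, 4.0]), K, R, t))  # -> (1085.0, 590.0)
```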
Citations: 6
Perceived weight of a rod under augmented and diminished reality visual effects
Satoshi Hashiguchi, Shohei Mori, Miho Tanaka, F. Shibata, Asako Kimura
Augmented reality (AR) and diminished reality (DR) can be used in combination in practice. However, to the best of our knowledge, there is no research on the validation of cross-modal effects in AR and DR. Our research interest here is to investigate how such continuous visual changes between AR and DR would change our weight sensation of an object. In this paper, we built a system that can continuously extend and reduce the visual extent of real objects using AR and DR renderings, to confirm that users can perceive things as heavier or lighter than they actually are, in the same manner as the SWI. Different from existing research in which either AR or DR visual effects were used, we validated one of the cross-modal effects in the context of continuous AR and DR visuo-haptic interaction. Regarding weight sensation, we found that this cross-modal effect can be approximated by a continuous linear relationship between the weight and the length of real objects. Our experimental results suggest that weight sensation is closely related to the position of the center of gravity (CoG), and that the perceived CoG positions lie within the object's extent under the examined conditions.
Citations: 6
Comparison of the usability of a car infotainment system in a mixed reality environment and in a real car
Anna Bolder, S. Grünvogel, E. Angelescu
Instead of installing new control modes for infotainment systems in a real vehicle for testing, it is an attractive idea (saving time and cost) to evaluate and develop these systems in a mixed reality (MR) environment. The central question of this study is whether the usability evaluation of a car infotainment system within an MR environment provides the same results as the evaluation of that system within a real car. For this purpose, a prototypical car infotainment system was built and integrated into both a real car and an MR environment. The MR environment represents the interior of the car and uses finger tracking together with real haptic control elements from the car's center console. Two test groups were assigned to the two different test environments. The study shows that usability is rated similarly in both environments, although readability and representation within the infotainment system are problematic.
Citations: 14
Indoor AR navigation using tilesets
Tarush Rustagi, Kyungjin Yoo
This paper presents the methodology and findings of creating an augmented reality navigation app that uses tilesets for navigation. It illustrates how the app was created: vector data is uploaded to MapBox, that data is accessed in Unity through the MapBox API and map editor, and the camera input is then overlaid with the navigation path layer. The application was tested by creating multiple arbitrary navigation scenarios and checking them against various factors. The main finding of this research is that this navigation solution works better than GPS-based indoor navigation.
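As background for the tileset-based approach described above, and not code from the paper, the sketch below shows the standard Web Mercator "slippy map" tiling math that tileset services such as MapBox build on: it converts a latitude/longitude waypoint into the indices of the tile containing it at a given zoom level, which is the first step in locating a navigation path on vector tiles.

```python
# Standard Web Mercator tile indexing (generic background, not the paper's code).
import math

def lat_lon_to_tile(lat_deg, lon_deg, zoom):
    """Return (x_tile, y_tile) indices of the Web Mercator tile containing the point."""
    lat = math.radians(lat_deg)
    n = 2 ** zoom                                   # number of tiles per axis at this zoom
    x_tile = int((lon_deg + 180.0) / 360.0 * n)
    y_tile = int((1.0 - math.log(math.tan(lat) + 1.0 / math.cos(lat)) / math.pi) / 2.0 * n)
    return x_tile, y_tile

# Example: (0 deg, 0 deg) sits on the corner of all four zoom-1 tiles;
# integer truncation places it in tile (1, 1).
print(lat_lon_to_tile(0.0, 0.0, 1))  # -> (1, 1)
```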
Citations: 5
Design-led 3D visualization of nanomedicines in virtual reality
Andrew R. Lilja, Campbell W. Strong, Benjamin J. Bailey, K. Thurecht, Z. H. Houston, N. Fletcher, J. McGhee
Nanomedicines are a promising addition to the arsenal of new cancer therapies. During development, scientists must precisely track their distribution in the body, a task that can be severely limited by traditional 2D displays. With its stereoscopic capability and real-time interactivity, virtual reality (VR) provides an encouraging platform for accurately visualizing dynamic 3D volumetric data. In this research, we develop a prototype application to track nanomedicines in VR. This platform has the potential to enhance data assessment, comprehension, and communication in preclinical research, which may ultimately influence the paradigm of future clinical protocols.
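To make "dynamic 3D volumetric data" concrete, here is a minimal, hypothetical illustration (not the authors' renderer) of one common way to flatten such a volume into a 2D view, a maximum-intensity projection; the synthetic array stands in for imaging data, and real-time VR viewers typically ray-march the volume instead.

```python
# Maximum-intensity projection of a synthetic stand-in volume (illustration only).
import numpy as np

rng = np.random.default_rng(0)
volume = rng.random((64, 64, 64))          # placeholder volume; shape (z, y, x)
volume[30:34, 20:40, 20:40] += 2.0         # a bright synthetic "tracer" region

mip = volume.max(axis=0)                   # collapse along z: brightest voxel wins
print(mip.shape, mip.max())                # (64, 64) and a value above 2.0 in the region
```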
Citations: 2
Realistic simulation of progressive vision diseases in virtual reality
Simon Stock, Christina Erler, W. Stork
People with a visual impairment perceive their surroundings differently than those with healthy vision. It can be difficult to understand how those affected perceive their surroundings, even for the affected themselves. We introduce a virtual reality (VR) platform capable of simulating the effects of common visual impairments. With this system, we are able to create a realistic VR representation of actual visual fields obtained from a medical perimeter.
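One plausible mapping (assumed here, not confirmed by the paper) from a measured visual field to a simulation is to attenuate each pixel of the rendered frame by the local sensitivity reported by the perimeter, as in this minimal sketch:

```python
# Per-pixel attenuation by a visual-field sensitivity map (illustrative sketch).
import numpy as np

def apply_field_loss(frame, sensitivity):
    """frame: (H, W, 3) float image in [0, 1]; sensitivity: (H, W) map in [0, 1],
    where 1.0 means full sensitivity and 0.0 means a blind region."""
    return frame * sensitivity[..., None]   # darken each pixel by local sensitivity

frame = np.ones((4, 4, 3))                          # a white test image
sensitivity = np.ones((4, 4))
sensitivity[:, 2:] = 0.2                            # hypothetical loss in the right hemifield
print(apply_field_loss(frame, sensitivity)[0, 3])   # -> [0.2 0.2 0.2]
```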
Citations: 11
Training in IVR: investigating the effect of instructor design on social presence and performance of the VR user
Ceenu George, M. Spitzer, H. Hussmann
We investigate instructor representations (IRs) in the context of virtual training with head-mounted displays (HMDs). Despite the recently increased industry and research focus on virtual training in immersive virtual reality (IVR), the effect of IRs on the performer (the VR user) has received little attention. We present the results of a study (N=33) evaluating the effect of three IRs - webcam, avatar, and sound-only - on the social presence (SP) and performance (PE) of the VR user during task completion. Our results show that the instructor representation has an effect on SP and that, contrary to our assumption based on prior work, it affects performance negatively.
Citations: 17
Effects of low video latency between visual information and physical sensation in immersive environments
Takuya Kadowaki, M. Maruyama, T. Hayakawa, Naoki Matsuzawa, Kenichiro Iwasaki, M. Ishikawa
This study aims to investigate the impact on the user's performance when there is latency between the user's physical input to the system and the visual feedback. We developed a video latency control system that films the user's hand movements and controls the latency when displaying the video (with a standard deviation of 0.38 ms). The minimum latency of the system is 4.3 ms, which enables us to investigate performance in previously unexplored low-latency ranges. Using this system, we conducted experiments in which 20 subjects performed a pointing task based on Fitts' law to clarify the effect of video latency, particularly at low latencies. Experimental results showed that when the latency exceeds 24.3 ms, user performance begins to decrease. This result can be applied to determine a standard limit for video latency in interactive video devices.
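For readers unfamiliar with Fitts' law scoring, the snippet below shows the commonly used Shannon formulation of the index of difficulty and the derived throughput; this is generic background, not the paper's analysis code, and the numbers in the example are illustrative only.

```python
# Shannon formulation of Fitts' law (generic background for pointing-task studies).
import math

def index_of_difficulty(distance, width):
    """ID in bits for a target of the given width at the given distance."""
    return math.log2(distance / width + 1.0)

def throughput(distance, width, movement_time_s):
    """Throughput in bits per second for one completed pointing movement."""
    return index_of_difficulty(distance, width) / movement_time_s

# Example: a 0.30 m reach to a 0.05 m target completed in 0.8 s.
print(index_of_difficulty(0.30, 0.05))   # ~2.81 bits
print(throughput(0.30, 0.05, 0.8))       # ~3.51 bits/s
```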
Citations: 7
Gigapixel virtual reality employing live superzoom cameras
Olli Koskinen, I. Rakkolainen, R. Raisamo
We present a live gigapixel virtual reality system employing a 360° camera, a superzoom camera with a pan-tilt robotic head, and a head-mounted display (HMD). The system is capable of showing on-demand gigapixel-level subregions of 360° videos. Similar systems could be used to provide a live feed for foveated-rendering HMDs.
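A superzoom camera on a pan-tilt head has to be steered toward whatever subregion the HMD user is looking at. The sketch below shows the basic geometry under assumed conventions (+z forward, +y up, pan positive to the right); it is an illustration, not the authors' control code.

```python
# Converting a head-gaze direction vector into pan/tilt angles (assumed conventions).
import math

def gaze_to_pan_tilt(x, y, z):
    """Return (pan_deg, tilt_deg) that points the superzoom camera along (x, y, z)."""
    pan = math.degrees(math.atan2(x, z))                   # yaw around the up axis
    tilt = math.degrees(math.atan2(y, math.hypot(x, z)))   # elevation above the horizon
    return pan, tilt

print(gaze_to_pan_tilt(0.0, 0.0, 1.0))   # straight ahead -> (0.0, 0.0)
print(gaze_to_pan_tilt(1.0, 0.0, 1.0))   # 45 degrees to the right -> (45.0, 0.0)
```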
Citations: 4
Phantazuma
Kazuma Chiba, Yunosuke Nakayama, Tomoko Hashida
In this research, we aim to create stage machinery that enables audience members of a stage performance to watch different content depending on their position. To achieve this goal, we combined a vision control film, whose transparency changes depending on the viewing angle, with the classic Pepper's ghost effect. The system thus enables audience members in the same theater to watch different scenes (live actors only, ghosts only, or both) depending on their position. This paper describes our research motivation, the design and implementation of the proposed system, and the operation results.
{"title":"Phantazuma","authors":"Kazuma Chiba, Yunosuke Nakayama, Tomoko Hashida","doi":"10.1145/3281505.3281596","DOIUrl":"https://doi.org/10.1145/3281505.3281596","url":null,"abstract":"In this research, we aim to create a stage machinery enabling audience members of a stage performance to watch different contents dependent to their position. To achieve this goal, we combined a vision control film whose transparency changes depending on the viewing angle with the classic Pepper's ghost effect. Therefore, the system enables the audience members in the same theater to watch different scenes (live actors only, ghosts only, or both) depending on their position. This paper will describe our research motivation, design and implementation of the proposed system, and the operation results.","PeriodicalId":138249,"journal":{"name":"Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115711961","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0