Voxel-Based Immersive Mixed Reality: A Framework for Ad Hoc Immersive Storytelling

Presence. Pub Date: 2021-12-01. DOI: 10.1162/pres_a_00364
Stuart Duncan;Noel Park;Claudia Ott;Tobias Langlotz;Holger Regenbrecht
{"title":"Voxel-Based Immersive Mixed Reality: A Framework for Ad Hoc Immersive Storytelling","authors":"Stuart Duncan;Noel Park;Claudia Ott;Tobias Langlotz;Holger Regenbrecht","doi":"10.1162/pres_a_00364","DOIUrl":null,"url":null,"abstract":"Abstract Volumetric video recordings of storytellers, when experienced in immersive virtual reality, can elicit a sense of copresence between the user and the storyteller. Combining a volumetric storyteller with an appropriate virtual environment presents a compelling experience that can convey the story with a depth that is hard to achieve with traditional forms of media. Volumetric video production remains difficult, time-consuming, and expensive, often excluding cultural groups who would benefit most. The difficulty is partly due to ever-increasing levels of visual detail in computer graphics, and resulting hardware and software requirements. A high level of detail is not a requirement for convincing immersive experiences, and by reducing the level of detail, experiences can be produced and delivered using readily available, nonspecialized equipment. By reducing computational requirements in this way, storytelling scenes can be created ad hoc and experienced immediately—this is what we are addressing with our approach. We present our portable real-time volumetric capture system, and our framework for using it to produce immersive storytelling experiences. The real-time capability of the system, and the low data rates resulting from lower levels of visual detail, allow us to stream volumetric video in real time to enrich experiences with embodiment (seeing oneself) and with copresence (seeing others). Our system has supported collaborative research with Māori partners with the aim of reconnecting the dispersed Māori population in Aotearoa, New Zealand to their ancestral land through immersive storytelling. We present our system in the context of this collaborative work.","PeriodicalId":101038,"journal":{"name":"Presence","volume":"30 ","pages":"5-29"},"PeriodicalIF":0.0000,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Presence","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10159604/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

Volumetric video recordings of storytellers, when experienced in immersive virtual reality, can elicit a sense of copresence between the user and the storyteller. Combining a volumetric storyteller with an appropriate virtual environment presents a compelling experience that can convey the story with a depth that is hard to achieve with traditional forms of media. Volumetric video production remains difficult, time-consuming, and expensive, often excluding the cultural groups who would benefit most. The difficulty is partly due to ever-increasing levels of visual detail in computer graphics and the resulting hardware and software requirements. A high level of detail is not a requirement for convincing immersive experiences, and by reducing the level of detail, experiences can be produced and delivered using readily available, nonspecialized equipment. By reducing computational requirements in this way, storytelling scenes can be created ad hoc and experienced immediately; this is what our approach addresses. We present our portable real-time volumetric capture system and our framework for using it to produce immersive storytelling experiences. The real-time capability of the system, and the low data rates resulting from lower levels of visual detail, allow us to stream volumetric video in real time to enrich experiences with embodiment (seeing oneself) and with copresence (seeing others). Our system has supported collaborative research with Māori partners with the aim of reconnecting the dispersed Māori population in Aotearoa, New Zealand, to their ancestral land through immersive storytelling. We present our system in the context of this collaborative work.
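To make the data-rate reasoning in the abstract concrete, the sketch below (Python, using only NumPy) voxelizes a single captured point-cloud frame into a coarse grid and packs it into a compact byte buffer. It is a minimal illustration under assumed parameters, not the authors' pipeline: the 128-voxel grid, 2 m capture volume, and 6-byte-per-voxel packing are hypothetical values chosen only to show why a low level of detail keeps per-frame sizes small enough for real-time streaming on nonspecialized equipment.

import numpy as np

GRID = 128        # assumed voxels per axis for the capture volume (hypothetical)
VOLUME_M = 2.0    # assumed capture-volume edge length in metres (hypothetical)

def voxelize_frame(points_m: np.ndarray, colors_rgb: np.ndarray) -> bytes:
    """Quantize an (N, 3) point cloud in metres plus (N, 3) uint8 colours
    into one byte buffer of deduplicated occupied voxels (6 bytes each)."""
    # Map metric coordinates to integer voxel indices in [0, GRID).
    idx = np.clip((points_m / VOLUME_M * GRID).astype(np.int32), 0, GRID - 1)
    # Keep one entry per occupied voxel (first point wins for colour).
    flat = idx[:, 0] * GRID * GRID + idx[:, 1] * GRID + idx[:, 2]
    _, keep = np.unique(flat, return_index=True)
    occupied = idx[keep].astype(np.uint8)        # 3 bytes per voxel position
    colours = colors_rgb[keep].astype(np.uint8)  # 3 bytes per voxel colour
    # At ~50,000 occupied voxels a frame is ~300 kB, i.e. under 10 MB/s
    # at 30 fps before any further compression.
    return np.hstack([occupied, colours]).tobytes()

if __name__ == "__main__":
    # Stand-in for a captured frame: random points and colours in the volume.
    pts = np.random.rand(100_000, 3) * VOLUME_M
    col = (np.random.rand(100_000, 3) * 255).astype(np.uint8)
    frame = voxelize_frame(pts, col)
    print(f"{len(frame) / 1024:.1f} kB for one frame on a {GRID}^3 grid")

The design choice this sketch mirrors is trading spatial resolution for bandwidth: for surface-like content such as a captured storyteller, halving the grid resolution cuts the occupied-voxel count, and hence the stream size, roughly by a factor of four.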