Image Blending and View Clustering for Multi-Viewer Immersive Projection Environments

J. Marbach
2009 IEEE Virtual Reality Conference, March 14, 2009
DOI: 10.1109/VR.2009.4810998
Citations: 14

Abstract

Investment into multi-wall Immersive Virtual Environments is often motivated by the potential for small groups of users to work collaboratively, yet most systems only allow for stereographic rendering from a single viewpoint. This paper discusses approaches for supporting copresent head-tracked users in an immersive projection environment, such as the CAVE™, without relying on additional projection and frame-multiplexing technology. The primary technique presented here is called Image Blending and consists of rendering independent views for each head-tracked user to an off-screen buffer and blending the images into a final composite view using view-vector incidence angles as weighting factors. Additionally, users whose view-vectors intersect a projection screen at similar locations are grouped into a view-cluster. Clustered user views are rendered from the average head position and orientation of all users in that cluster. The clustering approach minimizes users' exposure to undesirable display artifacts such as inverted stereo pairs and nonlinear object projections by distributing projection error over all tracked viewers. These techniques have the added advantage that they can be easily integrated into existing systems with minimally increased hardware and software requirements. We compare Image Blending and View Clustering with previously published techniques and discuss possible implementation optimizations and their tradeoffs.
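The two techniques described in the abstract can be sketched in code. The following is a minimal hypothetical NumPy illustration, not the paper's actual implementation: the function names, the cosine-based incidence weighting, and the greedy fixed-radius clustering are all assumptions made for the sake of the example.

```python
import numpy as np

def incidence_weights(view_dirs, screen_normal):
    """Blend weights from view-vector incidence angles (hypothetical scheme).

    A user looking straight at the screen (view direction opposite the
    outward screen normal) gets the highest weight; oblique views get less.
    """
    cos = np.clip([-np.dot(d, screen_normal) for d in view_dirs], 0.0, None)
    total = float(np.sum(cos))
    if total == 0.0:
        # Degenerate case: no view faces the screen; fall back to equal weights.
        return [1.0 / len(view_dirs)] * len(view_dirs)
    return [float(c) / total for c in cos]

def blend_images(images, weights):
    """Composite the per-user off-screen renders as a weighted sum."""
    out = np.zeros_like(images[0], dtype=np.float64)
    for img, w in zip(images, weights):
        out += w * img
    return out

def cluster_views(screen_hits, head_positions, radius):
    """Greedily group users whose view-vectors hit the screen near each other.

    Returns (member_indices, mean_head_position) per cluster; each cluster
    would then be rendered from the averaged pose, spreading projection
    error over its members.
    """
    clusters = []
    for i, hit in enumerate(screen_hits):
        for members in clusters:
            if np.linalg.norm(screen_hits[members[0]] - hit) < radius:
                members.append(i)
                break
        else:
            clusters.append([i])
    return [(m, np.mean([head_positions[j] for j in m], axis=0))
            for m in clusters]
```

A real system would also average head orientations (e.g. via quaternion interpolation) and perform the blend on the GPU rather than on CPU arrays; the sketch only shows the weighting and grouping logic.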