A perceptual approach to trimming unstructured lumigraphs

Y. Morvan, C. O'Sullivan
{"title":"A perceptual approach to trimming unstructured lumigraphs","authors":"Y. Morvan, C. O'Sullivan","doi":"10.1145/1272582.1272594","DOIUrl":null,"url":null,"abstract":"We present a novel perceptual method to reduce the visual redundancy of unstructured lumigraphs, an image based representation designed for interactive rendering. We combine features of the unstructured lumigraph algorithm and image fidelity metrics to efficiently rank the perceptual impact of the removal of sub-regions of input views (sub-views). We use a greedy approach to estimate the order in which sub-views should be pruned to minimize perceptual degradation at each step. Renderings using varying numbers of sub-views can then be easily visualized with confidence that the retained sub-views are well chosen, thus facilitating the choice of how many to retain. The regions of the input views that are left are repacked into a texture atlas. Our method takes advantage of any scene geometry information available but only requires a very coarse approximation. We perform a user study to validate its behaviour, as well as investigate the impact of the choice of image fidelity metric. The three metrics considered fall in the physical, statistical and perceptual categories. The overall benefit of our method is the semi-automation of the view selection process, resulting in unstructured lumigraphs that are thriftier in texture memory use and faster to render. (Note to reviewers: a video is available at http://isg.cs.tcd.ie/ymorvan/paper37.avi. The figure occupying the ninth page is intended to appear on a color plate.)","PeriodicalId":121004,"journal":{"name":"Proceedings of the 4th symposium on Applied perception in graphics and visualization","volume":"559 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2007-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 4th symposium on Applied perception in graphics and visualization","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/1272582.1272594","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2

Abstract

We present a novel perceptual method to reduce the visual redundancy of unstructured lumigraphs, an image based representation designed for interactive rendering. We combine features of the unstructured lumigraph algorithm and image fidelity metrics to efficiently rank the perceptual impact of the removal of sub-regions of input views (sub-views). We use a greedy approach to estimate the order in which sub-views should be pruned to minimize perceptual degradation at each step. Renderings using varying numbers of sub-views can then be easily visualized with confidence that the retained sub-views are well chosen, thus facilitating the choice of how many to retain. The regions of the input views that are left are repacked into a texture atlas. Our method takes advantage of any scene geometry information available but only requires a very coarse approximation. We perform a user study to validate its behaviour, as well as investigate the impact of the choice of image fidelity metric. The three metrics considered fall in the physical, statistical and perceptual categories. The overall benefit of our method is the semi-automation of the view selection process, resulting in unstructured lumigraphs that are thriftier in texture memory use and faster to render. (Note to reviewers: a video is available at http://isg.cs.tcd.ie/ymorvan/paper37.avi. The figure occupying the ninth page is intended to appear on a color plate.)
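The abstract describes a greedy procedure that ranks sub-views by the perceptual cost of removing them, prunes the least damaging one at each step, and repacks the survivors into a texture atlas. The sketch below illustrates only the greedy ranking idea under stated assumptions; `SubView` indices, `render_without`, `fidelity`, and the reference-image set are hypothetical placeholders, not the authors' API, and none of the paper's three fidelity metrics (physical, statistical, perceptual) is reproduced here.

```python
# Minimal sketch (assumed interface) of a greedy sub-view pruning order.
# At each step, the sub-view whose removal degrades the rendered reference
# images the least (per an image fidelity metric, higher = better) is pruned.

from typing import Callable, List, Sequence

import numpy as np


def greedy_prune_order(
    sub_views: List[int],
    reference_images: Sequence[np.ndarray],
    render_without: Callable[[List[int], int], np.ndarray],
    fidelity: Callable[[np.ndarray, np.ndarray], float],
) -> List[int]:
    """Return sub-view indices in the order they should be pruned."""
    remaining = list(sub_views)
    prune_order: List[int] = []
    while len(remaining) > 1:
        best_idx, best_score = None, -np.inf
        for candidate in remaining:
            trial = [s for s in remaining if s != candidate]
            # Average fidelity of renderings that omit this candidate.
            score = float(np.mean(
                [fidelity(render_without(trial, i), ref)
                 for i, ref in enumerate(reference_images)]
            ))
            if score > best_score:
                best_idx, best_score = candidate, score
        prune_order.append(best_idx)
        remaining.remove(best_idx)
    prune_order.extend(remaining)
    return prune_order
```

Note that this naive loop re-renders every reference view for every candidate at every step, which is quadratic in the number of sub-views; the paper's stated contribution is to make this ranking efficient by combining features of the unstructured lumigraph algorithm with the fidelity metrics, which the sketch does not attempt to capture.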