{"title":"一种修剪非结构化光图的感知方法","authors":"Y. Morvan, C. O'Sullivan","doi":"10.1145/1272582.1272594","DOIUrl":null,"url":null,"abstract":"We present a novel perceptual method to reduce the visual redundancy of unstructured lumigraphs, an image based representation designed for interactive rendering. We combine features of the unstructured lumigraph algorithm and image fidelity metrics to efficiently rank the perceptual impact of the removal of sub-regions of input views (sub-views). We use a greedy approach to estimate the order in which sub-views should be pruned to minimize perceptual degradation at each step. Renderings using varying numbers of sub-views can then be easily visualized with confidence that the retained sub-views are well chosen, thus facilitating the choice of how many to retain. The regions of the input views that are left are repacked into a texture atlas. Our method takes advantage of any scene geometry information available but only requires a very coarse approximation. We perform a user study to validate its behaviour, as well as investigate the impact of the choice of image fidelity metric. The three metrics considered fall in the physical, statistical and perceptual categories. The overall benefit of our method is the semi-automation of the view selection process, resulting in unstructured lumigraphs that are thriftier in texture memory use and faster to render. (Note to reviewers: a video is available at http://isg.cs.tcd.ie/ymorvan/paper37.avi. The figure occupying the ninth page is intended to appear on a color plate.)","PeriodicalId":121004,"journal":{"name":"Proceedings of the 4th symposium on Applied perception in graphics and visualization","volume":"559 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2007-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"A perceptual approach to trimming unstructured lumigraphs\",\"authors\":\"Y. Morvan, C. O'Sullivan\",\"doi\":\"10.1145/1272582.1272594\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We present a novel perceptual method to reduce the visual redundancy of unstructured lumigraphs, an image based representation designed for interactive rendering. We combine features of the unstructured lumigraph algorithm and image fidelity metrics to efficiently rank the perceptual impact of the removal of sub-regions of input views (sub-views). We use a greedy approach to estimate the order in which sub-views should be pruned to minimize perceptual degradation at each step. Renderings using varying numbers of sub-views can then be easily visualized with confidence that the retained sub-views are well chosen, thus facilitating the choice of how many to retain. The regions of the input views that are left are repacked into a texture atlas. Our method takes advantage of any scene geometry information available but only requires a very coarse approximation. We perform a user study to validate its behaviour, as well as investigate the impact of the choice of image fidelity metric. The three metrics considered fall in the physical, statistical and perceptual categories. The overall benefit of our method is the semi-automation of the view selection process, resulting in unstructured lumigraphs that are thriftier in texture memory use and faster to render. (Note to reviewers: a video is available at http://isg.cs.tcd.ie/ymorvan/paper37.avi. 
The figure occupying the ninth page is intended to appear on a color plate.)\",\"PeriodicalId\":121004,\"journal\":{\"name\":\"Proceedings of the 4th symposium on Applied perception in graphics and visualization\",\"volume\":\"559 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2007-07-25\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 4th symposium on Applied perception in graphics and visualization\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/1272582.1272594\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 4th symposium on Applied perception in graphics and visualization","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/1272582.1272594","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
A perceptual approach to trimming unstructured lumigraphs
We present a novel perceptual method to reduce the visual redundancy of unstructured lumigraphs, an image-based representation designed for interactive rendering. We combine features of the unstructured lumigraph algorithm and image fidelity metrics to efficiently rank the perceptual impact of removing sub-regions of input views (sub-views). We use a greedy approach to estimate the order in which sub-views should be pruned so as to minimize perceptual degradation at each step. Renderings using varying numbers of sub-views can then be easily visualized with confidence that the retained sub-views are well chosen, thus facilitating the choice of how many to retain. The regions of the input views that remain are repacked into a texture atlas. Our method takes advantage of any available scene geometry information but requires only a very coarse approximation. We perform a user study to validate its behaviour and to investigate the impact of the choice of image fidelity metric. The three metrics considered fall into the physical, statistical, and perceptual categories. The overall benefit of our method is the semi-automation of the view selection process, resulting in unstructured lumigraphs that are thriftier in texture memory use and faster to render. (Note to reviewers: a video is available at http://isg.cs.tcd.ie/ymorvan/paper37.avi. The figure occupying the ninth page is intended to appear on a color plate.)
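The greedy ranking step described in the abstract can be read as the loop sketched below: each remaining sub-view is tentatively removed, the scene is re-rendered, an image-fidelity metric scores the degradation against a reference rendering, and the sub-view whose removal hurts fidelity least is pruned first. This is a minimal, hypothetical sketch rather than the authors' implementation; all names here (greedy_prune_order, render, fidelity_error, the toy RMS metric) are placeholders introduced for illustration, and the actual unstructured-lumigraph renderer and perceptual metrics are not reproduced.

    # Hypothetical sketch of a greedy sub-view pruning order (not the paper's code).
    from typing import Callable, List, Sequence

    import numpy as np


    def greedy_prune_order(
        sub_views: Sequence[int],
        render: Callable[[Sequence[int]], np.ndarray],
        reference: np.ndarray,
        fidelity_error: Callable[[np.ndarray, np.ndarray], float],
    ) -> List[int]:
        """Return sub-view ids in the estimated order of pruning.

        render(ids) re-renders the scene from the sub-views listed in ids;
        fidelity_error(a, b) stands in for any physical, statistical, or
        perceptual metric (e.g. RMS error as the physical one).
        """
        remaining = list(sub_views)
        order: List[int] = []
        while remaining:
            # Tentatively drop each remaining sub-view and score the result.
            costs = {
                sv: fidelity_error(render([s for s in remaining if s != sv]), reference)
                for sv in remaining
            }
            # Prune the sub-view whose absence degrades fidelity the least.
            best = min(costs, key=costs.get)
            remaining.remove(best)
            order.append(best)
        return order


    # Toy usage with a trivial stand-in "renderer" and an RMS-error metric.
    if __name__ == "__main__":
        views = [0, 1, 2, 3]
        weights = np.array([0.1, 0.5, 0.3, 0.8])  # fake per-sub-view contribution

        def render(ids):
            img = np.zeros(4)
            for i in ids:
                img[i] = weights[i]
            return img

        def rms(a, b):
            return float(np.sqrt(np.mean((a - b) ** 2)))

        reference = render(views)
        print(greedy_prune_order(views, render, reference, rms))  # e.g. [0, 2, 1, 3]

In this toy run the sub-views contributing least to the reference image are pruned first; in the method proper, the same ranking lets a user inspect renderings at any retention level with confidence that the kept sub-views were well chosen.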