
ACM SIGGRAPH 2015 Posters: Latest Publications

Augmented dynamic shape for live high quality rendering
Pub Date: 2015-07-31 DOI: 10.1145/2787626.2787643
Tony Tung
Consumer RGBD sensors are becoming ubiquitous and can be found in many devices such as laptops (e.g., Intel's RealSense) or tablets (e.g., Google Tango, Structure, etc.). They have become popular in graphics, vision, and HCI communities as they enable numerous applications such as 3D capture, gesture recognition, virtual fitting, etc. Nowadays, common sensors can deliver a stream of color images and depth maps in VGA resolution at 30 fps. While the color image is usually of sufficient quality for visualization, depth information (represented as a point cloud) is usually too sparse and noisy for readable rendering.
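The poster treats depth as a point cloud; converting a VGA depth map into that point cloud is a standard pinhole back-projection. Below is a minimal NumPy sketch, assuming made-up camera intrinsics (fx, fy, cx, cy are illustrative values for a VGA sensor, not taken from the poster):

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map (in meters) to an (N, 3) point cloud
    using the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]          # drop invalid zero-depth pixels

# Illustrative VGA intrinsics (not from the poster); real sensors report their own.
depth = np.full((480, 640), 2.0)             # a flat wall 2 m away
cloud = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
```

Sparsity and noise enter exactly here: missing depth readings leave holes in the cloud, and per-pixel depth noise scatters the back-projected points, which is why raw clouds render poorly.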
Citations: 0
Fracture in augmented reality
Pub Date: 2015-07-31 DOI: 10.1145/2787626.2792636
Nazim Haouchine, A. Bilger, Jérémie Dequidt, S. Cotin
The considerable advances in Computer Vision for hand and finger tracking have made several sorts of interaction possible in Augmented Reality (AR) systems, such as object grasping, object translation, or surface deformation [Chun and Höllerer 2013]. However, no method has yet considered interactions that involve topological changes of the augmented model (such as mesh cutting).
Citations: 0
Component segmentation of sketches used in 3D model retrieval
Pub Date: 2015-07-31 DOI: 10.1145/2787626.2792655
Yang Kang, Chi Xu, Shujin Lin, Songhua Xu, Xiaonan Luo, Qiang Chen
Sketching is a natural human practice. With the popularity of multi-touch tablets and styluses, sketching has become a more popular means of human-computer interaction. However, accurately recognizing sketches is rather challenging, especially when they are drawn by non-professionals. Therefore, automatic sketch understanding has attracted much research attention. To tackle the problem, we propose to segment sketch drawings before analyzing the semantic meanings of sketches for the purpose of developing a sketch-based 3D model retrieval system.
Citations: 1
Z-drawing: a flying agent system for computer-assisted drawing
Pub Date: 2015-07-31 DOI: 10.1145/2787626.2787652
Sang-won Leigh, Harshit Agrawal, P. Maes
We present a drone-based drawing system where a user's sketch on a desk is transformed across scale and time, and transferred onto a larger canvas at a distance in real-time. Various spatio-temporal transformations like scaling, mirroring, time stretching, recording and playing back over time, and simultaneously drawing at multiple locations allow for creating various artistic effects. The unrestricted motion of the drone promises scalability and a huge potential as an artistic medium.
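The spatio-temporal transformations listed above reduce to simple operations on a time-stamped stroke. A minimal sketch of that idea (the `transform_stroke` helper and its parameters are illustrative, not the authors' system):

```python
import numpy as np

def transform_stroke(points, scale=1.0, mirror_x=False, time_stretch=1.0):
    """Apply the kinds of spatio-temporal transforms the poster lists
    to a stroke given as an (N, 3) array of (x, y, t) samples.
    Names and parameters here are illustrative, not the authors' API."""
    pts = points.astype(float).copy()
    pts[:, :2] *= scale                 # spatial scaling onto a larger canvas
    if mirror_x:
        pts[:, 0] = -pts[:, 0]          # mirror about the vertical axis
    pts[:, 2] *= time_stretch           # slow down / speed up playback
    return pts

# A short diagonal stroke drawn over one second, replayed ten times larger,
# mirrored, and at half speed as drone waypoints.
stroke = np.array([[x, x, 0.1 * x] for x in range(11)], dtype=float)
waypoints = transform_stroke(stroke, scale=10.0, mirror_x=True, time_stretch=2.0)
```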
Citations: 1
Mobile haptic system design to evoke relaxation through paced breathing
Pub Date: 2015-07-31 DOI: 10.1145/2787626.2792627
A. Bumatay, J. Seo
Stress is a physical response that affects everyone to varying degrees. Throughout history, people have developed various practices to help cope with stress. Many of these practices focus on bringing awareness to the body and breath. Studies have shown that mindfulness meditation and paced breathing are effective tools for stress management [Brown, 2005].
Citations: 1
Dynamic realistic lip animation using a limited number of control points
Pub Date: 2015-07-31 DOI: 10.1145/2787626.2787628
Slim Ouni, Guillaume Gris
One main concern of audiovisual speech research is the intelligibility of audiovisual speech (i.e., a talking head). In fact, lip reading is crucial for challenged populations such as hard-of-hearing people. For audiovisual synthesis and animation, this suggests that one should pay careful attention to modeling the region of the face that participates actively during speech. Above all, a facial animation system needs extremely good representations of lip motion and deformation in order to achieve realism and effective communication.
Citations: 1
Real-time rendering of atmospheric glories
Pub Date: 2015-07-31 DOI: 10.1145/2787626.2787632
Ari Rapkin Blenkhorn
The glory is a colorful atmospheric phenomenon which resembles a small circular rainbow on the front surface of a cloudbank. It is most frequently seen from aircraft when the observer is directly between the sun and the clouds. Glories are also sometimes seen by skydivers looking down through thin cloud layers. They are always centered around the shadow of the observer's head (or camera).
Citations: 0
Synthesizing close combat using sequential Monte Carlo
Pub Date: 2015-07-31 DOI: 10.1145/2787626.2787638
I. Chiang, Po-Han Lin, Yuan-Hung Chang, M. Ouhyoung
Synthesizing competitive interactions between two avatars in a physics-based simulation remains challenging. Most previous works rely on reusing motion capture data. They also need an offline preprocessing step to either build motion graphs or perform motion analysis. On the other hand, an online motion synthesis algorithm [Hämäläinen et al. 2014] can produce physically plausible motions, including balance recovery and dodging projectiles, without prior data. They use a kd-tree sequential Monte Carlo sampler to optimize the joint angle trajectories. We extend their approach and propose a new objective function to create two-character animations in close-range combat. The principles of attack and defense are designed according to the fundamental theory of Chinese martial arts. Instead of following a series of fixed Kung Fu forms, our method gives 3D avatars the freedom to explore diverse movements and, through pruning, can finally evolve an optimal way of fighting.
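For context, the sequential Monte Carlo approach of [Hämäläinen et al. 2014] can be caricatured as a sample-score-resample-perturb loop over joint-angle trajectories. The toy sketch below illustrates only that loop; the objective, dimensions, and parameters are stand-ins, and the kd-tree machinery of the original sampler is omitted:

```python
import numpy as np

def smc_trajectory_optimize(objective, horizon, n_joints,
                            n_particles=64, n_iters=20, sigma=0.1,
                            rng=np.random.default_rng(0)):
    """Minimal sequential-Monte-Carlo-style optimizer over joint-angle
    trajectories. `objective` scores a (horizon, n_joints) trajectory;
    higher is better. A toy stand-in for the kd-tree SMC sampler of
    Hämäläinen et al. 2014, not their implementation."""
    particles = rng.normal(0.0, 1.0, (n_particles, horizon, n_joints))
    for _ in range(n_iters):
        scores = np.array([objective(p) for p in particles])
        weights = np.exp(scores - scores.max())     # softmax-style weighting
        weights /= weights.sum()
        idx = rng.choice(n_particles, size=n_particles, p=weights)  # resample
        particles = particles[idx] + rng.normal(0.0, sigma, particles.shape)
    return particles[np.argmax([objective(p) for p in particles])]

# Toy objective: stay near a target pose while penalizing jerky motion.
target = np.zeros((30, 12))
best = smc_trajectory_optimize(
    lambda traj: -np.sum((traj - target) ** 2)
                 - 0.1 * np.sum(np.diff(traj, axis=0) ** 2),
    horizon=30, n_joints=12)
```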
Citations: 0
Twech: a mobile platform to search and share visuo-tactile experiences
Pub Date: 2015-07-31 DOI: 10.1145/2787626.2792628
Nobuhisa Hanamitsu, Kanata Nakamura, M. Y. Saraiji, K. Minamizawa, S. Tachi
Twech is a mobile platform that enables users to share visuo-tactile experiences and to search other users' experiences by tactile data. Users can record and share visuo-tactile experiences with a visuo-tactile recording and display attachment for smartphones, which lets them instantly post an experience, much like a tweet, and re-experience shared data such as visuo-motor coupling. Furthermore, Twech's search engine matches uploaded tactile data against similar experiences, such as scratching material surfaces or interacting with animals, using a deep-learning model extended to recognize tactile materials. Twech thus provides a way to share and find haptic experiences, and users can re-experience visuo-tactile data uploaded to a cloud server.
Citations: 4
Reducing geometry-processing overhead for novel viewpoint creation
Pub Date: 2015-07-31 DOI: 10.1145/2787626.2792599
Francisco Inácio, J. P. Springer
Maintaining a high steady frame rate is an important aspect of interactive real-time graphics. It is mainly influenced by the number of objects and the number of lights to be processed for a 3d scene. The upper-bound effort for rendering a scene is then defined by the number of objects times the number of lights, i.e., O(N_O · N_L). Deferred shading reduces this upper bound to the number of objects plus the number of lights, i.e., O(N_O + N_L), by separating the rendering process into two phases: geometry processing and lighting evaluation. The geometry processing rasterizes all objects but only retains visible fragments in a G-Buffer for the current viewpoint. The lighting evaluation then only needs to process those surviving fragments to compute the final image (for the current viewpoint). Unfortunately, this approach not only trades memory for computational effort but also requires the re-creation of the G-Buffer every time the viewpoint changes. Additionally, transparent objects cannot be encoded into a G-Buffer and must be processed separately. Post-rendering 3d warping [Mark et al. 1997] is one particular technique that allows the creation of images for new viewpoints from G-Buffer information. However, this only works with sufficient fragment information. Objects not encoded in the G-Buffer, because they were not visible from the original viewpoint, will create visual artifacts at discontinuities between objects. We propose fragment-history volumes (FHV) to create novel viewpoints from a discrete representation of the entire scene using current graphics hardware, and we present an initial performance comparison.
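To make the complexity contrast concrete, here is a self-contained toy sketch in Python (the `Fragment` record and Lambert `shade` term are illustrative stand-ins, not the authors' code). Forward shading shades every fragment of every object against every light; deferred shading first resolves visibility into a G-Buffer, then lights each surviving fragment:

```python
from collections import namedtuple

Fragment = namedtuple("Fragment", "xy depth normal albedo")

def shade(frag, light):
    """Toy Lambert term standing in for a real BRDF evaluation."""
    n_dot_l = max(0.0, sum(n * l for n, l in zip(frag.normal, light["dir"])))
    return tuple(a * light["intensity"] * n_dot_l for a in frag.albedo)

def forward_shading(fragments, lights):
    """O(N_O * N_L): every fragment of every object is shaded against
    every light, even fragments the depth test later discards."""
    fb = {}
    for frag in fragments:
        color = tuple(sum(c) for c in zip(*(shade(frag, l) for l in lights)))
        if frag.xy not in fb or frag.depth < fb[frag.xy][0]:
            fb[frag.xy] = (frag.depth, color)
    return fb

def deferred_shading(fragments, lights):
    """O(N_O + N_L): phase 1 keeps only the nearest fragment per pixel
    (the G-Buffer); phase 2 lights each surviving fragment once."""
    gbuffer = {}
    for frag in fragments:                     # phase 1: geometry processing
        if frag.xy not in gbuffer or frag.depth < gbuffer[frag.xy].depth:
            gbuffer[frag.xy] = frag
    fb = {}
    for xy, frag in gbuffer.items():           # phase 2: lighting evaluation
        color = tuple(sum(c) for c in zip(*(shade(frag, l) for l in lights)))
        fb[xy] = (frag.depth, color)
    return fb

# Two fragments on the same pixel, one hidden; a single light.
frags = [Fragment((0, 0), 1.0, (0, 0, 1), (1.0, 0.2, 0.2)),
         Fragment((0, 0), 2.0, (0, 0, 1), (0.2, 1.0, 0.2))]
light = {"dir": (0, 0, 1), "intensity": 0.8}
assert forward_shading(frags, [light]) == deferred_shading(frags, [light])
```

The sketch also shows why the G-Buffer is viewpoint-bound: `gbuffer` stores only the fragments that won the depth test for this camera, so any new viewpoint either re-runs phase 1 or, as in post-rendering 3d warping, must make do with those surviving fragments.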
Citations: 0