
Latest articles from ACM Transactions on Graphics

Walkin’ Robin: Walk on Stars with Robin Boundary Conditions
IF 7.8 · Region 1, Computer Science · Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2024-07-19 · DOI: 10.1145/3658153
Bailey Miller, Rohan Sawhney, Keenan Crane, Ioannis Gkioulekas
Numerous scientific and engineering applications require solutions to boundary value problems (BVPs) involving elliptic partial differential equations, such as the Laplace or Poisson equations, on geometrically intricate domains. We develop a Monte Carlo method for solving such BVPs with arbitrary first-order linear boundary conditions---Dirichlet, Neumann, and Robin. Our method directly generalizes the walk on stars (WoSt) algorithm, which previously tackled only the first two types of boundary conditions, with a few simple modifications. Unlike conventional numerical methods, WoSt does not need finite element meshing or global solves. Similar to Monte Carlo rendering, it instead computes pointwise solution estimates by simulating random walks along star-shaped regions inside the BVP domain, using efficient ray-intersection and distance queries. To ensure WoSt produces bounded-variance estimates in the presence of Robin boundary conditions, we show that it is sufficient to modify how WoSt selects the size of these star-shaped regions. Our generalized WoSt algorithm reduces estimation error by orders of magnitude relative to alternative grid-free methods such as the walk on boundary algorithm. We also develop bidirectional and boundary value caching strategies to further reduce estimation error. Our algorithm is trivial to parallelize, scales sublinearly with increasing geometric detail, and enables progressive and view-dependent evaluation.
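The random-walk idea is easiest to see in the pure Dirichlet special case that WoSt generalizes: the classic walk-on-spheres estimator repeatedly jumps to a uniform point on the largest empty sphere around the current point, and reads off the boundary data once it lands within ε of the boundary. A minimal sketch on the unit disk (function names are hypothetical, not the paper's implementation):

```python
import math
import random

def walk_on_spheres_disk(p, g, eps=1e-3, n_walks=2000, rng=None):
    """Estimate the solution of the Laplace equation on the unit disk with
    Dirichlet boundary data g, by averaging walk-on-spheres samples."""
    rng = rng or random.Random(0)
    total = 0.0
    for _ in range(n_walks):
        x, y = p
        while True:
            d = 1.0 - math.hypot(x, y)   # distance to the unit circle
            if d < eps:
                break
            theta = rng.uniform(0.0, 2.0 * math.pi)
            x += d * math.cos(theta)     # jump uniformly on the largest
            y += d * math.sin(theta)     # empty circle around (x, y)
        r = math.hypot(x, y)             # project to the nearest boundary point
        total += g(x / r, y / r)
    return total / n_walks
```

For example, with g(x, y) = x the harmonic extension is u(x, y) = x, so the estimate at (0.3, 0) should be close to 0.3. Robin conditions change how the walk interacts with the boundary, which is exactly where the paper's star-shaped-region sizing comes in.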
Citations: 1
Variational Feature Extraction in Scientific Visualization
IF 7.8 · Region 1, Computer Science · Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2024-07-19 · DOI: 10.1145/3658219
Nico Daßler, Tobias Günther
Across many scientific disciplines, the pursuit of even higher grid resolutions leads to a severe scalability problem in scientific computing. Feature extraction is a commonly chosen approach to reduce the amount of information from dense fields down to geometric primitives that further enable a quantitative analysis. Examples of common features are isolines, extremal lines, or vortex corelines. Due to the rising complexity of the observed phenomena, or in the event of discretization issues with the data, a straightforward application of textbook feature definitions is unfortunately insufficient. Thus, feature extraction from spatial data often requires substantial pre- or post-processing to either clean up the results or to include additional domain knowledge about the feature in question. Such a separate pre- or post-processing of features not only leads to suboptimal and incomparable solutions, it also results in many specialized feature extraction algorithms arising in the different application domains. In this paper, we establish a mathematical language that not only encompasses commonly used feature definitions, it also provides a set of regularizers that can be applied across the bounds of individual application domains. By using the language of variational calculus, we treat features as variational minimizers, which can be combined and regularized as needed. Our formulation not only encompasses existing feature definitions as a special case, it also opens the path to novel feature definitions. This work lays the foundations for many new research directions regarding formal definitions, data representations, and numerical extraction algorithms.
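As a concrete baseline for what a "textbook feature definition" looks like here, an isoline of a sampled scalar field can be located by detecting sign changes of f − iso along grid edges and linearly interpolating the crossing. A minimal sketch (names hypothetical, not from the paper):

```python
import numpy as np

def isoline_points(f, iso, xs, ys):
    """Return points where the isocontour f = iso crosses grid edges,
    found by linear interpolation along horizontal and vertical edges."""
    F = np.array([[f(x, y) for x in xs] for y in ys]) - iso
    pts = []
    for j in range(len(ys)):                 # horizontal edges
        for i in range(len(xs) - 1):
            a, b = F[j, i], F[j, i + 1]
            if a * b < 0:                    # sign change -> crossing
                t = a / (a - b)
                pts.append((xs[i] + t * (xs[i + 1] - xs[i]), ys[j]))
    for j in range(len(ys) - 1):             # vertical edges
        for i in range(len(xs)):
            a, b = F[j, i], F[j + 1, i]
            if a * b < 0:
                t = a / (a - b)
                pts.append((xs[i], ys[j] + t * (ys[j + 1] - ys[j])))
    return pts
```

The paper's point is that such definitions break down on noisy or complex data, which motivates recasting features as minimizers of a regularized variational energy instead.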
Citations: 0
LightFormer: Light-Oriented Global Neural Rendering in Dynamic Scene
IF 7.8 · Region 1, Computer Science · Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2024-07-19 · DOI: 10.1145/3658229
Haocheng Ren, Yuchi Huo, Yifan Peng, Hongtao Sheng, Weidong Xue, Hongxiang Huang, Jingzhen Lan, Rui Wang, Hujun Bao
The generation of global illumination in real time has been a long-standing challenge in the graphics community, particularly in dynamic scenes with complex illumination. Recent neural rendering techniques have shown great promise by utilizing neural networks to represent the illumination of scenes and then decoding the final radiance. However, incorporating object parameters into the representation may limit their effectiveness in handling fully dynamic scenes. This work presents a neural rendering approach, dubbed LightFormer, that can generate realistic global illumination for fully dynamic scenes, including dynamic lighting, materials, cameras, and animated objects, in real time. Inspired by classic many-lights methods, the proposed approach focuses on the neural representation of light sources in the scene rather than the entire scene, leading to overall better generalizability. The neural prediction is achieved by leveraging the virtual point lights and shading clues for each light. Specifically, two stages are explored. In the light encoding stage, each light generates a set of virtual point lights in the scene, which are then encoded into an implicit neural light representation, along with screen-space shading clues like visibility. In the light gathering stage, a pixel-light attention mechanism composites all light representations for each shading point. Given the geometry and material representation, in tandem with the composed light representations of all lights, a lightweight neural network predicts the final radiance. Experimental results demonstrate that the proposed LightFormer can yield reasonable and realistic global illumination in fully dynamic scenes with real-time performance.
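The light-gathering step can be pictured as standard softmax attention, with the query derived from the shading point and the keys/values being per-light feature vectors. A hedged sketch of that generic mechanism, not the paper's architecture (shapes and names are assumptions):

```python
import numpy as np

def pixel_light_attention(query, light_feats):
    """Composite per-light feature vectors for one shading point:
    scaled dot-product scores, softmax weights, weighted sum.
    query: (d,) shading-point feature; light_feats: (n_lights, d)."""
    scores = light_feats @ query / np.sqrt(query.size)  # one score per light
    w = np.exp(scores - scores.max())
    w /= w.sum()                                        # attention weights
    return w @ light_feats                              # composited feature
```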
Citations: 0
Automatic Digital Garment Initialization from Sewing Patterns
IF 7.8 · Region 1, Computer Science · Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2024-07-19 · DOI: 10.1145/3658128
Chen Liu, Weiwei Xu, Yin Yang, Huamin Wang
The rapid advancement of digital fashion and generative AI technology calls for an automated approach to transform digital sewing patterns into well-fitted garments on human avatars. When given a sewing pattern with its associated sewing relationships, the primary challenge is to establish an initial arrangement of sewing pieces that is free from folding and intersections. This setup enables a physics-based simulator to seamlessly stitch them into a digital garment, avoiding undesirable local minima. To achieve this, we harness AI classification, heuristics, and numerical optimization. This has led to the development of an innovative hybrid system that minimizes the need for user intervention in the initialization of garment pieces. The seeding process of our system involves the training of a classification network for selecting seed pieces, followed by solving an optimization problem to determine their positions and shapes. Subsequently, an iterative selection-arrangement procedure automates the selection of pattern pieces and employs a phased initialization approach to mitigate local minima associated with numerical optimization. Our experiments confirm the reliability, efficiency, and scalability of our system when handling intricate garments with multiple layers and numerous pieces. According to our findings, 68 percent of garments can be initialized with zero user intervention, while the remaining garments can be easily corrected through user operations.
Citations: 0
Proxy Tracing: Unbiased Reciprocal Estimation for Optimized Sampling in BDPT
IF 7.8 · Region 1, Computer Science · Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2024-07-19 · DOI: 10.1145/3658216
Fujia Su, Bingxuan Li, Qingyang Yin, Yanchen Zhang, Sheng Li
Robust light transport algorithms, particularly bidirectional path tracing (BDPT), face significant challenges when dealing with specular or highly glossy involved paths. BDPT constructs the full path by connecting sub-paths traced individually from the light source and camera. However, it remains difficult to sample by connecting vertices on specular and glossy surfaces with narrow-lobed BSDF, as it poses severe constraints on sampling in the feasible direction. To address this issue, we propose a novel approach, called proxy sampling , that enables efficient sub-path connection of these challenging paths. When a low-contribution specular/glossy connection occurs, we drop out the problematic neighboring vertex next to this specular/glossy vertex from the original path, then retrace an alternative sub-path as a proxy to complement this incomplete path. This newly constructed complete path ensures that the connection adheres to the constraint of the narrow lobe within the BSDF of the specular/glossy surface. Unbiased reciprocal estimation is the key to our method to obtain a probability density function (PDF) reciprocal to ensure unbiased rendering. We derive the reciprocal estimation method and provide an efficiency-optimized setting for efficient sampling and connection. Our method provides a robust tool for substituting problematic paths with favorable alternatives while ensuring unbiasedness. We validate this approach in the probabilistic connections BDPT for addressing specular-involved difficult paths. Experimental results have proved the effectiveness and efficiency of our approach, showcasing high-performance rendering capabilities across diverse settings.
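The principle behind unbiased reciprocal estimation can be illustrated with its simplest instance: for an event of unknown probability p, the number of independent trials until the first success is geometrically distributed with expectation exactly 1/p, so counting trials gives an unbiased estimate of the reciprocal. A toy sketch under that simplification (the paper's estimator for PDF reciprocals is more elaborate):

```python
import random

def reciprocal_estimate(sample_event, n=10000, rng=None):
    """Average n geometric trial counts to estimate 1/p unbiasedly,
    where sample_event(rng) returns True with unknown probability p."""
    rng = rng or random.Random(0)
    total = 0
    for _ in range(n):
        trials = 1
        while not sample_event(rng):   # repeat until the event occurs
            trials += 1
        total += trials                # E[trials] = 1/p exactly
    return total / n
```

For example, an event with p = 0.25 yields an estimate near 4.0. Note that naively inverting an unbiased estimate of p would be biased, since E[1/X] ≠ 1/E[X]; this is why a dedicated reciprocal estimator is needed.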
Citations: 0
Temporally Stable Metropolis Light Transport Denoising using Recurrent Transformer Blocks
IF 7.8 · Region 1, Computer Science · Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2024-07-19 · DOI: 10.1145/3658218
Chuhao Chen, Yuze He, Tzu-Mao Li
Metropolis Light Transport (MLT) is a global illumination algorithm that is well-known for rendering challenging scenes with intricate light paths. However, MLT methods tend to produce unpredictable correlation artifacts in images, which can introduce visual inconsistencies for animation rendering. This drawback also makes it challenging to denoise MLT renderings while maintaining temporal stability. We tackle this issue with modern learning-based methods and build a sequence denoiser combining the recurrent connections with the cutting-edge vision transformer architecture. We demonstrate that our sophisticated denoiser can consistently improve the quality and temporal stability of MLT renderings with difficult light paths. Our method is efficient and scalable for complex scene renderings that require high sample counts.
Citations: 0
Stochastic Computation of Barycentric Coordinates
IF 7.8 · Region 1, Computer Science · Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2024-07-19 · DOI: 10.1145/3658131
Fernando de Goes, Mathieu Desbrun
This paper presents a practical and general approach for computing barycentric coordinates through stochastic sampling. Our key insight is a reformulation of the kernel integral defining barycentric coordinates into a weighted least-squares minimization that enables Monte Carlo integration without sacrificing linear precision. Our method can thus compute barycentric coordinates directly at the points of interest, both inside and outside the cage, using just proximity queries to the cage such as closest points and ray intersections. As a result, we can evaluate barycentric coordinates for a large variety of cage representations (from quadrangulated surface meshes to parametric curves) seamlessly, bypassing any volumetric discretization or custom solves. To address the archetypal noise induced by sample-based estimates, we also introduce a denoising scheme tailored to barycentric coordinates. We demonstrate the efficiency and flexibility of our formulation by implementing a stochastic generation of harmonic coordinates, mean-value coordinates, and positive mean-value coordinates.
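For reference, the classic deterministic formula that one of the paper's stochastic targets generalizes is Floater's mean-value coordinates for a point x inside a polygon: w_i = (tan(α_{i−1}/2) + tan(α_i/2)) / |v_i − x|, normalized to sum to one, where α_i is the angle subtended at x by edge (v_i, v_{i+1}). A minimal sketch:

```python
import math

def mean_value_coordinates(poly, x):
    """Mean-value coordinates of point x w.r.t. a CCW polygon (list of
    (x, y) vertices), via the classic tangent-of-half-angle formula."""
    n = len(poly)
    d = [math.hypot(px - x[0], py - x[1]) for px, py in poly]
    t = []  # half-angle tangents for each edge, as seen from x
    for i in range(n):
        j = (i + 1) % n
        ax, ay = poly[i][0] - x[0], poly[i][1] - x[1]
        bx, by = poly[j][0] - x[0], poly[j][1] - x[1]
        angle = math.atan2(ax * by - ay * bx, ax * bx + ay * by)
        t.append(math.tan(angle / 2.0))
    w = [(t[i - 1] + t[i]) / d[i] for i in range(n)]  # t[-1] wraps around
    s = sum(w)
    return [wi / s for wi in w]
```

These coordinates reproduce linear functions exactly (sum of λ_i v_i recovers x), which is the "linear precision" the paper's Monte Carlo reformulation is careful not to sacrifice.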
Citations: 0
Categorical Codebook Matching for Embodied Character Controllers
IF 7.8 · Region 1, Computer Science · Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2024-07-19 · DOI: 10.1145/3658209
Sebastian Starke, Paul Starke, Nicky He, Taku Komura, Yuting Ye
Translating motions from a real user onto a virtual embodied avatar is a key challenge for character animation in the metaverse. In this work, we present a novel generative framework that enables mapping from a set of sparse sensor signals to a full body avatar motion in real-time while faithfully preserving the motion context of the user. In contrast to existing techniques that require training a motion prior and its mapping from control to motion separately, our framework is able to learn the motion manifold as well as how to sample from it at the same time in an end-to-end manner. To achieve that, we introduce a technique called codebook matching which matches the probability distribution between two categorical codebooks for the inputs and outputs for synthesizing the character motions. We demonstrate this technique can successfully handle ambiguity in motion generation and produce high quality character controllers from unstructured motion capture data. Our method is especially useful for interactive applications like virtual reality or video games where high accuracy and responsiveness are needed.
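A stripped-down picture of the categorical-codebook idea: hard vector quantization maps a feature to its nearest codeword, and softening that assignment into a softmax over distances yields the categorical distribution that can then be matched between an input and an output codebook. A hedged sketch of these two primitives, not the paper's model (names are hypothetical):

```python
import numpy as np

def hard_assign(codebook, v):
    """Hard vector quantization: index of the codeword nearest to v."""
    return int(np.argmin(np.sum((codebook - v) ** 2, axis=1)))

def soft_assign(codebook, v, temperature=1.0):
    """Categorical distribution over codewords: softmax of negative
    squared distances from feature v to each codeword."""
    d2 = np.sum((codebook - v) ** 2, axis=1)
    logits = -d2 / temperature
    p = np.exp(logits - logits.max())   # stabilized softmax
    return p / p.sum()
```

The soft assignment is differentiable in v, which is what lets such a system be trained end-to-end instead of learning the prior and the control-to-motion mapping separately.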
Citations: 0
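The core idea named in the abstract above is matching probability distributions over a shared set of discrete codebook entries. As a hedged sketch (not the paper's training objective; the logits, loss shape, and sizes are assumptions for illustration), the snippet below scores how well a control-conditioned categorical distribution matches a motion-conditioned one via cross-entropy:

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def codebook_matching_loss(control_logits, motion_logits):
    """Cross-entropy pushing the control-conditioned distribution
    toward the motion-conditioned one (treated as the target).
    Both distributions live over the same K codebook entries."""
    p_target = softmax(motion_logits)
    log_q = np.log(softmax(control_logits) + 1e-12)
    return -(p_target * log_q).sum(axis=-1).mean()

# Toy setup: K = 8 codebook entries, batch of 4.
rng = np.random.default_rng(0)
motion_logits = rng.standard_normal((4, 8))
loss_mismatch = codebook_matching_loss(rng.standard_normal((4, 8)), motion_logits)
loss_match = codebook_matching_loss(motion_logits, motion_logits)
```

Cross-entropy is minimized when the two distributions agree, so `loss_match` is strictly smaller than `loss_mismatch` for independent random logits; minimizing such a loss is one way to make a control signal select the same discrete codes as the observed motion.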
NeuralTO: Neural Reconstruction and View Synthesis of Translucent Objects
IF 7.8 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2024-07-19 DOI: 10.1145/3658186
Yuxiang Cai, Jiaxiong Qiu, Zhong Li, Bo-Ning Ren
Learning from multi-view images using neural implicit signed distance functions shows impressive performance on 3D reconstruction of opaque objects. However, existing methods struggle to reconstruct accurate geometry when applied to translucent objects due to the non-negligible bias in their rendering function. To address the inaccuracies in the existing model, we reparameterize the density function of the neural radiance field by incorporating an estimated constant extinction coefficient. This modification forms the basis of our framework for high-fidelity surface reconstruction and novel-view synthesis of translucent objects. Our framework contains two stages. In the reconstruction stage, we introduce a novel weight function to achieve accurate surface geometry reconstruction. Following the recovery of geometry, the second stage learns the distinct scattering properties of the participating media to enhance rendering. A comprehensive dataset, comprising both synthetic and real translucent objects, has been built for conducting extensive experiments. Experiments reveal that our method outperforms existing approaches in terms of reconstruction and novel-view synthesis.
{"title":"NeuralTO: Neural Reconstruction and View Synthesis of Translucent Objects","authors":"Yuxiang Cai, Jiaxiong Qiu, Zhong Li, Bo-Ning Ren","doi":"10.1145/3658186","DOIUrl":"https://doi.org/10.1145/3658186","url":null,"abstract":"Learning from multi-view images using neural implicit signed distance functions shows impressive performance on 3D Reconstruction of opaque objects. However, existing methods struggle to reconstruct accurate geometry when applied to translucent objects due to the non-negligible bias in their rendering function. To address the inaccuracies in the existing model, we have reparameterized the density function of the neural radiance field by incorporating an estimated constant extinction coefficient. This modification forms the basis of our innovative framework, which is geared towards highfidelity surface reconstruction and the novel-view synthesis of translucent objects. Our framework contains two stages. In the reconstruction stage, we introduce a novel weight function to achieve accurate surface geometry reconstruction. Following the recovery of geometry, the second phase involves learning the distinct scattering properties of the participating media to enhance rendering. A comprehensive dataset, comprising both synthetic and real translucent objects, has been built for conducting extensive experiments. 
Experiments reveal that our method outperforms existing approaches in terms of reconstruction and novel-view synthesis.","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":null,"pages":null},"PeriodicalIF":7.8,"publicationDate":"2024-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141823712","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
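The abstract above couples an SDF-based density with a constant extinction coefficient for volume rendering. A minimal NumPy sketch of that idea (not the paper's exact parameterization: the logistic mapping, sharpness `s`, and the single-ray setup are assumptions) computes standard volume-rendering weights along one ray, with density equal to an assumed constant extinction coefficient scaled by a logistic of the signed distance:

```python
import numpy as np

def render_weights(sdf_vals, deltas, sigma_t, s=50.0):
    """Volume-rendering weights for one ray: density is the constant
    extinction coefficient sigma_t times a logistic of the SDF, so it
    is ~sigma_t inside the surface and ~0 outside."""
    density = sigma_t / (1.0 + np.exp(s * sdf_vals))
    alpha = 1.0 - np.exp(-density * deltas)            # per-sample opacity
    # Transmittance: product of (1 - alpha) over all preceding samples.
    trans = np.concatenate([[1.0], np.cumprod(1.0 - alpha)[:-1]])
    return trans * alpha

# A ray that crosses the surface at t = 0.5 (sdf = 0.5 - t).
t = np.linspace(0.0, 1.0, 128)
sdf = 0.5 - t
w = render_weights(sdf, np.full_like(t, 1.0 / 128), sigma_t=80.0)
```

With this parameterization the rendering weights concentrate near the zero level set of the SDF, which is the behavior a surface-reconstruction loss wants from the density field.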
Cyclogenesis: Simulating Hurricanes and Tornadoes
IF 7.8 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2024-07-19 DOI: 10.1145/3658149
Jorge Alejandro Amador Herrera, Jonathan Klein, Daoming Liu, Wojciech Palubicki, S. Pirk, D. L. Michels
Cyclones are large-scale phenomena that result from complex heat and water transfer processes in the atmosphere, as well as from the interaction of multiple hydrometeors, i.e., water and ice particles. When cyclones make landfall, they are considered natural disasters and spawn dread and awe alike. We propose a physically-based approach to describe the 3D development of cyclones in a visually convincing and physically plausible manner. Our approach allows us to capture large-scale heat and water continuity, turbulent microphysical dynamics of hydrometeors, and mesoscale cyclonic processes within the planetary boundary layer. Modeling these processes enables us to simulate multiple hurricane and tornado phenomena. We evaluate our simulations quantitatively by comparing to real data from storm soundings and observations of hurricane landfall from climatology research. Additionally, qualitative comparisons to previous methods are performed to validate the different parts of our scheme. In summary, our model simulates cyclogenesis in a comprehensive way that allows us to interactively render animations of some of the most complex weather events.
{"title":"Cyclogenesis: Simulating Hurricanes and Tornadoes","authors":"Jorge Alejandro Amador Herrera, Jonathan Klein, Daoming Liu, Wojciech Palubicki, S. Pirk, D. L. Michels","doi":"10.1145/3658149","DOIUrl":"https://doi.org/10.1145/3658149","url":null,"abstract":"\u0000 Cyclones are large-scale phenomena that result from complex heat and water transfer processes in the atmosphere, as well as from the interaction of multiple\u0000 hydrometeors\u0000 , i.e., water and ice particles. When cyclones make landfall, they are considered natural disasters and spawn dread and awe alike. We propose a physically-based approach to describe the 3D development of cyclones in a visually convincing and physically plausible manner. Our approach allows us to capture large-scale heat and water continuity, turbulent microphysical dynamics of hydrometeors, and mesoscale cyclonic processes within the planetary boundary layer. Modeling these processes enables us to simulate multiple hurricane and tornado phenomena. We evaluate our simulations quantitatively by comparing to real data from storm soundings and observations of hurricane landfall from climatology research. Additionally, qualitative comparisons to previous methods are performed to validate the different parts of our scheme. 
In summary, our model simulates cyclogenesis in a comprehensive way that allows us to interactively render animations of some of the most complex weather events.\u0000","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":null,"pages":null},"PeriodicalIF":7.8,"publicationDate":"2024-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141824231","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
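The abstract above models mesoscale cyclonic dynamics; a much simpler, classical point of reference (the textbook Rankine vortex, not the paper's model — radius and peak speed below are illustrative) is a tangential wind field with solid-body rotation inside a core radius and 1/r decay outside:

```python
import numpy as np

def rankine_velocity(x, y, r_max=1.0, v_max=40.0):
    """Idealized cyclone wind field: tangential speed grows linearly
    with radius inside r_max, then decays as 1/r outside."""
    r = np.hypot(x, y)
    speed = np.where(r <= r_max,
                     v_max * r / r_max,
                     v_max * r_max / np.maximum(r, 1e-12))
    # Counter-clockwise tangential direction (-y, x) / r.
    with np.errstate(invalid="ignore", divide="ignore"):
        ux = np.where(r > 0, -y / np.maximum(r, 1e-12) * speed, 0.0)
        uy = np.where(r > 0, x / np.maximum(r, 1e-12) * speed, 0.0)
    return ux, uy

# Sample one point inside the core (r = 0.5) and one outside (r = 2).
ux, uy = rankine_velocity(np.array([0.5, 2.0]), np.array([0.0, 0.0]))
speeds = np.hypot(ux, uy)
```

Both sample points here happen to see the same speed (half the peak), one on the rising linear branch and one on the 1/r tail, which is the signature shape of this idealized profile.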
ACM Transactions on Graphics