Bailey Miller, Rohan Sawhney, Keenan Crane, Ioannis Gkioulekas
Numerous scientific and engineering applications require solutions to boundary value problems (BVPs) involving elliptic partial differential equations, such as the Laplace or Poisson equations, on geometrically intricate domains. We develop a Monte Carlo method for solving such BVPs with arbitrary first-order linear boundary conditions---Dirichlet, Neumann, and Robin. With a few simple modifications, our method directly generalizes the walk on stars (WoSt) algorithm, which previously tackled only the first two types of boundary conditions. Unlike conventional numerical methods, WoSt does not need finite element meshing or global solves. Similar to Monte Carlo rendering, it instead computes pointwise solution estimates by simulating random walks along star-shaped regions inside the BVP domain, using efficient ray-intersection and distance queries. To ensure WoSt produces bounded-variance estimates in the presence of Robin boundary conditions, we show that it is sufficient to modify how WoSt selects the size of these star-shaped regions. Our generalized WoSt algorithm reduces estimation error by orders of magnitude relative to alternative grid-free methods such as the walk on boundary algorithm. We also develop bidirectional and boundary value caching strategies to further reduce estimation error. Our algorithm is trivial to parallelize, scales sublinearly with increasing geometric detail, and enables progressive and view-dependent evaluation.
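To make this style of estimator concrete, below is a minimal sketch of the classical walk-on-spheres recursion that WoSt generalizes, restricted to a Laplace equation with Dirichlet boundary conditions. The geometric-query callables and the unit-disk test case are illustrative assumptions of ours; the star-shaped regions and the Robin-specific region sizing of the actual paper are not reproduced here.

```python
import numpy as np

def walk_on_spheres(x, distance_to_boundary, boundary_value,
                    eps=1e-3, max_steps=1000, rng=np.random.default_rng()):
    """One-sample walk-on-spheres estimate of the harmonic function u at x,
    where u solves the Laplace equation with Dirichlet data boundary_value.

    distance_to_boundary(x) -> distance from x to the domain boundary
    boundary_value(x)       -> Dirichlet value at the closest boundary point
    (Both callables are assumptions standing in for the geometric queries the
    paper performs with acceleration structures; this is the Dirichlet-only
    special case that WoSt generalizes to star-shaped regions and Robin data.)
    """
    for _ in range(max_steps):
        r = distance_to_boundary(x)
        if r < eps:                       # close enough: read off boundary data
            return boundary_value(x)
        # jump to a uniformly random point on the largest empty sphere around x
        d = rng.normal(size=x.shape)
        x = x + r * d / np.linalg.norm(d)
    return boundary_value(x)              # walk truncated; introduces small bias

# usage sketch: unit disk, u = x on the boundary (harmonic extension is u(x, y) = x)
if __name__ == "__main__":
    dist = lambda p: 1.0 - np.linalg.norm(p)
    g = lambda p: p[0] / max(np.linalg.norm(p), 1e-12)
    estimates = [walk_on_spheres(np.array([0.3, 0.2]), dist, g) for _ in range(2000)]
    print(np.mean(estimates))             # should be close to 0.3
```

The per-step jump to the largest empty sphere is the part WoSt replaces with a star-shaped region found via ray-intersection queries, and, per the abstract, handling Robin conditions amounts to changing how the size of those regions is selected.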
{"title":"Walkin’ Robin: Walk on Stars with Robin Boundary Conditions","authors":"Bailey Miller, Rohan Sawhney, Keenan Crane, Ioannis Gkioulekas","doi":"10.1145/3658153","DOIUrl":"https://doi.org/10.1145/3658153","url":null,"abstract":"\u0000 Numerous scientific and engineering applications require solutions to boundary value problems (BVPs) involving elliptic partial differential equations, such as the Laplace or Poisson equations, on geometrically intricate domains. We develop a Monte Carlo method for solving such BVPs with arbitrary first-order linear boundary conditions---Dirichlet, Neumann, and Robin. Our method directly generalizes the\u0000 walk on stars (WoSt)\u0000 algorithm, which previously tackled only the first two types of boundary conditions, with a few simple modifications. Unlike conventional numerical methods, WoSt does not need finite element meshing or global solves. Similar to Monte Carlo rendering, it instead computes pointwise solution estimates by simulating random walks along star-shaped regions inside the BVP domain, using efficient ray-intersection and distance queries. To ensure WoSt produces\u0000 bounded-variance\u0000 estimates in the presence of Robin boundary conditions, we show that it is sufficient to modify how WoSt selects the size of these star-shaped regions. Our generalized WoSt algorithm reduces estimation error by orders of magnitude relative to alternative grid-free methods such as the\u0000 walk on boundary\u0000 algorithm. We also develop\u0000 bidirectional\u0000 and\u0000 boundary value caching\u0000 strategies to further reduce estimation error. Our algorithm is trivial to parallelize, scales sublinearly with increasing geometric detail, and enables progressive and view-dependent evaluation.\u0000","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":null,"pages":null},"PeriodicalIF":7.8,"publicationDate":"2024-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141823777","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Across many scientific disciplines, the pursuit of ever higher grid resolutions leads to a severe scalability problem in scientific computing. Feature extraction is a commonly chosen approach to reduce the amount of information in dense fields down to geometric primitives that enable further quantitative analysis. Examples of common features are isolines, extremal lines, and vortex corelines. Due to the rising complexity of the observed phenomena, or in the event of discretization issues with the data, a straightforward application of textbook feature definitions is often insufficient. Thus, feature extraction from spatial data often requires substantial pre- or post-processing, either to clean up the results or to incorporate additional domain knowledge about the feature in question. Such separate pre- or post-processing not only leads to suboptimal and incomparable solutions, but also results in many specialized feature extraction algorithms arising in different application domains. In this paper, we establish a mathematical language that not only encompasses commonly used feature definitions but also provides a set of regularizers that can be applied across individual application domains. Using the language of variational calculus, we treat features as variational minimizers, which can be combined and regularized as needed. Our formulation not only encompasses existing feature definitions as special cases, but also opens the path to novel feature definitions. This work lays the foundations for many new research directions regarding formal definitions, data representations, and numerical extraction algorithms.
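As a hedged illustration of the "features as variational minimizers" viewpoint (the functional below is a toy example of our own, not a formulation taken from the paper), an isoline of a scalar field s at level c can be characterized as a curve γ minimizing a data term, to which a smoothness regularizer can be added and weighted:

```latex
E[\gamma] \;=\; \underbrace{\int_{\gamma} \bigl(s(\gamma(t)) - c\bigr)^{2}\,\mathrm{d}t}_{\text{feature definition (isoline)}}
\;+\; \lambda\, \underbrace{\int_{\gamma} \bigl\lVert \gamma''(t) \bigr\rVert^{2}\,\mathrm{d}t}_{\text{regularizer (smoothness)}}
```

In this reading, exchanging the data term swaps the feature type, while the regularizer can be reused across application domains.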
{"title":"Variational Feature Extraction in Scientific Visualization","authors":"Nico Daßler, Tobias Günther","doi":"10.1145/3658219","DOIUrl":"https://doi.org/10.1145/3658219","url":null,"abstract":"Across many scientific disciplines, the pursuit of even higher grid resolutions leads to a severe scalability problem in scientific computing. Feature extraction is a commonly chosen approach to reduce the amount of information from dense fields down to geometric primitives that further enable a quantitative analysis. Examples of common features are isolines, extremal lines, or vortex corelines. Due to the rising complexity of the observed phenomena, or in the event of discretization issues with the data, a straightforward application of textbook feature definitions is unfortunately insufficient. Thus, feature extraction from spatial data often requires substantial pre- or post-processing to either clean up the results or to include additional domain knowledge about the feature in question. Such a separate pre- or post-processing of features not only leads to suboptimal and incomparable solutions, it also results in many specialized feature extraction algorithms arising in the different application domains. In this paper, we establish a mathematical language that not only encompasses commonly used feature definitions, it also provides a set of regularizers that can be applied across the bounds of individual application domains. By using the language of variational calculus, we treat features as variational minimizers, which can be combined and regularized as needed. Our formulation not only encompasses existing feature definitions as special case, it also opens the path to novel feature definitions. This work lays the foundations for many new research directions regarding formal definitions, data representations, and numerical extraction algorithms.","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":null,"pages":null},"PeriodicalIF":7.8,"publicationDate":"2024-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141821925","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The generation of global illumination in real time has been a long-standing challenge in the graphics community, particularly in dynamic scenes with complex illumination. Recent neural rendering techniques have shown great promise by utilizing neural networks to represent the illumination of scenes and then decoding the final radiance. However, incorporating object parameters into the representation may limit their effectiveness in handling fully dynamic scenes. This work presents a neural rendering approach, dubbed LightFormer, that can generate realistic global illumination for fully dynamic scenes, including dynamic lighting, materials, cameras, and animated objects, in real time. Inspired by classic many-lights methods, the proposed approach focuses on the neural representation of light sources in the scene rather than the entire scene, leading to better overall generalizability. The neural prediction leverages virtual point lights and per-light shading cues. Specifically, two stages are explored. In the light encoding stage, each light generates a set of virtual point lights in the scene, which are then encoded into an implicit neural light representation, along with screen-space shading cues such as visibility. In the light gathering stage, a pixel-light attention mechanism composites the representations of all lights for each shading point. Given the geometry and material representation, together with the composited light representations, a lightweight neural network predicts the final radiance. Experimental results demonstrate that LightFormer yields plausible and realistic global illumination in fully dynamic scenes at real-time rates.
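A minimal sketch of what a pixel-light attention step could look like, assuming single-head scaled dot-product attention and toy feature dimensions. The projection matrices, shapes, and the numpy stand-in are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def pixel_light_attention(pixel_feat, light_feats, Wq, Wk, Wv):
    """Composite per-light representations for one shading point via
    single-head scaled dot-product attention.

    pixel_feat:  (d_p,)   screen-space feature of the shading point
    light_feats: (L, d_l) neural representation of each of the L lights
    Wq, Wk, Wv:  learned projections to a common dimension d (assumed)
    """
    q = pixel_feat @ Wq                    # (d,)
    k = light_feats @ Wk                   # (L, d)
    v = light_feats @ Wv                   # (L, d)
    scores = k @ q / np.sqrt(q.shape[0])   # (L,) affinity of each light
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()               # softmax over lights
    return weights @ v                     # (d,) composited light feature

# usage sketch with random weights: 8 lights, toy dimensions
rng = np.random.default_rng(0)
d_p, d_l, d, L = 16, 32, 24, 8
out = pixel_light_attention(rng.normal(size=d_p), rng.normal(size=(L, d_l)),
                            rng.normal(size=(d_p, d)), rng.normal(size=(d_l, d)),
                            rng.normal(size=(d_l, d)))
print(out.shape)   # (24,)
```

The composited feature would then be concatenated with geometry and material features before the final radiance prediction, following the two-stage description above.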
{"title":"LightFormer: Light-Oriented Global Neural Rendering in Dynamic Scene","authors":"Haocheng Ren, Yuchi Huo, Yifan Peng, Hongtao Sheng, Weidong Xue, Hongxiang Huang, Jingzhen Lan, Rui Wang, Hujun Bao","doi":"10.1145/3658229","DOIUrl":"https://doi.org/10.1145/3658229","url":null,"abstract":"\u0000 The generation of global illumination in real time has been a long-standing challenge in the graphics community, particularly in dynamic scenes with complex illumination. Recent neural rendering techniques have shown great promise by utilizing neural networks to represent the illumination of scenes and then decoding the final radiance. However, incorporating object parameters into the representation may limit their effectiveness in handling fully dynamic scenes. This work presents a neural rendering approach, dubbed\u0000 LightFormer\u0000 , that can generate realistic global illumination for fully dynamic scenes, including dynamic lighting, materials, cameras, and animated objects, in real time. Inspired by classic many-lights methods, the proposed approach focuses on the neural representation of light sources in the scene rather than the entire scene, leading to the overall better generalizability. The neural prediction is achieved by leveraging the virtual point lights and shading clues for each light. Specifically, two stages are explored. In the light encoding stage, each light generates a set of virtual point lights in the scene, which are then encoded into an implicit neural light representation, along with screen-space shading clues like visibility. In the light gathering stage, a pixel-light attention mechanism composites all light representations for each shading point. Given the geometry and material representation, in tandem with the composed light representations of all lights, a lightweight neural network predicts the final radiance. Experimental results demonstrate that the proposed LightFormer can yield reasonable and realistic global illumination in fully dynamic scenes with real-time performance.\u0000","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":null,"pages":null},"PeriodicalIF":7.8,"publicationDate":"2024-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141822292","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The rapid advancement of digital fashion and generative AI technology calls for an automated approach to transform digital sewing patterns into well-fitted garments on human avatars. Given a sewing pattern and its associated sewing relationships, the primary challenge is to establish an initial arrangement of sewing pieces that is free from folding and intersections. This setup enables a physics-based simulator to seamlessly stitch them into a digital garment, avoiding undesirable local minima. To achieve this, we harness AI classification, heuristics, and numerical optimization, resulting in a hybrid system that minimizes the need for user intervention when initializing garment pieces. The seeding process of our system trains a classification network to select seed pieces and then solves an optimization problem to determine their positions and shapes. Subsequently, an iterative selection-arrangement procedure automates the selection of pattern pieces and employs a phased initialization approach to mitigate the local minima associated with numerical optimization. Our experiments confirm the reliability, efficiency, and scalability of our system when handling intricate garments with multiple layers and numerous pieces. According to our findings, 68 percent of garments can be initialized with zero user intervention, while the remaining garments can be easily corrected through user operations.
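A hedged sketch of how the iterative selection-arrangement procedure could be organized. Every callable below is a stand-in with an assumed interface (the seed-classification network, the position/shape optimization, the sewing-relationship query, and the physics-based stitching), so this reflects only the control flow described above, not the paper's implementation.

```python
def initialize_garment(pieces, is_seed, arrange, sewn_to, simulate):
    """Iteratively select and arrange sewing pieces before physics-based stitching."""
    placed = {}                                      # piece -> optimized placement
    seeds = [p for p in pieces if is_seed(p)]        # classification network picks seed pieces
    placed.update(arrange(seeds, placed))            # optimize seed positions and shapes
    remaining = [p for p in pieces if p not in placed]
    while remaining:
        # next, arrange pieces that share a seam with something already placed
        batch = [p for p in remaining if sewn_to(p, placed)] or remaining[:1]
        placed.update(arrange(batch, placed))        # phased initialization vs. local minima
        remaining = [p for p in remaining if p not in placed]
    return simulate(placed)                          # stitch the arranged pieces into a garment

# usage sketch with trivial stand-ins: three pieces, "front" is the seed
pieces = ["front", "back", "sleeve"]
result = initialize_garment(
    pieces,
    is_seed=lambda p: p == "front",
    arrange=lambda batch, placed: {p: f"pose({p})" for p in batch},
    sewn_to=lambda p, placed: "front" in placed,
    simulate=lambda placed: placed)
print(result)
```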
{"title":"Automatic Digital Garment Initialization from Sewing Patterns","authors":"Chen Liu, Weiwei Xu, Yin Yang, Huamin Wang","doi":"10.1145/3658128","DOIUrl":"https://doi.org/10.1145/3658128","url":null,"abstract":"The rapid advancement of digital fashion and generative AI technology calls for an automated approach to transform digital sewing patterns into well-fitted garments on human avatars. When given a sewing pattern with its associated sewing relationships, the primary challenge is to establish an initial arrangement of sewing pieces that is free from folding and intersections. This setup enables a physics-based simulator to seamlessly stitch them into a digital garment, avoiding undesirable local minima. To achieve this, we harness AI classification, heuristics, and numerical optimization. This has led to the development of an innovative hybrid system that minimizes the need for user intervention in the initialization of garment pieces. The seeding process of our system involves the training of a classification network for selecting seed pieces, followed by solving an optimization problem to determine their positions and shapes. Subsequently, an iterative selection-arrangement procedure automates the selection of pattern pieces and employs a phased initialization approach to mitigate local minima associated with numerical optimization. Our experiments confirm the reliability, efficiency, and scalability of our system when handling intricate garments with multiple layers and numerous pieces. According to our findings, 68 percent of garments can be initialized with zero user intervention, while the remaining garments can be easily corrected through user operations.","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":null,"pages":null},"PeriodicalIF":7.8,"publicationDate":"2024-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141822896","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fujia Su, Bingxuan Li, Qingyang Yin, Yanchen Zhang, Sheng Li
Robust light transport algorithms, particularly bidirectional path tracing (BDPT), face significant challenges when dealing with paths that involve specular or highly glossy interactions. BDPT constructs full paths by connecting sub-paths traced separately from the light source and the camera. However, connections through vertices on specular and glossy surfaces with narrow-lobed BSDFs remain difficult to sample, as such surfaces severely constrain the feasible connection directions. To address this issue, we propose a novel approach, called proxy sampling, that enables efficient sub-path connection for these challenging paths. When a low-contribution specular/glossy connection occurs, we drop the problematic vertex adjacent to the specular/glossy vertex from the original path, then retrace an alternative sub-path as a proxy to complete the path. The newly constructed path ensures that the connection respects the narrow BSDF lobe of the specular/glossy surface. The key to our method is unbiased estimation of the reciprocal of the sampling probability density function (PDF), which ensures unbiased rendering. We derive this reciprocal estimation method and provide an efficiency-optimized setting for sampling and connection. Our method provides a robust tool for substituting problematic paths with favorable alternatives while ensuring unbiasedness. We validate this approach in probabilistic-connections BDPT for addressing difficult specular-involved paths. Experimental results demonstrate the effectiveness and efficiency of our approach, showcasing high-performance rendering across diverse settings.
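To illustrate why a PDF reciprocal can be estimated without bias even when the PDF itself is never evaluated, here is a minimal sketch of one classical construction, a geometric trial count. It conveys the general idea only and is not the paper's efficiency-optimized estimator.

```python
import numpy as np

def reciprocal_pdf_estimate(trial, max_trials=100000):
    """If trial() succeeds independently with (unknown) probability p on each
    call, the number of trials until the first success is geometric with mean
    1/p, hence an unbiased estimator of 1/p. This illustrates estimating a
    PDF reciprocal without evaluating the PDF itself.
    """
    n = 1
    while not trial() and n < max_trials:
        n += 1
    return float(n)

# usage sketch: estimate 1/p for p = 0.2 (true value 5.0)
rng = np.random.default_rng(1)
p = 0.2
est = np.mean([reciprocal_pdf_estimate(lambda: rng.random() < p) for _ in range(5000)])
print(est)   # close to 1/p = 5.0
```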
{"title":"Proxy Tracing: Unbiased Reciprocal Estimation for Optimized Sampling in BDPT","authors":"Fujia Su, Bingxuan Li, Qingyang Yin, Yanchen Zhang, Sheng Li","doi":"10.1145/3658216","DOIUrl":"https://doi.org/10.1145/3658216","url":null,"abstract":"\u0000 Robust light transport algorithms, particularly bidirectional path tracing (BDPT), face significant challenges when dealing with specular or highly glossy involved paths. BDPT constructs the full path by connecting sub-paths traced individually from the light source and camera. However, it remains difficult to sample by connecting vertices on specular and glossy surfaces with narrow-lobed BSDF, as it poses severe constraints on sampling in the feasible direction. To address this issue, we propose a novel approach, called\u0000 proxy sampling\u0000 , that enables efficient sub-path connection of these challenging paths. When a low-contribution specular/glossy connection occurs, we drop out the problematic neighboring vertex next to this specular/glossy vertex from the original path, then retrace an alternative sub-path as a proxy to complement this incomplete path. This newly constructed complete path ensures that the connection adheres to the constraint of the narrow lobe within the BSDF of the specular/glossy surface. Unbiased reciprocal estimation is the key to our method to obtain a probability density function (PDF) reciprocal to ensure unbiased rendering. We derive the reciprocal estimation method and provide an efficiency-optimized setting for efficient sampling and connection. Our method provides a robust tool for substituting problematic paths with favorable alternatives while ensuring unbiasedness. We validate this approach in the probabilistic connections BDPT for addressing specular-involved difficult paths. Experimental results have proved the effectiveness and efficiency of our approach, showcasing high-performance rendering capabilities across diverse settings.\u0000","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":null,"pages":null},"PeriodicalIF":7.8,"publicationDate":"2024-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141823724","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Metropolis Light Transport (MLT) is a global illumination algorithm well known for rendering challenging scenes with intricate light paths. However, MLT methods tend to produce unpredictable correlation artifacts in images, which can introduce visual inconsistencies in animation rendering. This drawback also makes it challenging to denoise MLT renderings while maintaining temporal stability. We tackle this issue with modern learning-based methods and build a sequence denoiser that combines recurrent connections with a vision transformer architecture. We demonstrate that our denoiser consistently improves the quality and temporal stability of MLT renderings with difficult light paths. Our method is efficient and scalable for complex scene renderings that require high sample counts.
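A minimal sketch of the frame-recurrent structure such a sequence denoiser typically has: each frame is processed together with the previous output and a carried hidden state, which is how temporal stability is usually encouraged. The network interface and the trivial stand-in below are illustrative assumptions, not the paper's transformer-based architecture.

```python
import numpy as np

def denoise_sequence(noisy_frames, denoiser, hidden_init):
    """Denoise a sequence of frames, feeding each call the previous output
    and a recurrent hidden state."""
    outputs, prev_out, hidden = [], np.zeros_like(noisy_frames[0]), hidden_init
    for frame in noisy_frames:
        out, hidden = denoiser(frame, prev_out, hidden)
        outputs.append(out)
        prev_out = out
    return outputs

# usage sketch with a trivial stand-in "network" (exponential moving average)
def toy_denoiser(frame, prev_out, hidden):
    out = 0.7 * prev_out + 0.3 * frame     # reuse the previous output for stability
    return out, hidden

frames = [np.random.default_rng(i).normal(size=(4, 4)) for i in range(5)]
denoised = denoise_sequence(frames, toy_denoiser, hidden_init=None)
print(len(denoised), denoised[0].shape)   # 5 (4, 4)
```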
{"title":"Temporally Stable Metropolis Light Transport Denoising using Recurrent Transformer Blocks","authors":"Chuhao Chen, Yuze He, Tzu-Mao Li","doi":"10.1145/3658218","DOIUrl":"https://doi.org/10.1145/3658218","url":null,"abstract":"Metropolis Light Transport (MLT) is a global illumination algorithm that is well-known for rendering challenging scenes with intricate light paths. However, MLT methods tend to produce unpredictable correlation artifacts in images, which can introduce visual inconsistencies for animation rendering. This drawback also makes it challenging to denoise MLT renderings while maintaining temporal stability. We tackle this issue with modern learning-based methods and build a sequence denoiser combining the recurrent connections with the cutting-edge vision transformer architecture. We demonstrate that our sophisticated denoiser can consistently improve the quality and temporal stability of MLT renderings with difficult light paths. Our method is efficient and scalable for complex scene renderings that require high sample counts.","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":null,"pages":null},"PeriodicalIF":7.8,"publicationDate":"2024-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141821283","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents a practical and general approach for computing barycentric coordinates through stochastic sampling. Our key insight is a reformulation of the kernel integral defining barycentric coordinates into a weighted least-squares minimization that enables Monte Carlo integration without sacrificing linear precision. Our method can thus compute barycentric coordinates directly at the points of interest, both inside and outside the cage, using just proximity queries to the cage such as closest points and ray intersections. As a result, we can evaluate barycentric coordinates for a large variety of cage representations (from quadrangulated surface meshes to parametric curves) seamlessly, bypassing any volumetric discretization or custom solves. To address the archetypal noise induced by sample-based estimates, we also introduce a denoising scheme tailored to barycentric coordinates. We demonstrate the efficiency and flexibility of our formulation by implementing a stochastic generation of harmonic coordinates, mean-value coordinates, and positive mean-value coordinates.
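As a hedged illustration of how a least-squares viewpoint yields coordinates with linear precision from sampled cage data, the sketch below minimizes a weighted quadratic energy subject to partition-of-unity and linear-reproduction constraints. The specific energy, and the notion that the weights would come from Monte Carlo kernel estimates, are our assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def weighted_coordinates(x, cage_pts, weights):
    """Generalized coordinates with linear precision: minimize a weighted
    quadratic energy on the coordinates subject to partition of unity and
    linear reproduction, solved via Lagrange multipliers.

    x:        (d,)   query point
    cage_pts: (n, d) sampled points on the cage
    weights:  (n,)   positive per-sample weights (e.g. kernel estimates)
    """
    n, d = cage_pts.shape
    W = np.diag(weights)
    # constraints: sum_i c_i = 1 (partition of unity), sum_i c_i v_i = x (linear precision)
    A = np.vstack([np.ones(n), cage_pts.T])          # (d+1, n)
    b = np.concatenate([[1.0], x])                   # (d+1,)
    lam = np.linalg.solve(A @ W @ A.T, b)            # Lagrange multipliers
    return W @ A.T @ lam                             # coordinates c, shape (n,)

# usage sketch: a square cage in 2D; the coordinates reproduce x exactly
cage = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
c = weighted_coordinates(np.array([0.3, 0.6]), cage, np.ones(4))
print(c.sum(), c @ cage)   # 1.0 and [0.3 0.6] (linear precision)
```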
{"title":"Stochastic Computation of Barycentric Coordinates","authors":"Fernando de Goes, Mathieu Desbrun","doi":"10.1145/3658131","DOIUrl":"https://doi.org/10.1145/3658131","url":null,"abstract":"This paper presents a practical and general approach for computing barycentric coordinates through stochastic sampling. Our key insight is a reformulation of the kernel integral defining barycentric coordinates into a weighted least-squares minimization that enables Monte Carlo integration without sacrificing linear precision. Our method can thus compute barycentric coordinates directly at the points of interest, both inside and outside the cage, using just proximity queries to the cage such as closest points and ray intersections. As a result, we can evaluate barycentric coordinates for a large variety of cage representations (from quadrangulated surface meshes to parametric curves) seamlessly, bypassing any volumetric discretization or custom solves. To address the archetypal noise induced by sample-based estimates, we also introduce a denoising scheme tailored to barycentric coordinates. We demonstrate the efficiency and flexibility of our formulation by implementing a stochastic generation of harmonic coordinates, mean-value coordinates, and positive mean-value coordinates.","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":null,"pages":null},"PeriodicalIF":7.8,"publicationDate":"2024-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141821729","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sebastian Starke, Paul Starke, Nicky He, Taku Komura, Yuting Ye
Translating motions from a real user onto a virtual embodied avatar is a key challenge for character animation in the metaverse. In this work, we present a novel generative framework that maps a set of sparse sensor signals to full-body avatar motion in real time while faithfully preserving the motion context of the user. In contrast to existing techniques that require training a motion prior and its mapping from control to motion separately, our framework learns the motion manifold and how to sample from it simultaneously, in an end-to-end manner. To achieve that, we introduce a technique called codebook matching, which matches the probability distribution between two categorical codebooks, one for the inputs and one for the outputs, when synthesizing character motions. We demonstrate that this technique can successfully handle ambiguity in motion generation and produce high-quality character controllers from unstructured motion capture data. Our method is especially useful for interactive applications like virtual reality or video games, where high accuracy and responsiveness are needed.
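A hedged sketch of what codebook matching could look like at inference time: the sparse sensor features yield a categorical distribution over codes, an index is drawn from it, and the corresponding entry of an output codebook drives motion decoding. The layer shapes, the single linear encoder, and sampling rather than a soft expectation are assumptions for illustration, not the paper's training scheme.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def codebook_matching_step(sensor_feat, W_enc, output_codebook, rng):
    """Map sparse sensor features to a categorical distribution over K codes,
    then pick the matching entry of the output codebook.

    sensor_feat:     (d_in,)    features from sparse sensors
    W_enc:           (d_in, K)  projection to logits over K categories (assumed)
    output_codebook: (K, d_out) learned codes consumed by the motion decoder
    """
    probs = softmax(sensor_feat @ W_enc)         # categorical distribution over codes
    idx = rng.choice(len(probs), p=probs)        # handle ambiguity by sampling a code
    return output_codebook[idx], probs

# usage sketch with random parameters: 64 codes, toy dimensions
rng = np.random.default_rng(0)
d_in, d_out, K = 24, 48, 64
code, probs = codebook_matching_step(rng.normal(size=d_in), rng.normal(size=(d_in, K)),
                                     rng.normal(size=(K, d_out)), rng)
print(code.shape, probs.sum())   # (48,) and probabilities sum to 1 (up to floating point)
```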
{"title":"Categorical Codebook Matching for Embodied Character Controllers","authors":"Sebastian Starke, Paul Starke, Nicky He, Taku Komura, Yuting Ye","doi":"10.1145/3658209","DOIUrl":"https://doi.org/10.1145/3658209","url":null,"abstract":"Translating motions from a real user onto a virtual embodied avatar is a key challenge for character animation in the metaverse. In this work, we present a novel generative framework that enables mapping from a set of sparse sensor signals to a full body avatar motion in real-time while faithfully preserving the motion context of the user. In contrast to existing techniques that require training a motion prior and its mapping from control to motion separately, our framework is able to learn the motion manifold as well as how to sample from it at the same time in an end-to-end manner. To achieve that, we introduce a technique called codebook matching which matches the probability distribution between two categorical codebooks for the inputs and outputs for synthesizing the character motions. We demonstrate this technique can successfully handle ambiguity in motion generation and produce high quality character controllers from unstructured motion capture data. Our method is especially useful for interactive applications like virtual reality or video games where high accuracy and responsiveness are needed.","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":null,"pages":null},"PeriodicalIF":7.8,"publicationDate":"2024-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141823378","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Learning from multi-view images using neural implicit signed distance functions shows impressive performance on 3D reconstruction of opaque objects. However, existing methods struggle to reconstruct accurate geometry when applied to translucent objects, due to the non-negligible bias in their rendering function. To address this inaccuracy, we reparameterize the density function of the neural radiance field by incorporating an estimated constant extinction coefficient. This modification forms the basis of our framework for high-fidelity surface reconstruction and novel-view synthesis of translucent objects. Our framework consists of two stages. In the reconstruction stage, we introduce a novel weight function to achieve accurate surface geometry reconstruction. In the second stage, once the geometry is recovered, we learn the distinct scattering properties of the participating media to enhance rendering. We have built a comprehensive dataset comprising both synthetic and real translucent objects to conduct extensive experiments. Experiments reveal that our method outperforms existing approaches in terms of both reconstruction and novel-view synthesis.
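A hedged sketch of how a constant extinction coefficient could enter standard volume-rendering weights: the density is factored as sigma(x) = sigma_t * alpha(x), with sigma_t an estimated constant and alpha(x) a learned field in [0, 1]. This factorization and the slab test case are our illustrative reading of the abstract, not the paper's exact formulation.

```python
import numpy as np

def render_weights(alpha_field, ts, sigma_t):
    """Compute volume-rendering compositing weights along one ray when the
    density is reparameterized with a constant extinction coefficient.

    alpha_field: callable mapping sample positions t -> alpha in [0, 1]
    ts:          (N+1,) sample locations along the ray
    sigma_t:     estimated constant extinction coefficient
    """
    deltas = np.diff(ts)                               # segment lengths
    sigma = sigma_t * alpha_field(ts[:-1])             # reparameterized density
    alphas = 1.0 - np.exp(-sigma * deltas)             # per-segment opacity
    trans = np.concatenate([[1.0], np.cumprod(1.0 - alphas)[:-1]])  # transmittance
    return trans * alphas                              # compositing weights

# usage sketch: a translucent slab between t = 0.4 and t = 0.6
ts = np.linspace(0.0, 1.0, 65)
w = render_weights(lambda t: ((t > 0.4) & (t < 0.6)).astype(float), ts, sigma_t=5.0)
print(w.sum())   # < 1: part of the light passes through the translucent slab
```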
{"title":"NeuralTO: Neural Reconstruction and View Synthesis of Translucent Objects","authors":"Yuxiang Cai, Jiaxiong Qiu, Zhong Li, Bo-Ning Ren","doi":"10.1145/3658186","DOIUrl":"https://doi.org/10.1145/3658186","url":null,"abstract":"Learning from multi-view images using neural implicit signed distance functions shows impressive performance on 3D Reconstruction of opaque objects. However, existing methods struggle to reconstruct accurate geometry when applied to translucent objects due to the non-negligible bias in their rendering function. To address the inaccuracies in the existing model, we have reparameterized the density function of the neural radiance field by incorporating an estimated constant extinction coefficient. This modification forms the basis of our innovative framework, which is geared towards highfidelity surface reconstruction and the novel-view synthesis of translucent objects. Our framework contains two stages. In the reconstruction stage, we introduce a novel weight function to achieve accurate surface geometry reconstruction. Following the recovery of geometry, the second phase involves learning the distinct scattering properties of the participating media to enhance rendering. A comprehensive dataset, comprising both synthetic and real translucent objects, has been built for conducting extensive experiments. Experiments reveal that our method outperforms existing approaches in terms of reconstruction and novel-view synthesis.","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":null,"pages":null},"PeriodicalIF":7.8,"publicationDate":"2024-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141823712","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Jorge Alejandro Amador Herrera, Jonathan Klein, Daoming Liu, Wojciech Palubicki, S. Pirk, D. L. Michels
Cyclones are large-scale phenomena that result from complex heat and water transfer processes in the atmosphere, as well as from the interaction of multiple hydrometeors, i.e., water and ice particles. When cyclones make landfall, they are considered natural disasters and spawn dread and awe alike. We propose a physically-based approach to describe the 3D development of cyclones in a visually convincing and physically plausible manner. Our approach allows us to capture large-scale heat and water continuity, turbulent microphysical dynamics of hydrometeors, and mesoscale cyclonic processes within the planetary boundary layer. Modeling these processes enables us to simulate multiple hurricane and tornado phenomena. We evaluate our simulations quantitatively by comparing to real data from storm soundings and observations of hurricane landfall from climatology research. Additionally, qualitative comparisons to previous methods are performed to validate the different parts of our scheme. In summary, our model simulates cyclogenesis in a comprehensive way that allows us to interactively render animations of some of the most complex weather events.
{"title":"Cyclogenesis: Simulating Hurricanes and Tornadoes","authors":"Jorge Alejandro Amador Herrera, Jonathan Klein, Daoming Liu, Wojciech Palubicki, S. Pirk, D. L. Michels","doi":"10.1145/3658149","DOIUrl":"https://doi.org/10.1145/3658149","url":null,"abstract":"\u0000 Cyclones are large-scale phenomena that result from complex heat and water transfer processes in the atmosphere, as well as from the interaction of multiple\u0000 hydrometeors\u0000 , i.e., water and ice particles. When cyclones make landfall, they are considered natural disasters and spawn dread and awe alike. We propose a physically-based approach to describe the 3D development of cyclones in a visually convincing and physically plausible manner. Our approach allows us to capture large-scale heat and water continuity, turbulent microphysical dynamics of hydrometeors, and mesoscale cyclonic processes within the planetary boundary layer. Modeling these processes enables us to simulate multiple hurricane and tornado phenomena. We evaluate our simulations quantitatively by comparing to real data from storm soundings and observations of hurricane landfall from climatology research. Additionally, qualitative comparisons to previous methods are performed to validate the different parts of our scheme. In summary, our model simulates cyclogenesis in a comprehensive way that allows us to interactively render animations of some of the most complex weather events.\u0000","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":null,"pages":null},"PeriodicalIF":7.8,"publicationDate":"2024-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141824231","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}