Spin-Weighted Spherical Harmonics for Polarized Light Transport
Shinyoung Yi, Donggun Kim, Jiwoong Na, Xin Tong, Min H. Kim
The objective of polarization rendering is to simulate the interaction of light with materials exhibiting polarization-dependent behavior. However, integrating polarization into rendering is challenging and increases computational costs significantly. The primary difficulty lies in efficiently modeling and computing the complex reflection phenomena associated with polarized light. In particular, frequency-domain analysis, essential for efficient environment lighting and for storing complex light interactions, has been lacking. To simulate and reproduce polarized light interactions efficiently with frequency-domain techniques, we address the challenge of maintaining continuity of polarized light transport, represented by Stokes vectors, over the angular domain. Conventional spherical harmonics cannot handle continuity and rotation invariance for Stokes vectors. To overcome this, we develop a new method, polarized spherical harmonics (PSH), based on the theory of spin-weighted spherical harmonics; it provides a rotation-invariant representation of Stokes vector fields. Furthermore, we introduce frequency-domain formulations of the polarized rendering equation and of spherical convolution based on PSH. We first define spherical convolution on Stokes vector fields in the angular domain; in the frequency domain, it then reduces polarized light transport to nearly an entry-wise product. This frequency-domain formulation, including spherical convolution, leads to the first real-time polarization rendering technique under polarized environment illumination, named precomputed polarized radiance transfer, built on our polarized spherical harmonics. Results demonstrate that our method accurately simulates and reproduces polarized light interactions in complex reflection phenomena, including polarized environment illumination and soft shadows.
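For orientation, the scalar (unpolarized) analogue of this convolution result is the classical spherical-harmonics convolution theorem; per the abstract, the PSH formulation generalizes it to spin-weighted coefficients of Stokes vector fields. A minimal statement, in our notation rather than the paper's:

```latex
% Classical SH convolution theorem: for a function f on the sphere with
% coefficients f_{lm}, and a zonal (rotationally symmetric) kernel k with
% coefficients k_{l0}, convolution becomes an entry-wise product after a
% per-band rescaling:
(k \ast f)_{\ell m} \;=\; \sqrt{\frac{4\pi}{2\ell + 1}}\; k_{\ell 0}\, f_{\ell m}.
```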
{"title":"Spin-Weighted Spherical Harmonics for Polarized Light Transport","authors":"Shinyoung Yi, Donggun Kim, Jiwoong Na, Xin Tong, Min H. Kim","doi":"10.1145/3658139","DOIUrl":"https://doi.org/10.1145/3658139","url":null,"abstract":"The objective of polarization rendering is to simulate the interaction of light with materials exhibiting polarization-dependent behavior. However, integrating polarization into rendering is challenging and increases computational costs significantly. The primary difficulty lies in efficiently modeling and computing the complex reflection phenomena associated with polarized light. Specifically, frequency-domain analysis, essential for efficient environment lighting and storage of complex light interactions, is lacking. To efficiently simulate and reproduce polarized light interactions using frequency-domain techniques, we address the challenge of maintaining continuity in polarized light transport represented by Stokes vectors within angular domains. The conventional spherical harmonics method cannot effectively handle continuity and rotation invariance for Stokes vectors. To overcome this, we develop a new method called polarized spherical harmonics (PSH) based on the spin-weighted spherical harmonics theory. Our method provides a rotation-invariant representation of Stokes vector fields. Furthermore, we introduce frequency domain formulations of polarized rendering equations and spherical convolution based on PSH. We first define spherical convolution on Stokes vector fields in the angular domain, and it also provides efficient computation of polarized light transport, nearly on an entry-wise product in the frequency domain. Our frequency domain formulation, including spherical convolution, led to the development of the first real-time polarization rendering technique under polarized environmental illumination, named precomputed polarized radiance transfer, using our polarized spherical harmonics. Results demonstrate that our method can effectively and accurately simulate and reproduce polarized light interactions in complex reflection phenomena, including polarized environmental illumination and soft shadows.","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":null,"pages":null},"PeriodicalIF":7.8,"publicationDate":"2024-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141821116","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Target-Aware Image Denoising for Inverse Monte Carlo Rendering
Jeongmin Gu, Jonghee Back, Sung-Eui Yoon, Bochang Moon
Physically based differentiable rendering allows an accurate light transport simulation to be differentiated with respect to its input, i.e., the scene parameters, enabling scene parameters to be inferred from target images, e.g., photographs or synthetic images, via iterative optimization. However, this inverse Monte Carlo rendering inherits the fundamental problem of Monte Carlo integration, i.e., noise, which slows optimization convergence. An appealing remedy is to exploit an image denoiser to improve convergence. Unfortunately, directly adopting existing image denoisers designed for ordinary rendering scenarios can drive the optimization into undesirable local minima due to denoising bias. This motivates us to formulate a new image denoiser specialized for inverse rendering. Unlike existing denoisers, ours conducts denoising by considering the target images, i.e., information specific to inverse rendering, and determines its denoising weights via a linear regression technique using the target. We demonstrate through a diverse set of tests that our denoiser enables inverse rendering optimization to infer scene parameters robustly.
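As a concrete toy version of the idea, the sketch below performs a guided-filter-style local linear regression of the noisy rendering against the target image. The paper's actual weight construction is more sophisticated; the window size, regularizer, and function name here are our own illustrative assumptions:

```python
# Toy sketch: regression-based denoising with the target image as a guide.
# Not the paper's estimator -- a guided-filter-style stand-in for intuition.
import numpy as np
from scipy.ndimage import uniform_filter

def target_aware_denoise(noisy, target, size=7, eps=1e-4):
    """Per pixel, fit noisy ~= a * target + b over a local window (grayscale)."""
    mean_t  = uniform_filter(target, size)
    mean_n  = uniform_filter(noisy, size)
    corr_tt = uniform_filter(target * target, size)
    corr_tn = uniform_filter(target * noisy, size)
    var_t   = corr_tt - mean_t * mean_t            # local variance of target
    cov_tn  = corr_tn - mean_t * mean_n            # local covariance target/noisy
    a = cov_tn / (var_t + eps)                     # regression slope
    b = mean_n - a * mean_t                        # regression intercept
    # Smooth the coefficients before reconstruction, as in the guided filter.
    return uniform_filter(a, size) * target + uniform_filter(b, size)
```

Because the fit is anchored to the target, the output stays consistent with the very information the optimizer is matching against, which is the intuition behind target awareness.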
{"title":"Target-Aware Image Denoising for Inverse Monte Carlo Rendering","authors":"Jeongmin Gu, Jonghee Back, Sung-Eui Yoon, Bochang Moon","doi":"10.1145/3658182","DOIUrl":"https://doi.org/10.1145/3658182","url":null,"abstract":"Physically based differentiable rendering allows an accurate light transport simulation to be differentiated with respect to the rendering input, i.e., scene parameters, and it enables inferring scene parameters from target images, e.g., photos or synthetic images, via an iterative optimization. However, this inverse Monte Carlo rendering inherits the fundamental problem of the Monte Carlo integration, i.e., noise, resulting in a slow optimization convergence. An appealing approach to addressing such noise is exploiting an image denoiser to improve optimization convergence. Unfortunately, the direct adoption of existing image denoisers designed for ordinary rendering scenarios can drive the optimization into undesirable local minima due to denoising bias. It motivates us to reformulate a new image denoiser specialized for inverse rendering. Unlike existing image denoisers, we conduct our denoising by considering the target images, i.e., specific information in inverse rendering. For our target-aware denoising, we determine our denoising weights via a linear regression technique using the target. We demonstrate that our denoiser enables inverse rendering optimization to infer scene parameters robustly through a diverse set of tests.","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":null,"pages":null},"PeriodicalIF":7.8,"publicationDate":"2024-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141821172","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Repulsive Shells
Josua Sassen, Henrik Schumacher, M. Rumpf, Keenan Crane
This paper develops a shape space framework for collision-aware geometric modeling, where basic geometric operations automatically avoid inter-penetration. Shape spaces are a powerful tool for surface modeling, shape analysis, nonrigid motion planning, and animation, but past formulations permit nonphysical intersections. Our framework augments an existing shape space with a repulsive energy such that collision avoidance becomes a first-class property, encoded in the Riemannian metric itself. In turn, tasks like intersection-free shape interpolation or motion extrapolation amount to simply computing geodesic paths via standard numerical algorithms. To make optimization practical, we develop an adaptive collision penalty that prevents mesh self-intersection and converges to a meaningful limit energy under refinement. The final algorithms apply to any category of shape and require no dataset of examples, training, rigging, or other prior information. For instance, to interpolate between two shapes we need only a single pair of meshes with the same connectivity. We evaluate our method on a variety of challenging examples from modeling and animation.
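A common choice of repulsive energy in this line of work is the tangent-point energy; that Repulsive Shells uses exactly this kernel is our assumption, and the notation below is ours:

```latex
% Tangent-point energy of a surface S with unit normal n: point pairs that
% are close in space but far apart along the surface are penalized, with
% suitable exponents alpha and beta controlling the strength of repulsion.
\mathcal{E}_{\mathrm{rep}}(S) \;=\; \iint_{S \times S}
  \frac{\big|\langle n(x),\, x - y \rangle\big|^{\alpha}}{|x - y|^{\beta}}
  \; dA(x)\, dA(y).
```

Baking such an energy into the metric means path length blows up as a path approaches self-contact, so geodesics stay intersection-free by construction.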
{"title":"Repulsive Shells","authors":"Josua Sassen, Henrik Schumacher, M. Rumpf, Keenan Crane","doi":"10.1145/3658174","DOIUrl":"https://doi.org/10.1145/3658174","url":null,"abstract":"This paper develops a shape space framework for collision-aware geometric modeling, where basic geometric operations automatically avoid inter-penetration. Shape spaces are a powerful tool for surface modeling, shape analysis, nonrigid motion planning, and animation, but past formulations permit nonphysical intersections. Our framework augments an existing shape space using a repulsive energy such that collision avoidance becomes a first-class property, encoded in the Riemannian metric itself. In turn, tasks like intersection-free shape interpolation or motion extrapolation amount to simply computing geodesic paths via standard numerical algorithms. To make optimization practical, we develop an adaptive collision penalty that prevents mesh self-intersection, and converges to a meaningful limit energy under refinement. The final algorithms apply to any category of shape, and do not require a dataset of examples, training, rigging, nor any other prior information. For instance, to interpolate between two shapes we need only a single pair of meshes with the same connectivity. We evaluate our method on a variety of challenging examples from modeling and animation.","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":null,"pages":null},"PeriodicalIF":7.8,"publicationDate":"2024-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141821321","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
PEA-PODs: Perceptual Evaluation of Algorithms for Power Optimization in XR Displays
Kenneth Chen, Thomas Wan, Nathan Matsuda, Ajit Ninan, Alexandre Chapiro, Qi Sun
Display power consumption is an emerging concern for untethered devices. This goes double for augmented and virtual reality (XR) displays, which target high refresh rates and high resolutions while conforming to an ergonomically light form factor. A number of image mapping techniques have been proposed to extend battery life. However, there is currently no comprehensive quantitative understanding of how the power savings provided by these methods compare against their impact on visual quality. We set out to answer this question. To this end, we present a perceptual evaluation of algorithms (PEA) for power optimization in XR displays (PODs). Consolidating a portfolio of six power-saving display mapping approaches, we begin by performing a large-scale perceptual study to understand the impact of each method on perceived quality in the wild. This yields a unified quality score for each technique, scaled in just-objectionable-difference (JOD) units. In parallel, each technique is analyzed using hardware-accurate power models. The resulting JOD-to-milliwatt transfer function provides a first-of-its-kind look into the tradeoffs offered by display mapping techniques and can be directly employed to make architectural decisions about power budgets for XR displays. Finally, we leverage our study data and power models to address important display power applications such as the choice of display primaries, the power implications of eye tracking, and more.
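To make the power side concrete, here is a minimal emissive-display power model of the kind such analyses typically build on. The coefficients, baseline, and gamma below are placeholder assumptions, not the paper's hardware-calibrated models:

```python
# Toy emissive-display (e.g., OLED-like) power model: per-channel linear-light
# contributions plus a fixed baseline. All constants are illustrative.
import numpy as np

def display_power_mw(rgb, coeffs=(1.0, 0.8, 1.9), baseline_mw=50.0):
    """rgb: HxWx3 array in [0,1], gamma-encoded (sRGB-like, gamma ~2.2)."""
    linear = np.clip(rgb, 0.0, 1.0) ** 2.2         # undo display gamma
    per_channel = linear.mean(axis=(0, 1))         # average drive per channel
    return baseline_mw + 1000.0 * float(per_channel @ np.array(coeffs))
```

Evaluating such a model on an image before and after a display-mapping technique gives the milliwatt half of one JOD-to-milliwatt tradeoff point.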
{"title":"PEA-PODs: Perceptual Evaluation of Algorithms for Power Optimization in XR Displays","authors":"Kenneth Chen, Thomas Wan, Nathan Matsuda, Ajit Ninan, Alexandre Chapiro, Qi Sun","doi":"10.1145/3658126","DOIUrl":"https://doi.org/10.1145/3658126","url":null,"abstract":"Display power consumption is an emerging concern for untethered devices. This goes double for augmented and virtual extended reality (XR) displays, which target high refresh rates and high resolutions while conforming to an ergonomically light form factor. A number of image mapping techniques have been proposed to extend battery usage. However, there is currently no comprehensive quantitative understanding of how the power savings provided by these methods compare to their impact on visual quality. We set out to answer this question.\u0000 To this end, we present a perceptual evaluation of algorithms (PEA) for power optimization in XR displays (PODs). Consolidating a portfolio of six power-saving display mapping approaches, we begin by performing a large-scale perceptual study to understand the impact of each method on perceived quality in the wild. This results in a unified quality score for each technique, scaled in just-objectionable-difference (JOD) units. In parallel, each technique is analyzed using hardware-accurate power models.\u0000 \u0000 The resulting JOD-to-Milliwatt transfer function provides a first-of-its-kind look into tradeoffs offered by display mapping techniques, and can be directly employed to make architectural decisions for power budgets on XR displays. Finally, we leverage our study data and power models to address important display power applications like the choice of display primary, power implications of eye tracking, and more\u0000 1\u0000 .\u0000","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":null,"pages":null},"PeriodicalIF":7.8,"publicationDate":"2024-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141821640","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Efficient Debris-flow Simulation for Steep Terrain Erosion
Aryamaan Jain, Bedrich Benes, Guillaume Cordonnier
Erosion simulation is a common approach to generating and authoring mountainous terrains. While water is considered the primary erosion factor, its simulation fails to capture the steep slopes near ridges. In these low-drainage areas, erosion is often approximated by slope-reducing erosion, which yields unrealistically uniform slopes. Geomorphologists have observed, however, that another process dominates low-drainage areas: erosion by debris flow, a mixture of mud and rocks triggered by strong climatic events. We propose a new method that captures the interactions between debris flow and fluvial erosion, thanks to a new mathematical formulation of debris-flow erosion derived from geomorphology and a unified GPU algorithm for erosion and deposition. In particular, we observe that sediment and debris deposition tend to intersect river paths, which motivates the design of a new, approximate GPU flow-routing algorithm that estimates the water path out of these newly formed depressions. We demonstrate that debris flow carves distinct patterns in the form of erosive scars on steep slopes and cones of deposited debris competing with fluvial erosion downstream.
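For context, the sketch below implements the classic D8 steepest-descent flow accumulation that such simulators start from. It is not the paper's new routing algorithm; in particular it deliberately lets flow stop in depressions, the exact failure case the paper's routing addresses:

```python
# Classic D8 flow accumulation on a height field (CPU baseline, not the
# paper's GPU algorithm). Flow stopping in pits is the known weakness here.
import numpy as np

def d8_flow_accumulation(h):
    H, W = h.shape
    order = np.argsort(h, axis=None)[::-1]          # process from high to low
    acc = np.ones_like(h)                           # each cell contributes rain
    for idx in order:
        i, j = divmod(idx, W)
        best, bi, bj = 0.0, -1, -1
        for di in (-1, 0, 1):                       # scan the 8 neighbors
            for dj in (-1, 0, 1):
                ni, nj = i + di, j + dj
                if (di or dj) and 0 <= ni < H and 0 <= nj < W:
                    drop = (h[i, j] - h[ni, nj]) / np.hypot(di, dj)
                    if drop > best:
                        best, bi, bj = drop, ni, nj
        if bi >= 0:
            acc[bi, bj] += acc[i, j]                # route all flow downhill
    return acc
```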
{"title":"Efficient Debris-flow Simulation for Steep Terrain Erosion","authors":"Aryamaan Jain, Bedrich Benes, Guillaume Cordonnier","doi":"10.1145/3658213","DOIUrl":"https://doi.org/10.1145/3658213","url":null,"abstract":"Erosion simulation is a common approach used for generating and authoring mountainous terrains. While water is considered the primary erosion factor, its simulation fails to capture steep slopes near the ridges. In these low-drainage areas, erosion is often approximated with slope-reducing erosion, which yields unrealistically uniform slopes. However, geomorphology observed that another process dominates the low-drainage areas: erosion by debris flow, which is a mixture of mud and rocks triggered by strong climatic events. We propose a new method to capture the interactions between debris flow and fluvial erosion thanks to a new mathematical formulation for debris flow erosion derived from geomorphology and a unified GPU algorithm for erosion and deposition. In particular, we observe that sediment and debris deposition tend to intersect river paths, which motivates the design of a new, approximate flow routing algorithm on the GPU to estimate the water path out of these newly formed depressions. We demonstrate that debris flow carves distinct patterns in the form of erosive scars on steep slopes and cones of deposited debris competing with fluvial erosion downstream.","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":null,"pages":null},"PeriodicalIF":7.8,"publicationDate":"2024-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141820738","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Progressive Dynamics for Cloth and Shell Animation
J. Zhang, Doug L. James, Danny M. Kaufman
We propose Progressive Dynamics, a coarse-to-fine, level-of-detail simulation method for the physics-based animation of complex, frictionally contacting thin-shell and cloth dynamics. Progressive Dynamics provides tightly matching consistency and progressive improvement across levels, with quality and realism at the finest resolutions comparable to high-fidelity, IPC-based shell simulations [Li et al. 2021]. Together, these features enable an efficient animation-design pipeline in which predictive coarse-resolution previews provide rapid design iterations for the final, to-be-generated, high-resolution animation. Previously, designing such scenes with comparable dynamics required prohibitively slow iterations via repeated direct simulation on high-resolution meshes. We evaluate and demonstrate Progressive Dynamics's features on a wide range of challenging stress tests, benchmarks, and animation-design tasks. Progressive Dynamics efficiently computes consistent previews at costs comparable to coarsest-level direct simulations; its matching progressive refinements across levels then generate rich, high-resolution animations with high-speed dynamics, impacts, and the complex detailing of the dynamic wrinkling, folding, and sliding of frictionally contacting thin shells and fabrics.
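The skeleton below illustrates only the generic coarse-to-fine, level-of-detail structure; the interface names are hypothetical, and Progressive Dynamics's actual contribution, keeping per-level trajectories consistent, is not captured by this naive warm-starting:

```python
# Generic coarse-to-fine stepping skeleton -- NOT the paper's algorithm.
# Level, step, prolong, and warm_start are hypothetical interface names.
from typing import Protocol

class Level(Protocol):
    def step(self, dt: float): ...                 # advance this level's state
    def prolong(self, coarse_state): ...           # upsample a coarser state
    def warm_start(self, state): ...               # seed this level's solver

def step_hierarchy(levels: list, dt: float):
    state = levels[0].step(dt)                     # cheap coarse preview
    for fine in levels[1:]:
        fine.warm_start(fine.prolong(state))       # initialize from coarser level
        state = fine.step(dt)                      # refine with full dynamics
    return state
```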
{"title":"Progressive Dynamics for Cloth and Shell Animation","authors":"J. Zhang, Doug L. James, Danny M. Kaufman","doi":"10.1145/3658214","DOIUrl":"https://doi.org/10.1145/3658214","url":null,"abstract":"We propose Progressive Dynamics, a coarse-to-fine, level-of-detail simulation method for the physics-based animation of complex frictionally contacting thin shell and cloth dynamics. Progressive Dynamics provides tight-matching consistency and progressive improvement across levels, with comparable quality and realism to high-fidelity, IPC-based shell simulations [Li et al. 2021] at finest resolutions. Together these features enable an efficient animation-design pipeline with predictive coarse-resolution previews providing rapid design iterations for a final, to-be-generated, high-resolution animation. In contrast, previously, to design such scenes with comparable dynamics would require prohibitively slow design iterations via repeated direct simulations on high-resolution meshes. We evaluate and demonstrate Progressive Dynamics's features over a wide range of challenging stress-tests, benchmarks, and animation design tasks. Here Progressive Dynamics efficiently computes consistent previews at costs comparable to coarsest-level direct simulations. Its matching progressive refinements across levels then generate rich, high-resolution animations with high-speed dynamics, impacts, and the complex detailing of the dynamic wrinkling, folding, and sliding of frictionally contacting thin shells and fabrics.","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":null,"pages":null},"PeriodicalIF":7.8,"publicationDate":"2024-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141821340","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
FlexScale: Modeling and Characterization of Flexible Scaled Sheets
Juan Sebastian Montes Maestre, Yinwei Du, R. Hinchet, Stelian Coros, Bernhard Thomaszewski
We present a computational approach for modeling the mechanical behavior of flexible scaled sheet materials: 3D-printed hard scales embedded in a soft substrate. Balancing strength and flexibility, these structured materials find applications in protective gear, soft robotics, and 3D-printed fashion. To unlock their full potential, however, we must unravel the complex relation between scale pattern and mechanical properties. To address this problem, we propose a contact-aware homogenization approach that distills native-level simulation data into a novel macromechanical model. This macro-model combines piecewise-quadratic uniaxial fits with polar interpolation using circular harmonics, allowing for efficient simulation of large-scale patterns. We apply our approach to explore the space of isohedral scale patterns, revealing a diverse range of anisotropic and nonlinear material behaviors. Through an extensive set of experiments, we show that our models reproduce various scale-level effects while offering good qualitative agreement with physical prototypes at the macro level.
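The polar-interpolation ingredient can be illustrated directly: fit a truncated Fourier series in the loading angle (circular harmonics) to sampled directional responses, then evaluate it at arbitrary angles. The harmonic order and function names below are our own:

```python
# Circular-harmonic (truncated Fourier series) fit to directional response
# samples, e.g., uniaxial stiffness as a function of loading angle.
import numpy as np

def fit_circular_harmonics(angles, values, order=4):
    """Least-squares fit of v(t) = a0 + sum_k [a_k cos(k t) + b_k sin(k t)].
    Physical symmetry (180-degree periodicity) would keep only even k."""
    cols = [np.ones_like(angles)]
    for k in range(1, order + 1):
        cols += [np.cos(k * angles), np.sin(k * angles)]
    A = np.stack(cols, axis=1)
    coeffs, *_ = np.linalg.lstsq(A, values, rcond=None)
    return coeffs

def eval_circular_harmonics(coeffs, theta, order=4):
    out = coeffs[0] * np.ones_like(theta)
    for k in range(1, order + 1):
        out += coeffs[2*k - 1] * np.cos(k * theta) + coeffs[2*k] * np.sin(k * theta)
    return out
```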
{"title":"FlexScale: Modeling and Characterization of Flexible Scaled Sheets","authors":"Juan Sebastian Montes Maestre, Yinwei Du, R. Hinchet, Stelian Coros, Bernhard Thomaszewski","doi":"10.1145/3658175","DOIUrl":"https://doi.org/10.1145/3658175","url":null,"abstract":"We present a computational approach for modeling the mechanical behavior of flexible scaled sheet materials---3D-printed hard scales embedded in a soft substrate. Balancing strength and flexibility, these structured materials find applications in protective gear, soft robotics, and 3D-printed fashion. To unlock their full potential, however, we must unravel the complex relation between scale pattern and mechanical properties. To address this problem, we propose a contact-aware homogenization approach that distills native-level simulation data into a novel macromechanical model. This macro-model combines piecewise-quadratic uniaxial fits with polar interpolation using circular harmonics, allowing for efficient simulation of large-scale patterns. We apply our approach to explore the space of isohedral scale patterns, revealing a diverse range of anisotropic and nonlinear material behaviors. Through an extensive set of experiments, we show that our models reproduce various scale-level effects while offering good qualitative agreement with physical prototypes on the macro-level.","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":null,"pages":null},"PeriodicalIF":7.8,"publicationDate":"2024-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141822630","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Computational Illusion Knitting
Amy Zhu, Yuxuan Mei, Benjamin T. Jones, Zach Tatlock, Adriana Schulz
Illusion-knit fabrics reveal distinct patterns or images depending on the viewing angle. Artists have manually achieved this effect by exploiting "microgeometry," i.e., small differences in stitch heights. However, past work in computational 3D knitting does not model or exploit designs based on stitch height variation. This paper establishes a foundation for exploring illusion knitting in the context of computational design and fabrication. We observe that the design space is highly constrained, elucidate these constraints, and derive strategies for developing effective, machine-knittable illusion patterns. We partially automate these strategies in a new interactive design tool that reduces difficult patterning tasks to familiar image editing tasks. Illusion patterns also uncover new fabrication challenges regarding mixed colorwork and texture; we describe new algorithms for mitigating fabrication failures and ensuring high-quality knit results.
{"title":"Computational Illusion Knitting","authors":"Amy Zhu, Yuxuan Mei, Benjamin T. Jones, Zach Tatlock, Adriana Schulz","doi":"10.1145/3658231","DOIUrl":"https://doi.org/10.1145/3658231","url":null,"abstract":"Illusion-knit fabrics reveal distinct patterns or images depending on the viewing angle. Artists have manually achieved this effect by exploiting \"microgeometry,\" i.e., small differences in stitch heights. However, past work in computational 3D knitting does not model or exploit designs based on stitch height variation. This paper establishes a foundation for exploring illusion knitting in the context of computational design and fabrication. We observe that the design space is highly constrained, elucidate these constraints, and derive strategies for developing effective, machine-knittable illusion patterns. We partially automate these strategies in a new interactive design tool that reduces difficult patterning tasks to familiar image editing tasks. Illusion patterns also uncover new fabrication challenges regarding mixed colorwork and texture; we describe new algorithms for mitigating fabrication failures and ensuring high-quality knit results.","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":null,"pages":null},"PeriodicalIF":7.8,"publicationDate":"2024-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141822870","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Going with the Flow
Yousuf Soliman, Marcel Padilla, Oliver Gross, Felix Knöppel, U. Pinkall, Peter Schröder
Given a sequence of poses of a body, we study the motion that results when the body is immersed in a (possibly) moving, incompressible medium. With the poses given, say, by an animator, the governing second-order ordinary differential equations are those of a rigid body with time-dependent inertia, acted upon by various forces. Some of these forces, like lift and drag, depend on the motion of the body through the surrounding medium. Additionally, the inertia must encode the effect of the medium through its added mass. We derive the corresponding dynamics equations, which generalize the standard rigid-body dynamics equations. All forces are based on local computations using only physical parameters such as mass density. Notably, we approximate the effect of the medium on the body through local computations, avoiding any global simulation of the medium. Consequently, the system of equations we must integrate in time is only six-dimensional (rotation and translation). Our proposed algorithm has linear complexity and captures intricate natural phenomena that depend on body-fluid interactions.
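A stripped-down version of such a six-dimensional integrator is sketched below: it advances linear and angular velocity under gravity and quadratic drag with made-up coefficients, and omits the paper's added-mass and lift terms entirely:

```python
# Minimal rigid-body-in-fluid velocity update (6-D state: linear + angular
# velocity). Drag coefficients are illustrative; added mass and lift omitted.
import numpy as np

def step(v, w, dt, mass=1.0, inertia=np.eye(3),
         c_drag=0.5, c_ang_drag=0.05, gravity=np.array([0.0, 0.0, -9.81])):
    """v: linear velocity (3,), w: angular velocity (3,)."""
    f_drag = -c_drag * np.linalg.norm(v) * v          # quadratic drag force
    t_drag = -c_ang_drag * np.linalg.norm(w) * w      # quadratic drag torque
    v = v + dt * (gravity + f_drag / mass)            # explicit Euler update
    w = w + dt * np.linalg.solve(inertia, t_drag)
    return v, w
```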
{"title":"Going with the Flow","authors":"Yousuf Soliman, Marcel Padilla, Oliver Gross, Felix Knöppel, U. Pinkall, Peter Schröder","doi":"10.1145/3658164","DOIUrl":"https://doi.org/10.1145/3658164","url":null,"abstract":"\u0000 Given a sequence of poses of a body we study the motion resulting when the body is immersed in a (possibly) moving, incompressible medium. With the poses given, say, by an animator, the governing second-order ordinary differential equations are those of a rigid body with time-dependent inertia acted upon by various forces. Some of these forces, like lift and drag, depend on the motion of the body in the surrounding medium. Additionally, the inertia must encode the effect of the medium through its\u0000 added mass.\u0000 We derive the corresponding dynamics equations which generalize the standard rigid body dynamics equations. All forces are based on local computations using only physical parameters such as mass density. Notably, we approximate the effect of the medium on the body through local computations avoiding any global simulation of the medium. Consequently, the system of equations we must integrate in time is only 6 dimensional (rotation and translation). Our proposed algorithm displays linear complexity and captures intricate natural phenomena that depend on body-fluid interactions.\u0000","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":null,"pages":null},"PeriodicalIF":7.8,"publicationDate":"2024-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141823503","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards Motion Metamers for Foveated Rendering
Taimoor Tariq, P. Didyk
Foveated rendering takes advantage of the reduced spatial sensitivity of peripheral vision to greatly reduce rendering cost without noticeable spatial quality degradation. Owing to these benefits, it has emerged as a key enabler for real-time, high-quality virtual and augmented reality. Interestingly, a large body of work argues that a key role of peripheral vision is motion detection, yet foveated rendering lowers image quality in exactly these regions, which may impair our ability to detect and quantify motion. The problem is critical for immersive simulations, where the ability to detect and quantify movement drives actions and decisions. In this work, we diverge from the contemporary approach to foveated graphics and demonstrate that the loss of high-frequency spatial detail in the periphery inhibits motion perception, leading to underestimation of motion cues such as velocity. Furthermore, inspired by a visual illusion, we design a perceptually motivated, real-time technique that synthesizes controlled spatio-temporal motion energy to offset this loss. Finally, we perform user experiments demonstrating our method's effectiveness in recovering motion cues without introducing objectionable quality degradation.
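The core observation is easy to check numerically: spatially blurring a drifting pattern, as peripheral foveation effectively does, reduces its spatio-temporal gradient energy, a crude proxy for motion energy. The stimulus and blur parameters below are arbitrary:

```python
# Blur a drifting 1D grating and compare temporal-gradient energy before and
# after -- a crude stand-in for the motion-energy loss discussed above.
import numpy as np
from scipy.ndimage import gaussian_filter1d

x = np.linspace(0, 64 * np.pi, 512)
frames = np.stack([np.sin(x - 0.5 * t) for t in range(32)])   # drifting grating
blurred = gaussian_filter1d(frames, sigma=6.0, axis=1)        # spatial blur only

def motion_energy(seq):
    return float(np.mean(np.diff(seq, axis=0) ** 2))          # temporal gradients

print(motion_energy(frames), ">", motion_energy(blurred))     # energy drops
```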
{"title":"Towards Motion Metamers for Foveated Rendering","authors":"Taimoor Tariq, P. Didyk","doi":"10.1145/3658141","DOIUrl":"https://doi.org/10.1145/3658141","url":null,"abstract":"Foveated rendering takes advantage of the reduced spatial sensitivity in peripheral vision to greatly reduce rendering cost without noticeable spatial quality degradation. Due to its benefits, it has emerged as a key enabler for real-time high-quality virtual and augmented realities. Interestingly though, a large body of work advocates that a key role of peripheral vision may be motion detection, yet foveated rendering lowers the image quality in these regions, which may impact our ability to detect and quantify motion. The problem is critical for immersive simulations where the ability to detect and quantify movement drives actions and decisions. In this work, we diverge from the contemporary approach towards the goal of foveated graphics, and demonstrate that a loss of high-frequency spatial details in the periphery inhibits motion perception, leading to underestimating motion cues such as velocity. Furthermore, inspired by an interesting visual illusion, we design a perceptually motivated real-time technique that synthesizes controlled spatio-temporal motion energy to offset the loss in motion perception. Finally, we perform user experiments demonstrating our method's effectiveness in recovering motion cues without introducing objectionable quality degradation.","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":null,"pages":null},"PeriodicalIF":7.8,"publicationDate":"2024-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141823585","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}