Atul Rohit Agarwal, Dhawal Sirikonda, Atharv Agashe, Ziang Ren, Dinithi Silva-Sassaman, Charles Carver, Alberto Quattrini Li, Xia Zhou, Adithya Pediredla
We present a high-speed underwater optical backscatter communication technique based on acousto-optic light steering. Our approach enables underwater assets to transmit data at rates potentially reaching hundreds of Mbps, vastly outperforming current state-of-the-art optical and underwater backscatter systems, which typically operate at only a few kbps. In our system, a base station illuminates the backscatter device with a pulsed laser and captures the retroreflected signal using an ultrafast photodetector. The backscatter device comprises a retroreflector and a 2 MHz ultrasound transducer. The transducer generates pressure waves that dynamically modulate the refractive index of the surrounding medium, steering the light either toward the photodetector (encoding bit 1) or away from it (encoding bit 0). Using a 3-bit redundancy scheme, our prototype achieves a communication rate of approximately 0.66 Mbps with an energy consumption of ≤ 1 μJ/bit, representing a 60× improvement over prior techniques. We validate its performance through extensive laboratory experiments in which remote underwater assets wirelessly transmit multimedia data to the base station under various environmental conditions.
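The abstract reports a 3-bit redundancy scheme on a 2 MHz transducer yielding roughly 0.66 Mbps, which is consistent with a plain 3× repetition code with majority-vote decoding (2 Mbps / 3 ≈ 0.66 Mbps). The paper's exact coding scheme is not given in the abstract; the sketch below shows that simple interpretation, with all names our own:

```python
def encode(bits, r=3):
    """Repetition code: send each bit r times over the acousto-optic link."""
    return [b for b in bits for _ in range(r)]

def decode(symbols, r=3):
    """Majority-vote each group of r received on/off (bit 1 / bit 0) samples."""
    return [int(sum(symbols[i:i + r]) > r // 2)
            for i in range(0, len(symbols), r)]
```

With r = 3, a single flipped sample per group is corrected, at the cost of one third of the raw channel rate.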
"Underwater Optical Backscatter Communication using Acousto-Optic Beam Steering." ACM Transactions on Graphics, 2025-12-04. DOI: 10.1145/3763289.
This paper introduces a method for simplifying textured surface triangle meshes in the wild while maintaining high visual quality. While previous methods achieve excellent results on manifold meshes by using the quadric error metric, they struggle to produce high-quality outputs for meshes in the wild, which typically contain non-manifold elements and multiple connected components. In this work, we propose a method for simplifying these "wild" textured triangle meshes. We formulate mesh simplification as a problem of decimating simplicial 2-complexes to handle multiple non-manifold mesh components collectively. Building on the success of quadric error simplification, we iteratively collapse 1-simplices (vertex pairs). Our approach employs a modified quadric error that converges to the original quadric error metric for watertight manifold meshes, while significantly improving the results on wild meshes. For textures, instead of following existing strategies to preserve UVs, we adopt a novel perspective that focuses on computing mesh correspondences throughout the decimation, independent of the UV layout. This combination yields a textured mesh simplification system that is capable of handling arbitrary triangle meshes, achieving high-quality results on wild inputs without sacrificing the excellent performance on clean inputs. Our method is guaranteed to avoid common problems in textured mesh simplification, including the prevalent problem of texture bleeding. We extensively evaluate our method on multiple datasets, showing improvements over prior techniques through qualitative, quantitative, and user study evaluations.
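The classic quadric error metric that this work builds on (and modifies for non-manifold input) measures a vertex's squared distance to a set of planes via a 4×4 quadric. The paper's modified quadric is not specified in the abstract; this is only the standard Garland-Heckbert form:

```python
import numpy as np

def plane_quadric(a, b, c, d):
    """Fundamental error quadric K = p p^T for the plane ax+by+cz+d = 0
    with unit normal (a, b, c). A vertex quadric is the sum of the K's
    of its incident faces' planes."""
    p = np.array([a, b, c, d], dtype=float)
    return np.outer(p, p)

def vertex_error(Q, v):
    """Quadric error v̄^T Q v̄ for homogeneous v̄ = (x, y, z, 1):
    the sum of squared distances to the planes accumulated in Q."""
    vh = np.append(np.asarray(v, dtype=float), 1.0)
    return float(vh @ Q @ vh)
```

An edge (1-simplex) collapse would place the merged vertex to minimize this error under the combined quadric of both endpoints.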
Hsueh-Ti Derek Liu, Xiaoting Zhang, Cem Yuksel. "Simplifying Textured Triangle Meshes in the Wild." ACM Transactions on Graphics, 2025-12-04. DOI: 10.1145/3763277.
We introduce a novel class of G2 continuous splines constructed using an innovative blending method, which guarantees precise interpolation of given control points. These splines are designed to achieve local curvature maxima specifically at these control points and possess compact local support, thereby eliminating the need for global optimization processes. The formulation ensures the splines are free from cusps and self-intersections and, notably, prevents adjacent segments from intersecting—a significant improvement over prior blending-based curve techniques. This framework utilizes quadratic Bézier splines in conjunction with quartic Bézier blending functions. A constructive algorithm is presented that generates these curvature-controlled curves without relying on global optimization. Through parametric adjustments of curvatures, the curve's geometry near control points can be tuned to create features ranging from smooth to sharp, thus broadening the design possibilities. Rigorous mathematical proofs and visual demonstrations validate all claimed properties of the framework.
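The framework's building block is the quadratic Bézier segment (blended with quartic Bézier functions in the paper). As a minimal illustration of the former only, here is de Casteljau-style evaluation of a quadratic segment; the blending construction itself is not reproduced here:

```python
def bezier2(p0, p1, p2, t):
    """Evaluate a quadratic Bézier segment with control points p0, p1, p2
    at parameter t in [0, 1], via the Bernstein form
    B(t) = (1-t)^2 p0 + 2(1-t)t p1 + t^2 p2."""
    u = 1.0 - t
    return tuple(u * u * a + 2.0 * u * t * b + t * t * c
                 for a, b, c in zip(p0, p1, p2))
```

The segment interpolates p0 and p2 and is tangent to the control polygon at both ends, which is what makes such segments convenient primitives for G² blending schemes.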
Bowen Jiang, Renjie Chen. "G² Interpolating Spline with Local Maximum Curvature." ACM Transactions on Graphics, 2025-12-04. DOI: 10.1145/3763316.
Felix Dellinger, Martin Kilian, Munkyun Lee, Christian Müller, Georg Nawratil, Tomohiro Tachi, Kiumars Sharifmoghaddam
We introduce a novel class of polyhedral tori (PQ-toroids) that snap between two stable configurations: a flat state and a deployed state, separated by an energy barrier. Because PQ-toroids can be created from any set of given planar bottom and side faces, the bistable blocks can be assembled into a thick freeform curved shell structure that follows a planar quadrilateral (PQ) net with coplanar adjacent offset directions. We develop a design pipeline for inversely computing PQ-toroid modules using conjugate net decompositions of a given surface. We analyze the snapping behavior and energy barriers through simulation and build physical prototypes to validate the feasibility of the proposed system. This work expands the geometric design space of multistable origami for lightweight modular structures and offers practical applications in architectural and deployable systems.
"Snapping Deployable Toroids for Modular Gridshells." ACM Transactions on Graphics, 2025-12-04. DOI: 10.1145/3763808.
The realistic simulation of sand, soil, powders, rubble piles, and large collections of rigid bodies is a common and important problem in the fields of computer graphics, computational physics, and engineering. Direct simulation of these individual bodies quickly becomes expensive, so we often approximate the entire group as a continuum material that can be more easily computed using tools for solving partial differential equations, like the material point method (MPM). In this paper, we present a method for automatically extracting continuum material properties from a collection of rigid bodies. We use numerical homogenization with periodic boundary conditions to simulate an effectively infinite number of rigid bodies in contact. We then record the effective stress-strain relationships from these simulations and convert them into elastic properties and yield criteria for the continuum simulations. Our experiments validate existing theoretical models like the Mohr-Coulomb yield surface by extracting material behaviors from a collection of spheres in contact. We further generalize these existing models to more exotic materials derived from diverse and non-convex shapes. We observe complicated jamming behaviors from non-convex grains, and we introduce a new material model for materials with extremely high levels of internal friction and cohesion. We simulate these new continuum models using MPM with an improved return mapping technique. The end result is a complete system for turning an input rigid body simulation into an efficient continuum simulation with the same effective mechanical properties.
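The Mohr-Coulomb yield surface that the experiments validate states that a granular material yields when shear stress exceeds cohesion plus normal stress times the tangent of the internal friction angle. A minimal check of that criterion, assuming compression-positive normal stress (sign conventions vary):

```python
import math

def mohr_coulomb_yield(shear, normal, cohesion, friction_angle_deg):
    """True if the stress state lies on or outside the Mohr-Coulomb
    surface |tau| >= c + sigma_n * tan(phi). Here normal > 0 is
    compressive and the friction angle is given in degrees."""
    phi = math.radians(friction_angle_deg)
    return abs(shear) >= cohesion + normal * math.tan(phi)
```

Homogenization as described in the paper would fit the cohesion and friction angle (and, for the exotic grain shapes, a richer surface) from the recorded stress-strain data rather than assume them.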
Yi-Lu Chen, Mickaël Ly, Chris Wojtan. "Numerical Homogenization of Sand from Grain-level Simulations." ACM Transactions on Graphics, 2025-12-04. DOI: 10.1145/3763344.
We present a novel multigrid solver framework that significantly advances the efficiency of physical simulation for unstructured meshes. While multigrid methods theoretically offer linear scaling, their practical implementation for deformable body simulations faces substantial challenges, particularly on GPUs. Our framework achieves up to 6.9× speedup over traditional methods through an innovative combination of matrix-free vertex block Jacobi smoothing with a Full Approximation Scheme (FAS), enabling both piecewise constant and linear Galerkin formulations without the computational burden of dense coarse matrices. Our approach demonstrates superior performance across varying mesh resolutions and material stiffness values, maintaining consistent convergence even under extreme deformations and challenging initial configurations. Comprehensive evaluations against state-of-the-art methods confirm our approach achieves lower simulation error with reduced computational cost, enabling simulation of tetrahedral meshes with over one million vertices at approximately one frame per second on modern GPUs.
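The paper's smoother is a matrix-free vertex *block* Jacobi; as an illustration of the underlying principle only, here is plain damped pointwise Jacobi relaxation for Ax = b, the kind of cheap smoother a multigrid cycle uses to damp high-frequency error (the ω = 2/3 damping is a conventional textbook choice, not from the paper):

```python
import numpy as np

def jacobi_smooth(A, b, x, iters=1, omega=2.0 / 3.0):
    """Damped Jacobi relaxation: x <- x + omega * D^{-1} (b - A x),
    where D is the diagonal of A. Cheap, parallel, and effective at
    damping high-frequency error, which is all multigrid needs of it."""
    D = np.diag(A)
    for _ in range(iters):
        x = x + omega * (b - A @ x) / D
    return x
```

In a full FAS cycle, this relaxation would alternate with restriction of the (nonlinear) residual and solution to coarser levels and correction on the way back up.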
Jia-Ming Lu, Tailing Yuan, Zhe-Han Mo, Shi-Min Hu. "Fast Galerkin Multigrid Method for Unstructured Meshes." ACM Transactions on Graphics, 2025-12-04. DOI: 10.1145/3763327.
Pu Li, Wenhao Zhang, Weize Quan, Biao Zhang, Peter Wonka, Dongming Yan
Boundary representation (B-rep) is the de facto standard for CAD model representation in modern industrial design. The intricate coupling between geometric and topological elements in B-rep structures has forced existing generative methods to rely on cascaded multi-stage networks, resulting in error accumulation and computational inefficiency. We present BrepGPT, a single-stage autoregressive framework for B-rep generation. Our key innovation lies in the Voronoi Half-Patch (VHP) representation, which decomposes B-reps into unified local units by assigning geometry to nearest half-edges and sampling their next pointers. Unlike hierarchical representations that require multiple distinct encodings for different structural levels, our VHP representation facilitates unifying geometric attributes and topological relations in a single, coherent format. We further leverage dual VQ-VAEs to encode both vertex topology and Voronoi Half-Patches into vertex-based tokens, achieving a more compact sequential encoding. A decoder-only Transformer is then trained to autoregressively predict these tokens, which are subsequently mapped to vertex-based features and decoded into complete B-rep models. Experiments demonstrate that BrepGPT achieves state-of-the-art performance in unconditional B-rep generation. The framework also exhibits versatility in various applications, including conditional generation from category labels, point clouds, text descriptions, and images, as well as B-rep autocompletion and interpolation.
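The VHP representation assigns geometry to half-edges and samples their "next" pointers, the topological links from which face loops are recovered in any half-edge structure. The sketch below shows only that generic next-pointer traversal, not the paper's actual encoding or tokenization:

```python
def face_loop(start, next_of):
    """Recover one face's half-edge cycle by following 'next' pointers
    (next_of maps each half-edge id to its successor around the face)."""
    loop, h = [start], next_of[start]
    while h != start:
        loop.append(h)
        h = next_of[h]
    return loop
```

A triangle stored as three half-edges 0 → 1 → 2 → 0 yields the loop [0, 1, 2]; BrepGPT's tokens would carry such connectivity together with per-half-edge geometry.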
"BrepGPT: Autoregressive B-rep Generation with Voronoi Half-Patch." ACM Transactions on Graphics, 2025-12-04. DOI: 10.1145/3763323.
The desire for cameras with smaller form factors has recently led to a push for exploring computational imaging systems with reduced optical complexity, such as a smaller number of lens elements. Unfortunately, such simplified optical systems usually suffer from severe aberrations, especially in off-axis regions, which can be difficult to correct purely in software. In this paper we introduce Fovea Stacking, a new type of imaging system that utilizes an emerging dynamic optical component called the deformable phase plate (DPP) for localized aberration correction anywhere on the image sensor. By optimizing DPP deformations through a differentiable optical model, off-axis aberrations are corrected locally, producing a foveated image with enhanced sharpness at the fixation point, analogous to the eye's fovea. Stacking multiple such foveated images, each with a different fixation point, yields a composite image free from aberrations. To efficiently cover the entire field of view, we propose joint optimization of DPP deformations under imaging budget constraints. Due to the DPP device's non-linear behavior, we introduce a neural network-based control model for improved agreement between simulation and hardware performance. We further demonstrate that for extended depth-of-field imaging, Fovea Stacking outperforms traditional focus stacking in image quality. By integrating object detection or eye-tracking, the system can dynamically adjust the lens to track the object of interest, enabling real-time foveated video suitable for downstream applications such as surveillance or foveated virtual reality displays.
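The abstract does not give the compositing rule for stacking the foveated images, but a generic per-pixel "sharpest wins" selection, as used in classic focus stacking, illustrates the stacking step. The function names and the selection rule are our assumptions:

```python
import numpy as np

def stack_sharpest(images, sharpness):
    """Composite a stack: at each pixel, keep the value from the image
    whose local sharpness score is highest. 'images' and 'sharpness'
    are equal-length lists of same-shape 2-D arrays."""
    idx = np.argmax(np.stack(sharpness), axis=0)   # winning image per pixel
    stacked = np.stack(images)
    return np.take_along_axis(stacked, idx[None], axis=0)[0]
```

In Fovea Stacking, each image in the stack would be sharp near its own fixation point, so the winner map roughly partitions the sensor into per-fixation regions.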
Shi Mao, Yogeshwar Nath Mishra, Wolfgang Heidrich. "Fovea Stacking: Imaging with Dynamic Localized Aberration Correction." ACM Transactions on Graphics, 2025-12-04. DOI: 10.1145/3763278.
Existing 3D Gaussian (3DGS) based methods tend to produce blurriness and artifacts on delicate textures (small objects and high-frequency textures) in aerial large-scale scenes. The reason is that the delicate textures usually occupy a relatively small number of pixels, and the gradients accumulated from the loss function are too weak to trigger splitting in 3DGS. To minimize the rendering error, the model will use a small number of large Gaussians to cover these details, resulting in blurriness and artifacts. To solve the above problem, we propose a novel hierarchical Gaussian representation: JumpingGS. JumpingGS assigns different levels to Gaussians to establish a hierarchical representation. Low-level Gaussians are responsible for the coarse appearance, while high-level Gaussians are responsible for the details. First, we design a splitting strategy that allows low-level Gaussians to skip intermediate levels and directly split into the appropriate high-level Gaussians for delicate textures. This level-jump splitting ensures that the weak gradients of delicate textures can always activate a higher level instead of being ignored by the intermediate levels. Second, JumpingGS reduces the gradient and opacity thresholds for density control according to the representation levels, which improves the sensitivity of high-level Gaussians to delicate textures. Third, we design a novel training strategy to detect training views in hard-to-observe regions, and train the model multiple times on these views to alleviate underfitting.
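The second point, level-dependent density control, can be sketched as a split test whose thresholds shrink with the representation level, so that weak gradients from fine textures still trigger splits at high levels. The base thresholds and the halving rule below are illustrative assumptions; the abstract only states that both thresholds are reduced with level:

```python
def should_split(grad, opacity, level, grad_t0=2e-4, opac_t0=5e-3):
    """Level-aware densification test: a Gaussian splits when its
    accumulated view-space gradient and its opacity both exceed
    thresholds that are halved per level (the halving factor and the
    base thresholds are hypothetical, not from the paper)."""
    scale = 2 ** level
    return grad > grad_t0 / scale and opacity > opac_t0 / scale
```

A fine-texture Gaussian whose gradient is too weak to split at level 0 can still qualify at a higher level, which is the sensitivity the abstract describes.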
Jiongming Qin, Kaixuan Zhou, Yu Jiang, Huizhi Zhu, Fei Luo, Chunxia Xiao. "JumpingGS: Level-jump 3D Gaussian Representation for Delicate Textures in Aerial Large-scale Scene Rendering." ACM Transactions on Graphics, 2025-12-04. DOI: 10.1145/3763347.
Neural implicit representation, the parameterization of a continuous distance function as a Multi-Layer Perceptron (MLP), has emerged as a promising lead in tackling surface reconstruction from unoriented point clouds. In the presence of noise, however, its lack of explicit neighborhood connectivity makes sharp edge identification particularly challenging, preventing the separation of smoothing and sharpening operations that is achievable with its discrete counterparts. In this work, we propose to tackle this challenge with an auxiliary field: the octahedral field. We observe that both smoothness and sharp features in the distance field can be equivalently described by smoothness in octahedral space. Therefore, by aligning and smoothing an octahedral field alongside the implicit geometry, our method behaves analogously to bilateral filtering, producing a smooth reconstruction while preserving sharp edges. Despite operating purely pointwise, our method outperforms various traditional and neural implicit fitting approaches across extensive experiments, and is competitive with methods that require normals and data priors. Code and data are available at: https://github.com/Ankbzpx/frame-field.
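The role of the octahedral field as a regularizer can be illustrated with a minimal sketch. It assumes the common band-4 spherical-harmonic parameterization of octahedral frames (a 9-dimensional coefficient vector per sample point) and a simple Dirichlet-style penalty over sampled neighbor pairs; the paper's exact representation and loss terms may differ.

```python
import numpy as np

def octahedral_smoothness_loss(coeffs, neighbor_pairs):
    """Dirichlet-style smoothness on per-point octahedral descriptors.

    coeffs: (N, 9) array of band-4 spherical-harmonic coefficients,
    one octahedral frame per sample point (an assumed, common
    parameterization). neighbor_pairs: iterable of (i, j) index pairs.

    Because both a smooth distance field and a sharp crease correspond
    to a smooth octahedral field, penalizing coefficient differences
    across neighboring samples regularizes the implicit surface
    without blurring its edges.
    """
    i, j = np.asarray(list(neighbor_pairs)).T
    diffs = coeffs[i] - coeffs[j]
    # mean squared coefficient difference over all neighbor pairs
    return float(np.mean(np.sum(diffs**2, axis=1)))

# Identical frames incur zero penalty; differing frames are penalized.
frames = np.zeros((3, 9))
frames[2] += 1.0
print(octahedral_smoothness_loss(frames, [(0, 1)]))  # identical pair
print(octahedral_smoothness_loss(frames, [(0, 2)]))  # differing pair
```

In a joint training setup, a term like this would be added to the distance-field fitting loss, so the network smooths the geometry only where the octahedral field is smooth, which is the bilateral-filtering analogy the abstract draws.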
{"title":"Neural Octahedral Field: Octahedral Prior for Simultaneous Smoothing and Sharp Edge Regularization","authors":"Ruichen Zheng, Tao Yu, Ruizhen Hu","doi":"10.1145/3763362","journal":"ACM Transactions on Graphics","publicationDate":"2025-12-04"}