Antoine Guédon, Diego Gomez, Nissim Maruani, Bingchen Gong, George Drettakis, Maks Ovsjanikov
While recent advances in Gaussian Splatting have enabled fast reconstruction of high-quality 3D scenes from images, extracting accurate surface meshes remains a challenge. Current approaches extract the surface through costly post-processing steps that either lose fine geometric details or require significant time and produce very dense meshes with millions of vertices. More fundamentally, the a posteriori conversion from a volumetric to a surface representation limits the ability of the final mesh to preserve all geometric structures captured during training. We present MILo, a novel Gaussian Splatting framework that bridges the gap between volumetric and surface representations by differentiably extracting a mesh from the 3D Gaussians. We design a fully differentiable procedure that constructs the mesh—including both vertex locations and connectivity—at every iteration directly from the parameters of the Gaussians, which are the only quantities optimized during training. Our method introduces three key technical contributions: (1) a bidirectional consistency framework ensuring both representations—Gaussians and the extracted mesh—capture the same underlying geometry during training; (2) an adaptive mesh extraction process performed at each training iteration, which uses Gaussians as differentiable pivots for Delaunay triangulation; (3) a novel method for computing signed distance values from the 3D Gaussians that enables precise surface extraction while avoiding geometric erosion. Our approach can reconstruct complete scenes, including backgrounds, with state-of-the-art quality while requiring an order of magnitude fewer mesh vertices than previous methods. Due to their light weight and empty interior, our meshes are well suited for downstream applications such as physics simulations and animation. The code for our approach and an online gallery are available at https://anttwo.github.io/milo/.
{"title":"MILo: Mesh-In-the-Loop Gaussian Splatting for Detailed and Efficient Surface Reconstruction","authors":"Antoine Guédon, Diego Gomez, Nissim Maruani, Bingchen Gong, George Drettakis, Maks Ovsjanikov","doi":"10.1145/3763339","DOIUrl":"https://doi.org/10.1145/3763339","url":null,"abstract":"While recent advances in Gaussian Splatting have enabled fast reconstruction of high-quality 3D scenes from images, extracting accurate surface meshes remains a challenge. Current approaches extract the surface through costly post-processing steps, resulting in the loss of fine geometric details or requiring significant time and leading to very dense meshes with millions of vertices. More fundamentally, the <jats:italic toggle=\"yes\">a posteriori</jats:italic> conversion from a volumetric to a surface representation limits the ability of the final mesh to preserve all geometric structures captured during training. We present MILo, a novel Gaussian Splatting framework that bridges the gap between volumetric and surface representations by differentiably extracting a mesh from the 3D Gaussians. We design a fully differentiable procedure that constructs the mesh—including both vertex locations and connectivity—at every iteration directly from the parameters of the Gaussians, <jats:italic toggle=\"yes\">which are the only quantities optimized during training.</jats:italic> Our method introduces three key technical contributions: (1) a bidirectional consistency framework ensuring both representations—Gaussians and the extracted mesh—capture the same underlying geometry during training; (2) an adaptive mesh extraction process performed at each training iteration, which uses Gaussians as differentiable pivots for Delaunay triangulation; (3) a novel method for computing signed distance values from the 3D Gaussians that enables precise surface extraction while avoiding geometric erosion. Our approach can reconstruct complete scenes, including backgrounds, with state-of-the-art quality while requiring an order of magnitude fewer mesh vertices than previous methods. Due to their light weight and empty interior, our meshes are well suited for downstream applications such as physics simulations and animation. The code for our approach and an online gallery are available at https://anttwo.github.io/milo/.","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":"55 1","pages":""},"PeriodicalIF":6.2,"publicationDate":"2025-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145673930","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Generating artistic and coherent 3D scene layouts is crucial in digital content creation. Traditional optimization-based methods are often constrained by cumbersome manual rules, while deep generative models face challenges in producing content with richness and diversity. Furthermore, approaches that utilize large language models frequently lack robustness and fail to accurately capture complex spatial relationships. To address these challenges, this paper presents a novel vision-guided 3D layout generation system. We first construct a high-quality asset library containing 2,037 scene assets and 147 3D scene layouts. Subsequently, we employ an image generation model to expand prompt representations into images, fine-tuning it to align with our asset library. We then develop a robust image parsing module to recover the 3D layout of scenes based on visual semantics and geometric information. Finally, we optimize the scene layout using scene graphs and overall visual semantics to ensure logical coherence and alignment with the images. Extensive user testing demonstrates that our algorithm significantly outperforms existing methods in terms of layout richness and quality. The code and dataset will be available at https://github.com/HiHiAllen/Imaginarium.
{"title":"Imaginarium: Vision-guided High-Quality 3D Scene Layout Generation","authors":"Xiaoming Zhu, Xu Huang, Qinghongbing Xie, Zhi Deng, Junsheng Yu, Yirui Guan, Zhongyuan Liu, Lin Zhu, Qijun Zhao, Ligang Liu, Long Zeng","doi":"10.1145/3763353","DOIUrl":"https://doi.org/10.1145/3763353","url":null,"abstract":"Generating artistic and coherent 3D scene layouts is crucial in digital content creation. Traditional optimization-based methods are often constrained by cumbersome manual rules, while deep generative models face challenges in producing content with richness and diversity. Furthermore, approaches that utilize large language models frequently lack robustness and fail to accurately capture complex spatial relationships. To address these challenges, this paper presents a novel vision-guided 3D layout generation system. We first construct a high-quality asset library containing 2,037 scene assets and 147 3D scene layouts. Subsequently, we employ an image generation model to expand prompt representations into images, fine-tuning it to align with our asset library. We then develop a robust image parsing module to recover the 3D layout of scenes based on visual semantics and geometric information. Finally, we optimize the scene layout using scene graphs and overall visual semantics to ensure logical coherence and alignment with the images. Extensive user testing demonstrates that our algorithm significantly outperforms existing methods in terms of layout richness and quality. The code and dataset will be available at https://github.com/HiHiAllen/Imaginarium.","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":"5 1","pages":""},"PeriodicalIF":6.2,"publicationDate":"2025-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145674172","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Aditya Ganeshan, Kurt Fleischer, Wenzel Jakob, Ariel Shamir, Daniel Ritchie, Takeo Igarashi, Maria Larsson
Traditional integral wood joints, despite their strength, durability, and elegance, remain rare in modern workflows due to the cost and difficulty of manual fabrication. CNC milling offers a scalable alternative, but directly milling traditional joints often fails to produce functional results because milling induces geometric deviations—such as rounded inner corners—that alter the target geometries of the parts. Since joints rely on tightly fitting surfaces, such deviations introduce gaps or overlaps that undermine fit or block assembly. We propose to overcome this problem by (1) designing a language that represents millable geometry, and (2) co-optimizing part geometries to restore coupling. We introduce Millable Extrusion Geometry (MXG), a language for representing geometry as the outcome of milling operations performed with flat-end drill bits. MXG represents each operation as a subtractive extrusion volume defined by a tool direction and drill radius. This parameterization enables the modeling of artifact-free geometry under an idealized zero-radius drill bit, matching traditional joint designs. Increasing the radius then reveals milling-induced deviations, which compromise the integrity of the joint. To restore coupling, we formalize tight coupling in terms of both surface proximity and proximity constraints on the mill-bit paths associated with mating surfaces. We then derive two tractable, differentiable losses that enable efficient optimization of joint geometry. We evaluate our method on 30 traditional joint designs, demonstrating that it produces CNC-compatible, tightly fitting joints that approximate the original geometry. By reinterpreting traditional joints for CNC workflows, we continue the evolution of this heritage craft and help ensure its relevance in future making practices.
{"title":"MiGumi: Making Tightly Coupled Integral Joints Millable","authors":"Aditya Ganeshan, Kurt Fleischer, Wenzel Jakob, Ariel Shamir, Daniel Ritchie, Takeo Igarashi, Maria Larsson","doi":"10.1145/3763304","DOIUrl":"https://doi.org/10.1145/3763304","url":null,"abstract":"Traditional integral wood joints, despite their strength, durability, and elegance, remain rare in modern workflows due to the cost and difficulty of manual fabrication. CNC milling offers a scalable alternative, but directly milling traditional joints often fails to produce functional results because milling induces geometric deviations—such as rounded inner corners—that alter the target geometries of the parts. Since joints rely on tightly fitting surfaces, such deviations introduce gaps or overlaps that undermine fit or block assembly. We propose to overcome this problem by (1) designing a language that represent millable geometry, and (2) co-optimizing part geometries to restore coupling. We introduce Millable Extrusion Geometry (MXG), a language for representing geometry as the outcome of milling operations performed with flat-end drill bits. MXG represents each operation as a subtractive extrusion volume defined by a tool direction and drill radius. This parameterization enables the modeling of artifact-free geometry under an idealized zero-radius drill bit, matching traditional joint designs. Increasing the radius then reveals milling-induced deviations, which compromise the integrity of the joint. To restore coupling, we formalize tight coupling in terms of both surface proximity and proximity constraints on the mill-bit paths associated with mating surfaces. We then derive two tractable, differentiable losses that enable efficient optimization of joint geometry. We evaluate our method on 30 traditional joint designs, demonstrating that it produces CNC-compatible, tightly fitting joints that approximates the original geometry. By reinterpreting traditional joints for CNC workflows, we continue the evolution of this heritage craft and help ensure its relevance in future making practices.","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":"1 1","pages":""},"PeriodicalIF":6.2,"publicationDate":"2025-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145673763","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We introduce a general, scalable computational framework for multi-axis 3D printing based on implicit neural fields (INFs) that unifies all stages of tool-path generation and global collision-free motion planning. In our pipeline, input models are represented as signed distance fields, with fabrication objectives—such as support-free printing, surface finish quality, and extrusion control—directly encoded in the optimization of an implicit guidance field. This unified approach enables toolpath optimization across both surface and interior domains, allowing shell and infill paths to be generated via implicit field interpolation. The printing sequence and multi-axis motion are then jointly optimized over a continuous quaternion field. Our continuous formulation constructs the evolving printing object as a time-varying SDF, supporting differentiable global collision handling throughout INF-based motion planning. Compared to explicit-representation-based methods, INF-3DP achieves up to two orders of magnitude speedup and significantly reduces waypoint-to-surface error. We validate our framework on diverse, complex models and demonstrate its efficiency with physical fabrication experiments using a robot-assisted multi-axis system.
{"title":"INF-3DP: Implicit Neural Fields for Collision-Free Multi-Axis 3D Printing","authors":"Jiasheng Qu, Zhuo Huang, Dezhao Guo, Hailin Sun, Aoran Lyu, Chengkai Dai, Yeung Yam, Guoxin Fang","doi":"10.1145/3763354","DOIUrl":"https://doi.org/10.1145/3763354","url":null,"abstract":"We introduce a general, scalable computational framework for multi-axis 3D printing based on implicit neural fields (INFs) that unifies all stages of tool-path generation and global collision-free motion planning. In our pipeline, input models are represented as signed distance fields, with fabrication objectives—such as support-free printing, surface finish quality, and extrusion control—directly encoded in the optimization of an implicit guidance field. This unified approach enables toolpath optimization across both surface and interior domains, allowing shell and infill paths to be generated via implicit field interpolation. The printing sequence and multi-axis motion are then jointly optimized over a continuous quaternion field. Our continuous formulation constructs the evolving printing object as a time-varying SDF, supporting differentiable global collision handling throughout INF-based motion planning. Compared to explicit-representation-based methods, INF-3DP achieves up to two orders of magnitude speedup and significantly reduces waypoint-to-surface error. We validate our framework on diverse, complex models and demonstrate its efficiency with physical fabrication experiments using a robot-assisted multi-axis system.","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":"20 1","pages":""},"PeriodicalIF":6.2,"publicationDate":"2025-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145673766","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Haoyang Zhou, Logan Numerow, Stelian Coros, Bernhard Thomaszewski
Cellular patterns, from planar ornaments to architectural surfaces and mechanical metamaterials, blend aesthetics with functionality. Homogeneous patterns like isohedral tilings offer simplicity and symmetry but lack flexibility, particularly for heterogeneous designs. They cannot smoothly interpolate between tilings or adapt to double-curved surfaces without distortion. Voronoi diagrams provide a more adaptable patterning solution. They can be generalized to star-shaped metrics, enabling diverse cell shapes and continuous grading by interpolating metric parameters. Martínez et al. [2019] explored this idea in 2D using a rasterization-based algorithm to create compelling patterns. However, this discrete approach precludes gradient-based optimization, limiting control over pattern quality. We introduce a novel, closed-form, fully differentiable formulation for Voronoi diagrams with piecewise linear star-shaped metrics, enabling optimization of site positions and metric parameters to meet aesthetic and functional goals. It naturally extends to arbitrary dimensions, including curved 3D surfaces. For improved on-surface patterning, we propose a per-sector parameterization of star-shaped metrics, ensuring uniform cell shapes in non-regular neighborhoods. We demonstrate our approach by generating diverse patterns, from homogeneous to continuously graded designs, with applications in decorative surfaces and metamaterials.
{"title":"Closed-Form Construction of Voronoi Diagrams with Star-Shaped Metrics","authors":"Haoyang Zhou, Logan Numerow, Stelian Coros, Bernhard Thomaszewski","doi":"10.1145/3763296","DOIUrl":"https://doi.org/10.1145/3763296","url":null,"abstract":"Cellular patterns, from planar ornaments to architectural surfaces and mechanical metamaterials, blend aesthetics with functionality. Homogeneous patterns like isohedral tilings offer simplicity and symmetry but lack flexibility, particularly for heterogeneous designs. They cannot smoothly interpolate between tilings or adapt to double-curved surfaces without distortion. Voronoi diagrams provide a more adaptable patterning solution. They can be generalized to star-shaped metrics, enabling diverse cell shapes and continuous grading by interpolating metric parameters. Martínez et al. [2019] explored this idea in 2D using a rasterization-based algorithm to create compelling patterns. However, this discrete approach precludes gradient-based optimization, limiting control over pattern quality. We introduce a novel, closed-form, fully differentiable formulation for Voronoi diagrams with piecewise linear star-shaped metrics, enabling optimization of site positions and metric parameters to meet aesthetic and functional goals. It naturally extends to arbitrary dimensions, including curved 3D surfaces. For improved on-surface patterning, we propose a per-sector parameterization of star-shaped metrics, ensuring uniform cell shapes in non-regular neighborhoods. We demonstrate our approach by generating diverse patterns, from homogeneous to continuously graded designs, with applications in decorative surfaces and metamaterials.","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":"33 1","pages":""},"PeriodicalIF":6.2,"publicationDate":"2025-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145673715","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Atul Rohit Agarwal, Dhawal Sirikonda, Atharv Agashe, Ziang Ren, Dinithi Silva-Sassaman, Charles Carver, Alberto Quattrini Li, Xia Zhou, Adithya Pediredla
We present a high-speed underwater optical backscatter communication technique based on acousto-optic light steering. Our approach enables underwater assets to transmit data at rates potentially reaching hundreds of Mbps, vastly outperforming current state-of-the-art optical and underwater backscatter systems, which typically operate at only a few kbps. In our system, a base station illuminates the backscatter device with a pulsed laser and captures the retroreflected signal using an ultrafast photodetector. The backscatter device comprises a retroreflector and a 2 MHz ultrasound transducer. The transducer generates pressure waves that dynamically modulate the refractive index of the surrounding medium, steering the light either toward the photodetector (encoding bit 1) or away from it (encoding bit 0). Using a 3-bit redundancy scheme, our prototype achieves a communication rate of approximately 0.66 Mbps with an energy consumption of ≤ 1 μJ/bit, representing a 60× improvement over prior techniques. We validate its performance through extensive laboratory experiments in which remote underwater assets wirelessly transmit multimedia data to the base station under various environmental conditions.
{"title":"Underwater Optical Backscatter Communication using Acousto-Optic Beam Steering","authors":"Atul Rohit Agarwal, Dhawal Sirikonda, Atharv Agashe, Ziang Ren, Dinithi Silva-Sassaman, Charles Carver, Alberto Quattrini Li, Xia Zhou, Adithya Pediredla","doi":"10.1145/3763289","DOIUrl":"https://doi.org/10.1145/3763289","url":null,"abstract":"We present a high-speed underwater optical backscatter communication technique based on acousto-optic light steering. Our approach enables underwater assets to transmit data at rates potentially reaching hundreds of Mbps, vastly outperforming current state-of-the-art optical and underwater backscatter systems, which typically operate at only a few kbps. In our system, a base station illuminates the backscatter device with a pulsed laser and captures the retroreflected signal using an ultrafast photodetector. The backscatter device comprises a retroreflector and a 2 MHz ultrasound transducer. The transducer generates pressure waves that dynamically modulate the refractive index of the surrounding medium, steering the light either toward the photodetector (encoding <jats:italic toggle=\"yes\">bit</jats:italic> 1) or away from it (encoding <jats:italic toggle=\"yes\">bit</jats:italic> 0). Using a 3-bit redundancy scheme, our prototype achieves a communication rate of approximately 0.66 Mbps with an energy consumption of ≤ 1 μJ/bit, representing a 60× improvement over prior techniques. We validate its performance through extensive laboratory experiments in which remote underwater assets wirelessly transmit multimedia data to the base station under various environmental conditions.","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":"2 1","pages":""},"PeriodicalIF":6.2,"publicationDate":"2025-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145673716","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper introduces a method for simplifying textured surface triangle meshes in the wild while maintaining high visual quality. While previous methods achieve excellent results on manifold meshes by using the quadric error metric, they struggle to produce high-quality outputs for meshes in the wild, which typically contain non-manifold elements and multiple connected components. In this work, we propose a method for simplifying these "wild" textured triangle meshes. We formulate mesh simplification as a problem of decimating simplicial 2-complexes to handle multiple non-manifold mesh components collectively. Building on the success of quadric error simplification, we iteratively collapse 1-simplices (vertex pairs). Our approach employs a modified quadric error that converges to the original quadric error metric for watertight manifold meshes, while significantly improving the results on wild meshes. For textures, instead of following existing strategies to preserve UVs, we adopt a novel perspective that focuses on computing mesh correspondences throughout the decimation, independent of the UV layout. This combination yields a textured mesh simplification system that is capable of handling arbitrary triangle meshes, achieving high-quality results on wild inputs without sacrificing the excellent performance on clean inputs. Our method is guaranteed to avoid common problems in textured mesh simplification, including the prevalent problem of texture bleeding. We extensively evaluate our method on multiple datasets, showing improvements over prior techniques through qualitative, quantitative, and user study evaluations.
{"title":"Simplifying Textured Triangle Meshes in the Wild","authors":"Hsueh-Ti Derek Liu, Xiaoting Zhang, Cem Yuksel","doi":"10.1145/3763277","DOIUrl":"https://doi.org/10.1145/3763277","url":null,"abstract":"This paper introduces a method for simplifying textured surface triangle meshes in the wild while maintaining high visual quality. While previous methods achieve excellent results on <jats:italic toggle=\"yes\">manifold</jats:italic> meshes by using the quadric error metric, they struggle to produce high-quality outputs for meshes in the wild, which typically contain <jats:italic toggle=\"yes\">non-manifold</jats:italic> elements and multiple connected components. In this work, we propose a method for simplifying these \"wild\" textured triangle meshes. We formulate mesh simplification as a problem of decimating <jats:italic toggle=\"yes\">simplicial 2-complexes</jats:italic> to handle multiple non-manifold mesh components collectively. Building on the success of quadric error simplification, we iteratively collapse 1-simplices (vertex pairs). Our approach employs a modified quadric error that converges to the original quadric error metric for watertight manifold meshes, while significantly improving the results on wild meshes. For textures, instead of following existing strategies to preserve UVs, we adopt a novel perspective which focuses on computing mesh correspondences throughout the decimation, independent of the UV layout. This combination yields a textured mesh simplification system that is capable of handling arbitrary triangle meshes, achieving to high-quality results on wild inputs without sacrificing the excellent performance on clean inputs. Our method guarantees to avoid common problems in textured mesh simplification, including the prevalent problem of <jats:italic toggle=\"yes\">texture bleeding.</jats:italic> We extensively evaluate our method on multiple datasets, showing improvements over prior techniques through qualitative, quantitative, and user study evaluations.","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":"26 1","pages":""},"PeriodicalIF":6.2,"publicationDate":"2025-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145673857","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We introduce a novel class of G² continuous splines constructed using an innovative blending method, which guarantees precise interpolation of given control points. These splines are designed to achieve local curvature maxima specifically at these control points and possess compact local support, thereby eliminating the need for global optimization processes. The formulation ensures the splines are free from cusps and self-intersections and, notably, prevents adjacent segments from intersecting—a significant improvement over prior blending-based curve techniques. This framework utilizes quadratic Bézier splines in conjunction with quartic Bézier blending functions. A constructive algorithm is presented that generates these curvature-controlled curves without relying on global optimization. Through parametric adjustments of curvatures, the curve's geometry near control points can be tuned to create features ranging from smooth to sharp, thus broadening the design possibilities. Rigorous mathematical proofs and visual demonstrations validate all claimed properties of the framework.
{"title":"G 2 Interpolating Spline with Local Maximum Curvature","authors":"Bowen Jiang, Renjie Chen","doi":"10.1145/3763316","DOIUrl":"https://doi.org/10.1145/3763316","url":null,"abstract":"We introduce a novel class of <jats:italic toggle=\"yes\">G</jats:italic> <jats:sup>2</jats:sup> continuous splines constructed using an innovative blending method, which guarantees precise interpolation of given control points. These splines are designed to achieve local curvature maxima specifically at these control points and possess compact local support, thereby eliminating the need for global optimization processes. The formulation ensures the splines are free from cusps and self-intersections and, notably, prevents adjacent segments from intersecting—a significant improvement over prior blending-based curve techniques. This framework utilizes quadratic Bézier splines in conjunction with quartic Bézier blending functions. A constructive algorithm is presented that generates these curvature-controlled curves without relying on global optimization. Through parametric adjustments of curvatures, the curve's geometry near control points can be tuned to create features ranging from smooth to sharp, thus broadening the design possibilities. Rigorous mathematical proofs and visual demonstrations validate all claimed properties of the framework.","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":"1 1","pages":""},"PeriodicalIF":6.2,"publicationDate":"2025-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145673932","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Felix Dellinger, Martin Kilian, Munkyun Lee, Christian Müller, Georg Nawratil, Tomohiro Tachi, Kiumars Sharifmoghaddam
We introduce a novel class of polyhedral tori (PQ-toroids) that snap between two stable configurations: a flat state and a deployed one, separated by an energy barrier. Because PQ-toroids can be created from any given set of planar bottom and side faces, the bistable blocks can be assembled into a thick freeform curved shell structure that follows a planar quadrilateral (PQ) net with coplanar adjacent offset directions. We develop a design pipeline for inversely computing PQ-toroid modules using conjugate net decompositions of a given surface. We analyze the snapping behavior and energy barriers through simulation and build physical prototypes to validate the feasibility of the proposed system. This work expands the geometric design space of multistable origami for lightweight modular structures and offers practical applications in architectural and deployable systems.
{"title":"Snapping Deployable Toroids for Modular Gridshells","authors":"Felix Dellinger, Martin Kilian, Munkyun Lee, Christian Müller, Georg Nawratil, Tomohiro Tachi, Kiumars Sharifmoghaddam","doi":"10.1145/3763808","DOIUrl":"https://doi.org/10.1145/3763808","url":null,"abstract":"We introduce a novel class of polyhedral tori (PQ-toroids) that snap between two stable configurations - a flat state and a deployed one separated by an energy barrier. Being able to create PQ-toroids from any set of given planar bottom and side faces opens the possibility to assemble the bistable blocks into a thick freeform curved shell structure to follow a planar quadrilateral (PQ) net with coplanar adjacent offset directions. A design pipeline is developed and presented for inversely computing PQ-toroid modules using conjugate net decompositions of a given surface. We analyze the snapping behavior and energy barriers through simulation and build physical prototypes to validate the feasibility of the proposed system. This work expands the geometric design space of multistable origami for lightweight modular structures and offers practical applications in architectural and deployable systems.","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":"155 1","pages":""},"PeriodicalIF":6.2,"publicationDate":"2025-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145673723","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The realistic simulation of sand, soil, powders, rubble piles, and large collections of rigid bodies is a common and important problem in the fields of computer graphics, computational physics, and engineering. Direct simulation of these individual bodies quickly becomes expensive, so we often approximate the entire group as a continuum material that can be more easily computed using tools for solving partial differential equations, like the material point method (MPM). In this paper, we present a method for automatically extracting continuum material properties from a collection of rigid bodies. We use numerical homogenization with periodic boundary conditions to simulate an effectively infinite number of rigid bodies in contact. We then record the effective stress-strain relationships from these simulations and convert them into elastic properties and yield criteria for the continuum simulations. Our experiments validate existing theoretical models like the Mohr-Coulomb yield surface by extracting material behaviors from a collection of spheres in contact. We further generalize these existing models to more exotic materials derived from diverse and non-convex shapes. We observe complicated jamming behaviors from non-convex grains, and we introduce a new material model for materials with extremely high levels of internal friction and cohesion. We simulate these new continuum models using MPM with an improved return mapping technique. The end result is a complete system for turning an input rigid body simulation into an efficient continuum simulation with the same effective mechanical properties.
{"title":"Numerical Homogenization of Sand from Grain-level Simulations","authors":"Yi-Lu Chen, Mickaël Ly, Chris Wojtan","doi":"10.1145/3763344","DOIUrl":"https://doi.org/10.1145/3763344","url":null,"abstract":"The realistic simulation of sand, soil, powders, rubble piles, and large collections of rigid bodies is a common and important problem in the fields of computer graphics, computational physics, and engineering. Direct simulation of these individual bodies quickly becomes expensive, so we often approximate the entire group as a continuum material that can be more easily computed using tools for solving partial differential equations, like the material point method (MPM). In this paper, we present a method for automatically extracting continuum material properties from a collection of rigid bodies. We use numerical homogenization with periodic boundary conditions to simulate an effectively infinite number of rigid bodies in contact. We then record the effective stress-strain relationships from these simulations and convert them into elastic properties and yield criteria for the continuum simulations. Our experiments validate existing theoretical models like the Mohr-Coulomb yield surface by extracting material behaviors from a collection of spheres in contact. We further generalize these existing models to more exotic materials derived from diverse and non-convex shapes. We observe complicated jamming behaviors from non-convex grains, and we introduce a new material model for materials with extremely high levels of internal friction and cohesion. We simulate these new continuum models using MPM with an improved return mapping technique. The end result is a complete system for turning an input rigid body simulation into an efficient continuum simulation with the same effective mechanical properties.","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":"31 1","pages":""},"PeriodicalIF":6.2,"publicationDate":"2025-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145673769","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}