Herds From Video: Learning a Microscopic Herd Model From Macroscopic Motion Data
Xianjin Gong, James Gain, Damien Rohmer, Sixtine Lyonnet, Julien Pettré, Marie-Paule Cani
We present a method for animating herds that automatically tunes a microscopic herd model based on a short video clip of real animals. Our method handles videos with dense herds, where individual animal motion cannot be separated out. Our contribution is a novel framework for extracting macroscopic herd behaviour from such video clips, and then deriving the microscopic agent parameters that best match this behaviour.
To support this learning process, we extend standard agent models to provide a separation between leaders and followers, better match the occlusion and field-of-view limitations of real animals, support differentiable parameter optimization, and improve authoring control. We validate the method by showing that, once optimized, the social force and perception parameters of the resulting herd model are accurate enough to predict subsequent frames in the video, even for macroscopic properties not directly incorporated in the optimization process. Furthermore, the extracted herding characteristics can be applied to any terrain via a palette and region-painting approach that generalizes to different herd sizes and leader trajectories. This enables the authoring of herd animations in new environments while preserving learned behaviour.
Computer Graphics Forum, 44(6), 2025. DOI: 10.1111/cgf.70225
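The macroscopic-to-microscopic fitting targets an agent model of roughly this shape. Below is a minimal NumPy sketch of one social-force step with a leader/follower split and a field-of-view mask; the parameter names (w_sep, w_align, w_coh, fov_cos, radius) are illustrative assumptions rather than the paper's, and the paper's actual model is differentiable end-to-end, which this explicit loop does not attempt.

```python
# A minimal sketch of a social-force herd step with a leader/follower split
# and a field-of-view mask; parameter names are invented for illustration.
import numpy as np

def herd_step(pos, vel, is_leader, leader_vel, params, dt=0.1):
    """One explicit Euler update: leaders follow authored velocities,
    followers respond to separation/alignment/cohesion forces from
    neighbours inside their limited field of view."""
    w_sep, w_align, w_coh, fov_cos, radius = params
    new_vel = vel.copy()
    for i in range(len(pos)):
        if is_leader[i]:
            new_vel[i] = leader_vel[i]          # authored leader trajectory
            continue
        offsets = pos - pos[i]
        dist = np.linalg.norm(offsets, axis=1)
        heading = vel[i] / (np.linalg.norm(vel[i]) + 1e-9)
        # neighbour is visible if close enough and in front (cos > fov_cos)
        visible = (dist > 0) & (dist < radius) & (offsets @ heading > fov_cos * dist)
        if not visible.any():
            continue
        sep = -np.sum(offsets[visible] / (dist[visible, None] ** 2 + 1e-9), axis=0)
        align = vel[visible].mean(axis=0) - vel[i]
        coh = pos[visible].mean(axis=0) - pos[i]
        new_vel[i] = vel[i] + dt * (w_sep * sep + w_align * align + w_coh * coh)
    return pos + dt * new_vel, new_vel
```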
FRIDU: Functional Map Refinement with Guided Image Diffusion
Avigail Cohen Rimon, Mirela Ben-Chen, Or Litany
We propose a novel approach for refining a given correspondence map between two shapes. A correspondence map represented as a functional map, namely a change of basis matrix, can be additionally treated as a 2D image. With this perspective, we train an image diffusion model directly in the space of functional maps, enabling it to generate accurate maps conditioned on an inaccurate initial map. The training is done purely in the functional space, and thus is highly efficient. At inference time, we use the pointwise map corresponding to the current functional map as guidance during the diffusion process. The guidance can additionally encourage different functional map objectives, such as orthogonality and commutativity with the Laplace-Beltrami operator. We show that our approach is competitive with state-of-the-art methods of map refinement and that guided diffusion models provide a promising pathway to functional map processing.
Computer Graphics Forum, 44(5), 2025. DOI: 10.1111/cgf.70203. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1111/cgf.70203
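For context, the pointwise map used as guidance can be recovered from a functional map by the standard nearest-neighbour rule. A minimal sketch under the usual convention C = Φ_B⁺ Π Φ_A, where Φ_A and Φ_B are Laplace-Beltrami eigenbases and Π the vertex correspondence matrix; this is textbook functional-map machinery, not the paper's training code.

```python
# Recover a pointwise map T: B -> A from a functional map C, assuming the
# convention C = pinv(Phi_B) @ Pi @ Phi_A, so that Pi @ Phi_A ~ Phi_B @ C.
import numpy as np
from scipy.spatial import cKDTree

def pointwise_from_functional(C, Phi_A, Phi_B):
    """Phi_A: (nA, kA) and Phi_B: (nB, kB) eigenbases; C: (kB, kA).
    Returns idx with idx[j] = vertex of A matched to vertex j of B."""
    tree = cKDTree(Phi_A)            # spectral embeddings of A's vertices
    _, idx = tree.query(Phi_B @ C)   # transported embeddings of B's vertices
    return idx
```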
Atomizer: Beyond Non-Planar Slicing for Fused Filament Fabrication
X. Chermain, G. Cocco, C. Zanni, E. Garner, P. A. Hugron, S. Lefebvre
Fused filament fabrication (FFF) enables users to quickly design and fabricate parts with unprecedented geometric complexity, fine-tuning both the structural and aesthetic properties of each object. Nevertheless, the full potential of this technology has yet to be realized, as current slicing methods fail to fully exploit the deposition freedom offered by modern 3D printers. In this work, we introduce a novel approach to toolpath generation that moves beyond the traditional layer-based concept. We use frames, referred to as atoms, as solid elements instead of slices. We optimize the distribution of atoms within the part volume to ensure even spacing and smooth orientation while accurately capturing the part's geometry. Although these atoms collectively represent the complete object, they do not inherently define a fabrication plan. To address this, we compute an extrusion toolpath as an ordered sequence of atoms that, when followed, provides a collision-free fabrication strategy. This general approach is robust, requires minimal user intervention compared to existing techniques, and integrates many of the best features into a unified framework: precise deposition conforming to non-planar surfaces, effective filling of narrow features – down to a single path – and the capability to locally print vertical structures before transitioning elsewhere. Additionally, it enables entirely new capabilities, such as anisotropic appearance fabrication on curved surfaces.
Computer Graphics Forum, 44(5), 2025. DOI: 10.1111/cgf.70189
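To give a feel for the scheduling problem, here is a toy greedy ordering of atom centres into a deposition sequence with a crude support rule (an atom becomes printable once every atom directly beneath it is printed). Both support_eps and the rule itself are invented stand-ins; the paper's ordering additionally guarantees collision-free tool motion, which this sketch ignores.

```python
# Toy greedy toolpath ordering over atom centres (an n x 3 array).
import numpy as np

def order_atoms(centers, support_eps=1.0):
    n = len(centers)
    printed = np.zeros(n, dtype=bool)
    order = []
    cur = int(np.argmin(centers[:, 2]))          # start at the lowest atom
    for _ in range(n):
        printed[cur] = True
        order.append(cur)
        ready = []
        for j in np.flatnonzero(~printed):
            # atoms below j within support_eps in the xy-plane must be printed
            below = (np.linalg.norm(centers[:, :2] - centers[j, :2], axis=1)
                     < support_eps) & (centers[:, 2] < centers[j, 2])
            if printed[below].all():             # vacuously true if none below
                ready.append(j)
        if not ready:
            break                                # no printable atom remains
        cur = min(ready, key=lambda j: np.linalg.norm(centers[j] - centers[cur]))
    return order
```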
Controlling Quadric Error Simplification with Line Quadrics
Hsueh-Ti Derek Liu, Mehdi Rahimzadeh, Victor Zordan
This work presents a method to control the output of mesh simplification algorithms based on iterative edge collapses. Traditional mesh simplification focuses on preserving visual appearance. While appearance remains an important criterion, other geometric properties also play critical roles in different applications, such as triangle quality for numerical computation. This motivates our work to stay under the umbrella of the popular quadric error mesh simplification while proposing different ways to steer the simplified mesh toward other geometric properties. The key ingredient of our work is another quadric error, called line quadrics, which can be seamlessly added to the vanilla quadric error metric. We show, theoretically and empirically, that adding our line quadrics improves the numerics and encourages the simplified mesh to have uniformly distributed vertices. Spreading the line quadrics adaptively across regions also leads to soft preservation of feature vertices and edges. Our method is simple to implement, requiring only a few lines of code change on top of the original quadric error simplification, and supports a variety of user controls.
Computer Graphics Forum, 44(5), 2025. DOI: 10.1111/cgf.70184
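The underlying objects are easy to write down. Below, plane_quadric is the classic Garland-Heckbert construction, and line_quadric penalises squared distance to a 3D line through point q with unit direction d, which is one plausible reading of the term; the exact construction and weighting in the paper may differ.

```python
# Quadrics as 4x4 matrices Q: placing a vertex at homogeneous x = (p, 1)
# costs x^T Q x. Quadrics add, so a line term can be mixed into plain QEM.
import numpy as np

def plane_quadric(n, p0):
    """Squared distance to the plane with unit normal n through point p0."""
    v = np.append(n, -n @ p0)
    return np.outer(v, v)

def line_quadric(d, q):
    """Squared distance to the line q + t*d, with d unit length."""
    A = np.eye(3) - np.outer(d, d)   # projector orthogonal to the line
    b = -A @ q
    Q = np.zeros((4, 4))
    Q[:3, :3], Q[:3, 3], Q[3, :3], Q[3, 3] = A, b, b, q @ A @ q
    return Q

def optimal_position(Q):
    """Classic QEM placement: minimise x^T Q x over the position part."""
    return np.linalg.solve(Q[:3, :3], -Q[:3, 3])

# A lone plane quadric is rank-deficient (any point on the plane is optimal);
# adding small line quadrics restores full rank and a unique, stable solve.
Q = plane_quadric(np.array([0., 0., 1.]), np.zeros(3))
Q += 0.1 * (line_quadric(np.array([1., 0., 0.]), np.zeros(3))
            + line_quadric(np.array([0., 0., 1.]), np.zeros(3)))
print(optimal_position(Q))           # -> the origin
```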
Uniform Sampling of Surfaces by Casting Rays
Selena Ling, Abhishek Madan, Nicholas Sharp, Alec Jacobson
Randomly sampling points on surfaces is an essential operation in geometry processing. This sampling is computationally straightforward on explicit meshes, but it is much more difficult on other shape representations, such as widely used implicit surfaces. This work studies a simple and general scheme for sampling points on a surface, derived from a connection to the intersections of random rays with the surface. Concretely, given a subroutine that casts a ray against a surface and finds all intersections, we can use that subroutine to uniformly sample white-noise points on the surface. This approach is particularly effective in the context of implicit signed distance functions, where sphere marching allows us to efficiently cast rays and sample points without needing to extract an intermediate mesh. We analyze the basic method to show that it guarantees uniformity, and find experimentally that it is significantly more efficient than alternative strategies on a variety of representations. Furthermore, we show extensions to blue-noise and stratified sampling, and applications to deforming neural implicit surfaces and to moment estimation.
Computer Graphics Forum, 44(5), 2025. DOI: 10.1111/cgf.70202. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1111/cgf.70202
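The scheme rests on an integral-geometry fact: the intersections of uniformly distributed random lines with a surface are themselves uniformly distributed by surface area. A minimal sketch follows, where all_hits(origin, direction) stands in for any representation-specific intersector (e.g. sphere tracing an SDF and collecting every crossing); the uniform line measure is realised as a uniform direction plus a uniform offset on a perpendicular disk of the bounding sphere.

```python
# Uniform surface sampling by casting uniformly random lines through the
# shape's bounding sphere and keeping every ray-surface intersection.
import numpy as np

rng = np.random.default_rng(0)

def random_line(center, radius):
    """A line drawn from the uniform line measure restricted to the sphere."""
    d = rng.normal(size=3)
    d /= np.linalg.norm(d)                        # uniform direction
    u = rng.normal(size=3)
    u -= (u @ d) * d
    u /= np.linalg.norm(u)                        # frame of the normal plane
    v = np.cross(d, u)
    r = radius * np.sqrt(rng.uniform())           # uniform point on the disk
    t = rng.uniform(0.0, 2.0 * np.pi)
    origin = center + r * (np.cos(t) * u + np.sin(t) * v) - 2.0 * radius * d
    return origin, d                              # origin lies outside the sphere

def sample_surface(all_hits, center, radius, n):
    pts = []
    while len(pts) < n:
        o, d = random_line(center, radius)
        pts.extend(all_hits(o, d))                # keep ALL intersections
    return np.asarray(pts[:n])                    # truncation bias is negligible
```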
GreenCloud: Volumetric Gradient Filtering via Regularized Green's Functions
Kenji Tojo, Nobuyuki Umetani
Gradient-based optimization is a fundamental tool in geometry processing, but it is often hampered by geometric distortion arising from noisy or sparse gradients. Existing methods mitigate these issues by filtering (i.e., diffusing) gradients over a surface mesh, but they require explicit mesh connectivity and solving large linear systems, making them unsuitable for point-based representations. In this work, we introduce a gradient filtering method tailored for point-based geometry. Our method bypasses explicit connectivity by leveraging regularized Green's functions to directly compute the filtered gradient field from discrete spatial points. Additionally, our approach incorporates elastic deformation based on the Green's function of linear elasticity (known as Kelvinlets), reproducing various elastic behaviors such as smoothness and volume preservation while improving robustness under affine transformations. We further accelerate computation using a hierarchical Barnes-Hut-style approximation, enabling scalable optimization of one million points. Our method significantly improves convergence across a wide range of applications, including reconstruction, editing, stylization, and simplified optimization experiments with Gaussian splatting.
Computer Graphics Forum, 44(5), 2025. DOI: 10.1111/cgf.70207
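Concretely, the building block is the regularized Kelvinlet of de Goes and James [2017]: the elastic displacement response to a smoothed point force. A brute-force sketch of filtering per-point gradients with this kernel is below; the material and regularisation values (mu, nu, eps) are illustrative choices, and the paper replaces the O(n²) double loop with a Barnes-Hut-style hierarchy.

```python
# Gradient filtering with a regularized Kelvinlet kernel: each raw gradient
# acts as a point force and the filtered field is the summed elastic response.
import numpy as np

def kelvinlet(r, f, mu=1.0, nu=0.45, eps=0.1):
    """Displacement at offset r from a regularized point force f
    (de Goes & James 2017), with shear modulus mu and Poisson ratio nu."""
    a = 1.0 / (4.0 * np.pi * mu)
    b = a / (4.0 * (1.0 - nu))
    re = np.sqrt(r @ r + eps * eps)               # regularized distance
    return ((a - b) / re + a * eps**2 / (2.0 * re**3)) * f \
           + b / re**3 * (r @ f) * r

def filter_gradients(points, grads, **kw):
    """Brute-force O(n^2) elastic smoothing of per-point gradients."""
    out = np.zeros_like(grads)
    for i, x in enumerate(points):
        for j, y in enumerate(points):
            out[i] += kelvinlet(x - y, grads[j], **kw)
    return out
```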
MatAIRials: Isotropic Inflatable Metamaterials for Freeform Surface Design
Siyuan He, Meng-Jan Wu, Arthur Lebée, Mélina Skouras
Inflatable pads, such as those used as mattresses or protective equipment, are structures made of two planar membranes sealed according to periodic patterns, typically parallel lines or dots. In this work, we propose to treat these inflatables as metamaterials.
By considering novel sealing patterns with 6-fold symmetry, we are able to generate a family of inflatable materials whose macroscale contraction is isotropic and can be modulated by controlling the parameters of the seals. We leverage this property of our inflatable materials family to propose a simple and effective algorithm based on conformal mapping that allows us to design the layout of inflatable structures that can be fabricated flat and whose inflated shapes approximate those of given target freeform surfaces.
Computer Graphics Forum, 44(5), 2025. DOI: 10.1111/cgf.70190
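A sketch of how the isotropy gets used: a conformal map flattens the target with a pointwise scale factor s, and because the material's contraction is isotropic, a seal parameter can be chosen per region so that the inflated pattern contracts by 1/s. The calibration table below is entirely fabricated for illustration; in practice it would come from simulating or measuring the seal family.

```python
# Map a conformal scale factor to a seal parameter via an (invented)
# monotone calibration of the metamaterial's isotropic contraction ratio.
import numpy as np

seal_params = np.linspace(0.0, 1.0, 11)       # hypothetical seal densities
contraction = np.linspace(1.0, 0.6, 11)       # fabricated contraction ratios

def seal_for_scale(s):
    """Pick the seal parameter whose contraction ratio matches 1/s."""
    target = np.clip(1.0 / s, contraction.min(), contraction.max())
    # np.interp needs increasing x; contraction is decreasing, so reverse it
    return np.interp(target, contraction[::-1], seal_params[::-1])

print(seal_for_scale(1.25))   # scale 1.25 -> contraction 0.8 -> param 0.5
```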
Shape Approximation by Surface Reuse
Berend Baas, David Bommes, Adrien Bousseau
The manufacturing industry faces an urgent need to transition from the linear “make-take-use-dispose” production model towards more sustainable circular models that retain resources in the production chain. Motivated by this need, we introduce the new problem of approximating 3D surfaces by reusing panels from other surfaces. We present an algorithm that takes as input one or several existing shapes and relies on partial shape registration to identify a small set of simple panels that, once cut from the existing shapes and transformed rigidly, approximate a target shape within a user-defined distance threshold. As a proof of concept, we demonstrate our algorithm in the context of rapid prototyping, where we harvest curved panels from plastic bottles and assemble them with custom connectors to fabricate medium-size freeform structures.
Computer Graphics Forum, 44(5), 2025. DOI: 10.1111/cgf.70204
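The selection stage can be pictured as a greedy set-cover loop: register each candidate panel against the still-uncovered part of the target and keep the panel that newly covers the most points within the distance threshold tau. Everything below is an illustrative stand-in; register() denotes some partial rigid registration routine (e.g. a partial ICP), not a function from the paper.

```python
# Greedy panel selection sketch: target is an (n,3) point sampling of the
# goal surface, panels is a list of (m,3) point sets, register(panel, pts)
# is a hypothetical partial rigid registration returning moved panel points.
import numpy as np

def greedy_panel_cover(target, panels, register, tau, max_panels=50):
    uncovered = np.ones(len(target), dtype=bool)
    chosen = []
    while uncovered.any() and len(chosen) < max_panels:
        best, best_cov = None, None
        for k, panel in enumerate(panels):
            moved = register(panel, target[uncovered])
            # distance from each uncovered target point to the moved panel
            d = np.linalg.norm(target[uncovered, None, :] - moved[None, :, :],
                               axis=2).min(axis=1)
            cov = d < tau
            if best is None or cov.sum() > best_cov.sum():
                best, best_cov = k, cov
        if best_cov.sum() == 0:
            break                      # no panel helps within the threshold
        idx = np.flatnonzero(uncovered)
        uncovered[idx[best_cov]] = False
        chosen.append(best)
    return chosen
```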
Im2SurfTex: Surface Texture Generation via Neural Backprojection of Multi-View Images
Yiangos Georgiou, Marios Loizou, Melinos Averkiou, Evangelos Kalogerakis
We present Im2SurfTex, a method that generates textures for input 3D shapes by learning to aggregate multi-view image outputs produced by 2D image diffusion models onto the shapes' texture space. Unlike existing texture generation techniques that use ad hoc backprojection and averaging schemes to blend multi-view images into textures, often resulting in texture seams and artifacts, our approach employs a trained neural module to boost texture coherency. The key ingredient of our module is to leverage neural attention and appropriate positional encodings of image pixels based on their corresponding 3D point positions, normals, and surface-aware coordinates as encoded in geodesic distances within surface patches. These encodings capture texture correlations between neighboring surface points, ensuring better texture continuity. Experimental results show that our module improves texture quality, achieving superior performance in high-resolution texture generation.
Computer Graphics Forum, 44(5), 2025. DOI: 10.1111/cgf.70191. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1111/cgf.70191
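The aggregation can be pictured, per texel, as attention over per-view colour candidates with geometry-aware keys. The sketch below is a deliberately tiny stand-in: random matrices replace the trained projections, and the encodings would in practice include the 3D positions, normals, and geodesic surface coordinates described above.

```python
# Attention-weighted blending of multi-view colours for a single texel,
# with illustrative shapes; W_q, W_k stand in for learned projections.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def aggregate_texel(colors, encodings, query, W_q, W_k):
    """colors: (v,3) candidate colours from v views; encodings: (v,d)
    geometric features per view; query: (d,) feature of the texel."""
    q = W_q @ query                         # project the texel's feature
    keys = encodings @ W_k.T                # project per-view features
    attn = softmax(keys @ q / np.sqrt(len(q)))
    return attn @ colors                    # attention-weighted colour

# toy usage with random weights standing in for trained ones
rng = np.random.default_rng(1)
d, v = 8, 4
texel_rgb = aggregate_texel(rng.random((v, 3)), rng.random((v, d)),
                            rng.random(d), rng.random((d, d)), rng.random((d, d)))
```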