L²-GNN: Graph neural networks with fast spectral filters using twice linear parameterization
Pub Date: 2025-06-26 | DOI: 10.1016/j.gmod.2025.101276
Siying Huang, Xin Yang, Zhengda Lu, Hongxing Qin, Huaiwen Zhang, Yiqun Wang
To improve learning on irregular 3D shapes, such as meshes with varying discretizations and point clouds with different samplings, we propose L²-GNN, a new graph neural network that approximates spectral filters using twice linear parameterization. First, we parameterize the spectral filters using wavelet filter basis functions. This parameterization enlarges the receptive field of graph convolutions, which can then capture low-frequency and high-frequency information simultaneously. Second, we parameterize the wavelet filter basis functions using Chebyshev polynomial basis functions. This parameterization reduces the computational complexity of graph convolutions while maintaining robustness to changes in mesh discretization and point cloud sampling. Built on this fast spectral filter, L²-GNN can be used for shape correspondence, classification, and segmentation tasks on irregular mesh or point cloud data. Experimental results show that our method outperforms the current state of the art in both quality and efficiency.
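The abstract does not spell out the Chebyshev step, but the standard Chebyshev approximation of a spectral graph filter, which parameterizations like this build on, is well known; a minimal NumPy sketch (function and argument names ours, not the paper's code) might look like:

```python
import numpy as np

def chebyshev_filter(L, x, theta, lmax=2.0):
    """Apply a spectral filter g(L) x approximated by Chebyshev polynomials.

    L     : (n, n) graph Laplacian (dense here for clarity)
    x     : (n, f) node signals
    theta : (K,) filter coefficients, one per Chebyshev order (K >= 1)
    lmax  : largest eigenvalue of L (2.0 is a common upper bound
            for the normalized Laplacian)
    """
    n = L.shape[0]
    # Rescale the Laplacian so its spectrum lies in [-1, 1],
    # the interval on which Chebyshev polynomials are defined.
    L_tilde = (2.0 / lmax) * L - np.eye(n)

    t_prev, t_curr = x, L_tilde @ x          # T_0(L~) x and T_1(L~) x
    out = theta[0] * t_prev
    if len(theta) > 1:
        out = out + theta[1] * t_curr
    for k in range(2, len(theta)):
        # Chebyshev recurrence: T_k = 2 L~ T_{k-1} - T_{k-2}
        t_prev, t_curr = t_curr, 2.0 * (L_tilde @ t_curr) - t_prev
        out = out + theta[k] * t_curr
    return out
```

Each polynomial order extends the receptive field by one hop, which is why truncating the expansion keeps the convolution cheap while the filter remains a function of the spectrum rather than of any particular discretization.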
{"title":"L2-GNN: Graph neural networks with fast spectral filters using twice linear parameterization","authors":"Siying Huang , Xin Yang , Zhengda Lu , Hongxing Qin , Huaiwen Zhang , Yiqun Wang","doi":"10.1016/j.gmod.2025.101276","DOIUrl":"10.1016/j.gmod.2025.101276","url":null,"abstract":"<div><div>To improve learning on irregular 3D shapes, such as meshes with varying discretizations and point clouds with different samplings, we propose L<span><math><msup><mrow></mrow><mrow><mn>2</mn></mrow></msup></math></span>-GNN, a new graph neural network that approximates the spectral filters using twice linear parameterization. First, we parameterize the spectral filters using wavelet filter basis functions. The parameterization allows for an enlarged receptive field of graph convolutions, which can simultaneously capture low-frequency and high-frequency information. Second, we parameterize the wavelet filter basis functions using Chebyshev polynomial basis functions. This parameterization reduces the computational complexity of graph convolutions while maintaining robustness to the change of mesh discretization and point cloud sampling. Our L<span><math><msup><mrow></mrow><mrow><mn>2</mn></mrow></msup></math></span>-GNN based on the fast spectral filter can be used for shape correspondence, classification, and segmentation tasks on non-regular mesh or point cloud data. Experimental results show that our method outperforms the current state of the art in terms of both quality and efficiency.</div></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"140 ","pages":"Article 101276"},"PeriodicalIF":2.5,"publicationDate":"2025-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144490699","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
RS-SpecSDF: Reflection-supervised surface reconstruction and material estimation for specular indoor scenes
Pub Date: 2025-06-25 | DOI: 10.1016/j.gmod.2025.101277
Dong-Yu Chen, Hao-Xiang Chen, Qun-Ce Xu, Tai-Jiang Mu
Neural Radiance Fields (NeRF) have achieved impressive 3D reconstruction quality using implicit scene representations. However, planar specular reflections pose significant challenges for 3D reconstruction. It is common practice to decompose the scene into physically real geometry and the virtual images produced by reflections. However, current methods struggle to resolve the ambiguities in this decomposition because they mostly rely on mirror masks as external cues. They also fail to acquire accurate surface materials, which are essential for downstream applications of the recovered geometry. In this paper, we present RS-SpecSDF, a novel framework for indoor scene surface reconstruction that faithfully reconstructs specular reflectors while accurately decomposing reflections from the scene geometry and recovering the specular fraction and diffuse appearance of surfaces, all without requiring mirror masks. Our key idea is to perform reflection ray-casting and use it as supervision for the decomposition of reflection and surface material. Our method is based on the observation that the virtual image seen along a camera ray should be consistent with the object that the ray hits after reflecting off the specular surface. To leverage this constraint, we propose a Reflection Consistency Loss and a Reflection Certainty Loss to regularize the decomposition. Experiments on both our newly proposed synthetic dataset and a real-captured dataset demonstrate that our method achieves high-quality surface reconstruction and accurate material decomposition without mirror masks.
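The consistency constraint rests on the standard mirror reflection of a ray direction about the surface normal; a small sketch of that geometric primitive (not the authors' code):

```python
import numpy as np

def reflect_ray(d, n):
    """Reflect an incoming ray direction d about the unit surface normal n.

    The virtual image seen along d should match what the reflected ray
    actually hits; that agreement is the kind of constraint the paper's
    reflection losses enforce.
    """
    d = d / np.linalg.norm(d)
    n = n / np.linalg.norm(n)
    return d - 2.0 * np.dot(d, n) * n
```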
{"title":"RS-SpecSDF: Reflection-supervised surface reconstruction and material estimation for specular indoor scenes","authors":"Dong-Yu Chen, Hao-Xiang Chen, Qun-Ce Xu, Tai-Jiang Mu","doi":"10.1016/j.gmod.2025.101277","DOIUrl":"10.1016/j.gmod.2025.101277","url":null,"abstract":"<div><div>Neural Radiance Field (NeRF) has achieved impressive 3D reconstruction quality using implicit scene representations. However, planar specular reflections pose significant challenges in the 3D reconstruction task. It is a common practice to decompose the scene into physically real geometries and virtual images produced by the reflections. However, current methods struggle to resolve the ambiguities in the decomposition process, because they mostly rely on mirror masks as external cues. They also fail to acquire accurate surface materials, which is essential for downstream applications of the recovered geometries. In this paper, we present RS-SpecSDF, a novel framework for indoor scene surface reconstruction that can faithfully reconstruct specular reflectors while accurately decomposing the reflection from the scene geometries and recovering the accurate specular fraction and diffuse appearance of the surface without requiring mirror masks. Our key idea is to perform reflection ray-casting and use it as supervision for the decomposition of reflection and surface material. Our method is based on an observation that the virtual image seen by the camera ray should be consistent with the object that the ray hits after reflecting off the specular surface. To leverage this constraint, we propose the Reflection Consistency Loss and Reflection Certainty Loss to regularize the decomposition. Experiments conducted on both our newly-proposed synthetic dataset and a real-captured dataset demonstrate that our method achieves high-quality surface reconstruction and accurate material decomposition results without the need of mirror masks.</div></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"140 ","pages":"Article 101277"},"PeriodicalIF":2.5,"publicationDate":"2025-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144472382","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
LDM: Large tensorial SDF model for textured mesh generation
Pub Date: 2025-06-21 | DOI: 10.1016/j.gmod.2025.101271
Rengan Xie, Kai Huang, Xiaoliang Luo, Yizheng Chen, Lvchun Wang, Qi Wang, Qi Ye, Wei Chen, Wenting Zheng, Yuchi Huo
Previous efforts have managed to generate production-ready 3D assets from text or images. However, these methods primarily employ NeRF or 3D Gaussian representations, which are not adept at producing the smooth, high-quality geometry required by modern rendering pipelines. In this paper, we propose LDM, a Large tensorial SDF Model: a novel feed-forward framework capable of generating high-fidelity, illumination-decoupled textured meshes from a single image or a text prompt. We first use a multi-view diffusion model to generate sparse multi-view inputs from the image or text prompt, then train a transformer-based model to predict a tensorial SDF field from these sparse multi-view images. Finally, we employ a gradient-based mesh optimization layer to refine this model, enabling it to produce an SDF field from which high-quality textured meshes can be extracted. Extensive experiments demonstrate that our method can generate diverse, high-quality 3D mesh assets with corresponding decomposed RGB textures within seconds. The project code is available at https://github.com/rgxie/LDM.
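The final step, extracting a mesh from an SDF field, is commonly done with marching cubes; a minimal sketch using scikit-image on a placeholder SDF (the sphere is a stand-in, not the paper's data):

```python
import numpy as np
from skimage import measure

# Placeholder SDF: a sphere of radius 0.35 sampled on a 64^3 grid.
res = 64
grid = np.linspace(-0.5, 0.5, res)
x, y, z = np.meshgrid(grid, grid, grid, indexing="ij")
sdf = np.sqrt(x**2 + y**2 + z**2) - 0.35

# Extract the zero level set of the SDF as a triangle mesh.
verts, faces, normals, _ = measure.marching_cubes(sdf, level=0.0)
```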
{"title":"LDM: Large tensorial SDF model for textured mesh generation","authors":"Rengan Xie , Kai Huang , Xiaoliang Luo , Yizheng Chen , Lvchun Wang , Qi Wang , Qi Ye , Wei Chen , Wenting Zheng , Yuchi Huo","doi":"10.1016/j.gmod.2025.101271","DOIUrl":"10.1016/j.gmod.2025.101271","url":null,"abstract":"<div><div>Previous efforts have managed to generate production-ready 3D assets from text or images. However, these methods primarily employ NeRF or 3D Gaussian representations, which are not adept at producing smooth, high-quality geometries required by modern rendering pipelines. In this paper, we propose LDM, a <strong>L</strong>arge tensorial S<strong>D</strong>F <strong>M</strong>odel, which introduces a novel feed-forward framework capable of generating high-fidelity, illumination-decoupled textured mesh from a single image or text prompts. We firstly utilize a multi-view diffusion model to generate sparse multi-view inputs from single images or text prompts, and then a transformer-based model is trained to predict a tensorial SDF field from these sparse multi-view image inputs. Finally, we employ a gradient-based mesh optimization layer to refine this model, enabling it to produce an SDF field from which high-quality textured meshes can be extracted. Extensive experiments demonstrate that our method can generate diverse, high-quality 3D mesh assets with corresponding decomposed RGB textures within seconds. The project code is available at <span><span>https://github.com/rgxie/LDM</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"140 ","pages":"Article 101271"},"PeriodicalIF":2.5,"publicationDate":"2025-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144330266","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimization of cross-derivatives for ribbon-based multi-sided surfaces
Pub Date: 2025-06-19 | DOI: 10.1016/j.gmod.2025.101275
Erkan Gunpinar, A. Alper Tasmektepligil, Márton Vaitkus, Péter Salvi
This work investigates ribbon-based multi-sided surfaces that satisfy positional and cross-derivative constraints to ensure smooth transitions to adjacent tensor-product and multi-sided surfaces. The influence of cross-derivatives, which is crucial to surface quality, is studied within Kato’s transfinite surface interpolation rather than control-point-based methods. To enhance surface quality, the surface is optimized using cost functions based on curvature metrics; in particular, a new Gaussian curvature-based cost function is proposed. An automated optimization procedure determines the rotation angles of the cross-derivatives around the normals and their magnitudes along the curves in Kato’s interpolation scheme. Experimental results on both primitive (e.g., spherical) and realistic examples highlight the effectiveness of the proposed approach in improving surface quality.
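For readers unfamiliar with Kato-style transfinite interpolation: the side ribbons are typically blended with weights that are singular along the corresponding boundary, so each ribbon dominates near its own side. A common distance-based form (our notation; the paper may use a variant) is

$$ S(u,v) \;=\; \sum_{i=1}^{n} B_i(u,v)\, R_i\!\left(s_i, d_i\right), \qquad B_i(u,v) \;=\; \frac{\prod_{j \ne i} d_j^{2}}{\sum_{k=1}^{n} \prod_{j \ne k} d_j^{2}}, $$

where $d_i$ is a distance-like parameter vanishing on side $i$, $s_i$ is the side parameter, and $R_i$ is the ribbon carrying the positional and cross-derivative data whose rotation angles and magnitudes the proposed procedure optimizes.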
{"title":"Optimization of cross-derivatives for ribbon-based multi-sided surfaces","authors":"Erkan Gunpinar , A. Alper Tasmektepligil , Márton Vaitkus , Péter Salvi","doi":"10.1016/j.gmod.2025.101275","DOIUrl":"10.1016/j.gmod.2025.101275","url":null,"abstract":"<div><div>This work investigates ribbon-based multi-sided surfaces that satisfy positional and cross-derivative constraints to ensure smooth transitions with adjacent tensor-product and multi-sided surfaces. The influence of cross-derivatives, crucial to surface quality, is studied within Kato’s transfinite surface interpolation instead of control point-based methods. To enhance surface quality, the surface is optimized using cost functions based on curvature metrics. Specifically, a Gaussian curvature-based cost function is also proposed in this work. An automated optimization procedure is introduced to determine rotation angles of cross-derivatives around normals and their magnitudes along curves in Kato’s interpolation scheme. Experimental results using both primitive (e.g., spherical) and realistic examples highlight the effectiveness of the proposed approach in improving surface quality.</div></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"140 ","pages":"Article 101275"},"PeriodicalIF":2.5,"publicationDate":"2025-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144314599","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
VolumeDiffusion: Feed-forward text-to-3D generation with efficient volumetric encoder
Pub Date: 2025-06-18 | DOI: 10.1016/j.gmod.2025.101274
Zhicong Tang, Shuyang Gu, Chunyu Wang, Ting Zhang, Jianmin Bao, Dong Chen, Baining Guo
This work presents VolumeDiffusion, a novel feed-forward text-to-3D generation framework that synthesizes 3D objects directly from textual descriptions, bypassing conventional approaches based on score-distillation losses or text-to-image-to-3D pipelines. To scale up the training data for the diffusion model, a novel 3D volumetric encoder is developed to efficiently acquire feature volumes from multi-view images. A diffusion model built on a 3D U-Net is then trained on these volumes for text-to-3D generation. This research further addresses the challenges of inaccurate object captions and high-dimensional feature volumes. The proposed model, trained on the public Objaverse dataset, produces diverse and recognizable samples from text prompts. Notably, it enables finer control over object-part characteristics through textual cues and fosters creativity by seamlessly combining multiple concepts within a single object. This research contributes to the progress of 3D generation by introducing an efficient, flexible, and scalable representation methodology.
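The abstract does not give the training objective; a generic DDPM-style noise-prediction step on encoded feature volumes, of the kind feed-forward 3D diffusion frameworks typically use, could be sketched as follows (all names and shapes hypothetical, not the authors' code):

```python
import torch
import torch.nn.functional as F

def diffusion_step(unet3d, volumes, text_emb, alphas_cumprod):
    """One DDPM-style training step on encoded feature volumes.

    unet3d         : a 3D U-Net predicting the added noise
    volumes        : (B, C, D, H, W) feature volumes from the encoder
    text_emb       : (B, L, E) text conditioning
    alphas_cumprod : (T,) tensor, cumulative noise schedule
    """
    b = volumes.shape[0]
    t = torch.randint(0, len(alphas_cumprod), (b,), device=volumes.device)
    a_bar = alphas_cumprod[t].view(b, 1, 1, 1, 1)

    # Forward process: mix the clean volume with Gaussian noise.
    noise = torch.randn_like(volumes)
    noisy = a_bar.sqrt() * volumes + (1.0 - a_bar).sqrt() * noise

    # The network is trained to recover the injected noise.
    pred = unet3d(noisy, t, text_emb)
    return F.mse_loss(pred, noise)
```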
{"title":"VolumeDiffusion: Feed-forward text-to-3D generation with efficient volumetric encoder","authors":"Zhicong Tang , Shuyang Gu , Chunyu Wang , Ting Zhang , Jianmin Bao , Dong Chen , Baining Guo","doi":"10.1016/j.gmod.2025.101274","DOIUrl":"10.1016/j.gmod.2025.101274","url":null,"abstract":"<div><div>This work presents VolumeDiffusion, a novel feed-forward text-to-3D generation framework that directly synthesizes 3D objects from textual descriptions. It bypasses the conventional score distillation loss based or text-to-image-to-3D approaches. To scale up the training data for the diffusion model, a novel 3D volumetric encoder is developed to efficiently acquire feature volumes from multi-view images. The 3D volumes are then trained on a diffusion model for text-to-3D generation using a 3D U-Net. This research further addresses the challenges of inaccurate object captions and high-dimensional feature volumes. The proposed model, trained on the public Objaverse dataset, demonstrates promising outcomes in producing diverse and recognizable samples from text prompts. Notably, it empowers finer control over object part characteristics through textual cues, fostering model creativity by seamlessly combining multiple concepts within a single object. This research significantly contributes to the progress of 3D generation by introducing an efficient, flexible, and scalable representation methodology.</div></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"140 ","pages":"Article 101274"},"PeriodicalIF":2.5,"publicationDate":"2025-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144314598","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Goal-oriented 3D pattern adjustment with machine learning
Pub Date: 2025-06-17 | DOI: 10.1016/j.gmod.2025.101272
Megha Shastry, Ye Fan, Clarissa Martins, Dinesh K. Pai
Fit and sizing of clothing are fundamental problems in garment design, manufacture, and retail. Here we propose new computational methods for adjusting the fit of clothing on realistic models of the human body by interactively modifying desired fit attributes. Clothing fit represents the relationship between the body and the garment, and can be quantified using physical fit attributes such as ease and pressure on the body. However, the relationship between pattern geometry and such fit attributes is notoriously complex and nonlinear, so adjusting patterns to achieve fit goals requires deep pattern-making expertise. Such attributes can be computed by physically based simulation using soft avatars. Here we propose a method to learn the relationship between the fit attributes and the space of 2D pattern edits. We demonstrate our method via interactive tools that directly edit fit attributes in 3D and instantaneously predict the corresponding pattern adjustments. The approach has been tested on a range of garment types and validated against physical prototypes. Our method introduces an alternative way to express fit-adjustment goals directly, making pattern adjustment more broadly accessible. As an additional benefit, it allows pattern adjustments to be systematized, enabling better communication and auditing of decisions.
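One concrete reading of "learning the relationship between fit attributes and pattern edits" is a local linear response model whose pseudoinverse maps a desired attribute change back to a pattern edit; a toy NumPy sketch under that assumption (data and dimensions invented):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: pattern-edit vectors p and the fit-attribute
# vectors a that simulation would produce for them (both invented).
P = rng.standard_normal((200, 12))     # 12 pattern parameters per sample
J_true = rng.standard_normal((12, 4))  # unknown "true" linear response
A = P @ J_true                         # 4 fit attributes per sample

# Fit a linear map a ~ J^T p by least squares.
J_hat, *_ = np.linalg.lstsq(P, A, rcond=None)

# Given a desired change in fit attributes, recover a pattern edit
# via the pseudoinverse (the minimum-norm edit achieving the change).
delta_a = np.array([0.5, 0.0, -0.2, 0.0])
delta_p = np.linalg.pinv(J_hat.T) @ delta_a
```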
{"title":"Goal-oriented 3D pattern adjustment with machine learning","authors":"Megha Shastry , Ye Fan , Clarissa Martins , Dinesh K. Pai","doi":"10.1016/j.gmod.2025.101272","DOIUrl":"10.1016/j.gmod.2025.101272","url":null,"abstract":"<div><div>Fit and sizing of clothing are fundamental problems in the field of garment design, manufacture, and retail. Here we propose new computational methods for adjusting the fit of clothing on realistic models of the human body by interactively modifying desired <em>fit attributes</em>. Clothing fit represents the relationship between the body and the garment, and can be quantified using physical fit attributes such as ease and pressure on the body. However, the relationship between pattern geometry and such fit attributes is notoriously complex and nonlinear, requiring deep pattern making expertise to adjust patterns to achieve fit goals. Such attributes can be computed by physically based simulations, using soft avatars. Here we propose a method to learn the relationship between the fit attributes and the space of 2D pattern edits. We demonstrate our method via interactive tools that directly edit fit attributes in 3D and instantaneously predict the corresponding pattern adjustments. The approach has been tested with a range of garment types, and validated by comparing with physical prototypes. Our method introduces an alternative way to directly express fit adjustment goals, making pattern adjustment more broadly accessible. As an additional benefit, the proposed approach allows pattern adjustments to be systematized, enabling better communication and audit of decisions.</div></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"140 ","pages":"Article 101272"},"PeriodicalIF":2.5,"publicationDate":"2025-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144298108","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
SEDFMNet: A Simple and Efficient Unsupervised Functional Map for Shape Correspondence Based on Deconstruction
Pub Date: 2025-06-01 | DOI: 10.1016/j.gmod.2025.101270
Haojun Xu, Qinsong Li, Ling Hu, Shengjun Liu, Haibo Wang, Xinru Liu
In recent years, deep functional maps (DFM) have emerged as a leading learning-based framework for non-rigid shape matching, offering diverse network architectures for this domain. This richness also makes it worthwhile to explore better design choices for existing DFM components in pursuit of higher performance. This paper delves into this problem and produces SEDFMNet, a simple yet highly efficient DFM pipeline. To achieve this, we systematically deconstruct the core modules of the general DFM framework and, through extensive experiments, analyze key design choices in existing approaches to identify the most critical components. By reassembling these crucial components, we arrive at SEDFMNet, which features a simpler structure than conventional DFM pipelines while delivering superior performance. Our approach is rigorously validated through comprehensive experiments on diverse datasets, where SEDFMNet consistently achieves state-of-the-art results, even in challenging scenarios such as non-isometric shape matching and matching in the presence of topological noise. Our work offers fresh insights into DFM research and opens new avenues for advancing the field.
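At the core of any DFM pipeline sits the classical functional-map solve of Ovsjanikov et al.: descriptors are projected into the Laplace–Beltrami eigenbases of the two shapes and a small matrix C is found by least squares. A minimal NumPy sketch of that shared component (not SEDFMNet itself):

```python
import numpy as np

def solve_functional_map(feat_src, feat_tgt, evecs_src, evecs_tgt, k=30):
    """Solve for the k x k functional map C with C A ~= B.

    feat_*  : (n_*, f) pointwise descriptors on each shape
    evecs_* : (n_*, k) Laplace-Beltrami eigenvectors
    """
    # Project descriptors into each shape's spectral basis.
    A = np.linalg.pinv(evecs_src[:, :k]) @ feat_src   # (k, f)
    B = np.linalg.pinv(evecs_tgt[:, :k]) @ feat_tgt   # (k, f)
    # C A = B  <=>  A^T C^T = B^T, solved column-wise by least squares.
    C_T, *_ = np.linalg.lstsq(A.T, B.T, rcond=None)
    return C_T.T
```

In a learned pipeline the descriptors are network outputs and this solve is a differentiable layer, so regularization and training losses act on C.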
{"title":"SEDFMNet: A Simple and Efficient Unsupervised Functional Map for Shape Correspondence Based on Deconstruction","authors":"Haojun Xu , Qinsong Li , Ling Hu , Shengjun Liu , Haibo Wang , Xinru Liu","doi":"10.1016/j.gmod.2025.101270","DOIUrl":"10.1016/j.gmod.2025.101270","url":null,"abstract":"<div><div>In recent years, deep functional maps (DFM) have emerged as a leading learning-based framework for non-rigid shape-matching problems, offering diverse network architectures for this domain. This richness also makes exploring better and novel design beliefs for existing powerful DFM components to promote performance meaningful and engaging. This paper delves into this problem and successfully produces the SEDFMNet, a simple yet highly efficient DFM pipeline. To achieve this, we systematically deconstruct the core modules of the general DFM framework and analyze key design choices in existing approaches to identify the most critical components through extensive experiments. By reassembling these crucial components, we culminate in developing our SEDFMNet, which features a simpler structure than conventional DFM pipelines while delivering superior performance. Our approach is rigorously validated through comprehensive experiments on diverse datasets, where the SEDFMNet consistently achieves state-of-the-art results, even in challenging scenarios such as non-isometric shape matching and shape matching with topological noise. Our work offers fresh insights into DFM research and opens new avenues for advancing this field.</div></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"139 ","pages":"Article 101270"},"PeriodicalIF":2.5,"publicationDate":"2025-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144203918","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
FastClothGNN: Optimizing message passing in Graph Neural Networks for accelerating real-time cloth simulation
Pub Date: 2025-06-01 | DOI: 10.1016/j.gmod.2025.101273
Yang Zhang, Kailuo Yu, Xinyu Zhang
We present FastClothGNN, an efficient message-aggregation algorithm for Graph Neural Networks (GNNs) designed specifically for real-time cloth simulation in virtual try-on systems. Our approach reduces computational redundancy by optimizing neighbor sampling and minimizing unnecessary message passing between cloth and obstacle nodes. This significantly accelerates cloth simulation, making it well suited to interactive virtual environments. Our experiments demonstrate that the algorithm significantly improves memory efficiency and both training and inference performance in GNNs. These optimizations allow the algorithm to run effectively on resource-constrained devices, providing users with more seamless and immersive interactions and increasing its potential for practical real-time applications.
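A sketch of the two pruning ideas the abstract names, capped neighbor sampling and dropping distant cloth-obstacle messages, in plain NumPy (structure and thresholds hypothetical, not the paper's implementation):

```python
import numpy as np

def aggregate_messages(x, edges, is_obstacle, pos, max_nbrs=8, radius=0.05):
    """Mean-aggregate neighbor features with two pruning rules:
      1. cap each node at max_nbrs randomly sampled neighbors;
      2. drop cloth-obstacle edges longer than a contact radius.
    edges is a list of directed (receiver, sender) index pairs.
    """
    n = x.shape[0]
    nbrs = [[] for _ in range(n)]
    for i, j in edges:
        # Skip cloth-obstacle messages outside the contact radius.
        if is_obstacle[i] != is_obstacle[j]:
            if np.linalg.norm(pos[i] - pos[j]) > radius:
                continue
        nbrs[i].append(j)

    rng = np.random.default_rng(0)
    out = np.zeros_like(x)
    for i in range(n):
        if not nbrs[i]:
            continue
        take = nbrs[i]
        if len(take) > max_nbrs:
            take = rng.choice(take, size=max_nbrs, replace=False)
        out[i] = x[np.asarray(take)].mean(axis=0)
    return out
```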
{"title":"FastClothGNN: Optimizing message passing in Graph Neural Networks for accelerating real-time cloth simulation","authors":"Yang Zhang, Kailuo Yu, Xinyu Zhang","doi":"10.1016/j.gmod.2025.101273","DOIUrl":"10.1016/j.gmod.2025.101273","url":null,"abstract":"<div><div>We present an efficient message aggregation algorithm FastClothGNN for Graph Neural Networks (GNNs) specifically designed for real-time cloth simulation in virtual try-on systems. Our approach reduces computational redundancy by optimizing neighbor sampling and minimizing unnecessary message-passing between cloth and obstacle nodes. This significantly accelerates the real-time performance of cloth simulation, making it ideal for interactive virtual environments. Our experiments demonstrate that our algorithm significantly enhances memory efficiency and improve the performance both in training and in inference in GNNs. This optimization enables our algorithm to be effectively applied to resource-constrained, providing users with more seamless and immersive interactions and thereby increasing the potential for practical real-time applications.</div></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"139 ","pages":"Article 101273"},"PeriodicalIF":2.5,"publicationDate":"2025-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144240201","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
DC-APIC: A decomposed compatible affine particle in cell transfer scheme for non-sticky solid–fluid interactions in MPM
Pub Date: 2025-05-25 | DOI: 10.1016/j.gmod.2025.101269
Chenhui Wang, Jianyang Zhang, Chen Li, Changbo Wang
Although the material point method (MPM) provides a unified particle-simulation framework for coupling different materials, it suffers from sticky numerical artifacts, as it is inherently restricted to sticky, no-slip interactions. In this paper, we propose a novel transfer scheme, Decomposed Compatible Affine Particle-in-Cell (DC-APIC), within the MPM framework for simulating two-way coupled interaction between elastic solids and incompressible fluids under free-slip boundary conditions on a unified background grid. First, we adopt particle–grid compatibility to describe the relationship between grid nodes and particles at the fluid–solid interface, which guides the subsequent particle–grid–particle transfers; we develop a phase-field gradient method to track the compatibility and normal directions at the interface. Second, to facilitate automatic MPM collision resolution during solid–fluid coupling, the proposed DC-APIC integrator does not transfer the tangential velocity component between incompatible grid nodes, preventing velocity smoothing into the other phase, while the normal component is transferred without restriction. Finally, our comprehensive results confirm that our approach effectively reduces diffusion and unphysical viscosity compared to traditional MPM.
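The key transfer rule is stated directly in the abstract: between incompatible nodes only the normal velocity component crosses the interface. A minimal sketch of that decomposition (helper name ours):

```python
import numpy as np

def transfer_velocity(v_p, n, compatible):
    """Velocity a particle contributes to a grid node (sketch).

    Compatible node: transfer the full velocity.
    Incompatible node (other phase across the interface): transfer
    only the normal component, so tangential motion is not smoothed
    across the free-slip interface.
    """
    if compatible:
        return v_p
    n = n / np.linalg.norm(n)
    return np.dot(v_p, n) * n
```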
{"title":"DC-APIC: A decomposed compatible affine particle in cell transfer scheme for non-sticky solid–fluid interactions in MPM","authors":"Chenhui Wang , Jianyang Zhang , Chen Li , Changbo Wang","doi":"10.1016/j.gmod.2025.101269","DOIUrl":"10.1016/j.gmod.2025.101269","url":null,"abstract":"<div><div>Despite the material point method (MPM) provides a unified particle simulation framework for coupling of different materials, MPM suffers from sticky numerical artifacts, which is inherently restricted to sticky and no-slip interactions. In this paper, we propose a novel transfer scheme called Decomposed Compatible Affine Particle in Cell (DC-APIC) within the MPM framework for simulating the two-way coupled interaction between elastic solids and incompressible fluids under free-slip boundary conditions on a unified background grid. Firstly, we adopt particle-grid compatibility to describe the relationship between grid nodes and particles at the fluid–solid interface, which serves as the guideline for subsequent particle–grid–particle transfers. Then we develop a phase-field gradient method to track the compatibility and normal directions at the interface. Secondly, to facilitate automatic MPM collision resolution during solid–fluid coupling, in the proposed DC-APIC integrator, the tangential component will not be transferred between incompatible grid nodes to prevent velocity smoothing in another phase, while the normal component is transferred without limitations. Finally, our comprehensive results confirm that our approach effectively reduces diffusion and unphysical viscosity compared to traditional MPM.</div></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"139 ","pages":"Article 101269"},"PeriodicalIF":2.5,"publicationDate":"2025-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144134591","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Human perception faithful curve reconstruction based on persistent homology and principal curve
Pub Date: 2025-05-24 | DOI: 10.1016/j.gmod.2025.101267
Yu Chen, Hongwei Lin, Yifan Xing
Reconstructing curves that align with human visual perception from a noisy point cloud is a significant challenge in curve reconstruction. A specific problem is reconstructing curves from a noisy point cloud sampled from multiple intersecting curves, ensuring that the results align with the Gestalt principles and thus yield curves faithful to human perception. This task involves identifying all potential curves in a point cloud and reconstructing approximating curves, which is critical in applications such as trajectory reconstruction, path planning, and computer vision. In this study, we propose an automatic method that uses the topological understanding provided by persistent homology, together with the local principal curve method, to separate and approximate intersecting closed curves from point clouds, ultimately producing human-perception-faithful reconstructions as B-spline curves. Experimental results demonstrate that the technique effectively handles noisy point clouds and intersections.
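Once the intersecting curves have been separated, fitting a closed B-spline to each noisy sample set can be done with SciPy's periodic smoothing splines; a small sketch on stand-in data (a noisy circle, not the paper's datasets):

```python
import numpy as np
from scipy import interpolate

rng = np.random.default_rng(0)

# Stand-in samples along one separated closed curve: a noisy circle.
t = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
pts = np.stack([np.cos(t), np.sin(t)]) + 0.02 * rng.standard_normal((2, 200))

# Fit a periodic (closed) smoothing B-spline; s trades fidelity
# against noise tolerance.
tck, _ = interpolate.splprep(pts, s=0.5, per=True)

# Evaluate the fitted closed curve densely.
curve = np.stack(interpolate.splev(np.linspace(0.0, 1.0, 400), tck))
```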
{"title":"Human perception faithful curve reconstruction based on persistent homology and principal curve","authors":"Yu Chen, Hongwei Lin, Yifan Xing","doi":"10.1016/j.gmod.2025.101267","DOIUrl":"10.1016/j.gmod.2025.101267","url":null,"abstract":"<div><div>Reconstructing curves that align with human visual perception from a noisy point cloud presents a significant challenge in the field of curve reconstruction. A specific problem involves reconstructing curves from a noisy point cloud sampled from multiple intersecting curves, ensuring that the reconstructed results align with the Gestalt principles and thus produce curves faithful to human perception. This task involves identifying all potential curves from a point cloud and reconstructing approximating curves, which is critical in applications such as trajectory reconstruction, path planning, and computer vision. In this study, we propose an automatic method that utilizes the topological understanding provided by persistent homology and the local principal curve method to separate and approximate the intersecting closed curves from point clouds, ultimately achieving successful human perception faithful curve reconstruction results using B-spline curves. This technique effectively addresses noisy data clouds and intersections, as demonstrated by experimental results.</div></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"139 ","pages":"Article 101267"},"PeriodicalIF":2.5,"publicationDate":"2025-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144131313","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}