
Graphical Models: Latest Publications

L2-GNN: Graph neural networks with fast spectral filters using twice linear parameterization
IF 2.5 | CAS Tier 4, Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-06-26 | DOI: 10.1016/j.gmod.2025.101276
Siying Huang , Xin Yang , Zhengda Lu , Hongxing Qin , Huaiwen Zhang , Yiqun Wang
To improve learning on irregular 3D shapes, such as meshes with varying discretizations and point clouds with different samplings, we propose L2-GNN, a new graph neural network that approximates spectral filters using twice linear parameterization. First, we parameterize the spectral filters using wavelet filter basis functions. This parameterization enlarges the receptive field of graph convolutions, allowing them to capture low-frequency and high-frequency information simultaneously. Second, we parameterize the wavelet filter basis functions using Chebyshev polynomial basis functions. This parameterization reduces the computational complexity of graph convolutions while maintaining robustness to changes in mesh discretization and point-cloud sampling. Our L2-GNN, built on the fast spectral filter, can be used for shape correspondence, classification, and segmentation tasks on irregular mesh or point-cloud data. Experimental results show that our method outperforms the current state of the art in both quality and efficiency.
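The second parameterization step corresponds to a ChebNet-style filter: expanding the filter in Chebyshev polynomials of the rescaled Laplacian replaces an eigendecomposition with a short recurrence of matrix-vector products. A minimal sketch of that recurrence, using a dense NumPy Laplacian for illustration (the function and variable names are assumptions, not the paper's code):

```python
import numpy as np

def chebyshev_filter(L, x, theta):
    """Apply a spectral filter given by Chebyshev coefficients `theta`.

    L: (n, n) graph Laplacian rescaled so its spectrum lies in [-1, 1].
    x: (n,) signal on the graph nodes.
    Uses T_0(L)x = x, T_1(L)x = Lx, T_k(L)x = 2L T_{k-1}(L)x - T_{k-2}(L)x,
    so the cost is len(theta) matrix-vector products, no eigendecomposition.
    """
    T_prev, T_curr = x, L @ x
    out = theta[0] * T_prev
    if len(theta) > 1:
        out = out + theta[1] * T_curr
    for k in range(2, len(theta)):
        T_prev, T_curr = T_curr, 2.0 * (L @ T_curr) - T_prev
        out = out + theta[k] * T_curr
    return out
```

With `theta` of length K, the filter's receptive field is the K-hop neighborhood of each node, which is what lets such filters trade spectral accuracy for locality and speed.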
Citations: 0
RS-SpecSDF: Reflection-supervised surface reconstruction and material estimation for specular indoor scenes
IF 2.5 | CAS Tier 4, Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-06-25 | DOI: 10.1016/j.gmod.2025.101277
Dong-Yu Chen, Hao-Xiang Chen, Qun-Ce Xu, Tai-Jiang Mu
Neural Radiance Fields (NeRF) have achieved impressive 3D reconstruction quality using implicit scene representations. However, planar specular reflections pose significant challenges for 3D reconstruction. A common practice is to decompose the scene into physically real geometry and virtual images produced by reflections. However, current methods struggle to resolve the ambiguities in this decomposition because they mostly rely on mirror masks as external cues. They also fail to recover accurate surface materials, which are essential for downstream applications of the reconstructed geometry. In this paper, we present RS-SpecSDF, a novel framework for indoor scene surface reconstruction that faithfully reconstructs specular reflectors while accurately separating reflections from scene geometry and recovering the surface's specular fraction and diffuse appearance, all without requiring mirror masks. Our key idea is to perform reflection ray-casting and use it as supervision for the decomposition of reflection and surface material. Our method is based on the observation that the virtual image seen along a camera ray should be consistent with the object the ray hits after reflecting off the specular surface. To exploit this constraint, we propose a Reflection Consistency Loss and a Reflection Certainty Loss to regularize the decomposition. Experiments on both our newly proposed synthetic dataset and a real-captured dataset demonstrate that our method achieves high-quality surface reconstruction and accurate material decomposition without mirror masks.
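The reflection ray-casting supervision hinges on a standard geometric fact: the virtual image seen along a camera ray must agree with what the mirror-reflected ray actually hits. A minimal sketch of the reflection and the resulting consistency residual, with illustrative names not taken from the paper's code:

```python
import numpy as np

def reflect(d, n):
    """Mirror-reflect ray direction d about unit surface normal n:
    r = d - 2 (d . n) n."""
    n = n / np.linalg.norm(n)
    return d - 2.0 * np.dot(d, n) * n

def reflection_consistency(c_virtual, c_traced):
    """Squared error between the radiance predicted for the virtual image and
    the radiance gathered by tracing the reflected ray; a sketch of the idea
    behind a reflection-consistency supervision term."""
    return float(np.sum((c_virtual - c_traced) ** 2))
```

In a full system the traced radiance would come from rendering the reflected ray against the reconstructed scene, and the residual would be weighted by a per-ray certainty as the abstract describes.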
Citations: 0
LDM: Large tensorial SDF model for textured mesh generation
IF 2.5 | CAS Tier 4, Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-06-21 | DOI: 10.1016/j.gmod.2025.101271
Rengan Xie , Kai Huang , Xiaoliang Luo , Yizheng Chen , Lvchun Wang , Qi Wang , Qi Ye , Wei Chen , Wenting Zheng , Yuchi Huo
Previous efforts have managed to generate production-ready 3D assets from text or images. However, these methods primarily employ NeRF or 3D Gaussian representations, which are not adept at producing the smooth, high-quality geometry required by modern rendering pipelines. In this paper, we propose LDM, a Large tensorial SDF Model, which introduces a novel feed-forward framework capable of generating high-fidelity, illumination-decoupled textured meshes from a single image or text prompt. We first use a multi-view diffusion model to generate sparse multi-view inputs from single images or text prompts, and then train a transformer-based model to predict a tensorial SDF field from these sparse multi-view image inputs. Finally, we employ a gradient-based mesh optimization layer to refine this model, enabling it to produce an SDF field from which high-quality textured meshes can be extracted. Extensive experiments demonstrate that our method can generate diverse, high-quality 3D mesh assets with corresponding decomposed RGB textures within seconds. The project code is available at https://github.com/rgxie/LDM.
Citations: 0
Optimization of cross-derivatives for ribbon-based multi-sided surfaces
IF 2.5 | CAS Tier 4, Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-06-19 | DOI: 10.1016/j.gmod.2025.101275
Erkan Gunpinar , A. Alper Tasmektepligil , Márton Vaitkus , Péter Salvi
This work investigates ribbon-based multi-sided surfaces that satisfy positional and cross-derivative constraints to ensure smooth transitions with adjacent tensor-product and multi-sided surfaces. The influence of cross-derivatives, which is crucial to surface quality, is studied within Kato's transfinite surface interpolation rather than through control-point-based methods. To enhance surface quality, the surface is optimized using cost functions based on curvature metrics; in particular, a Gaussian-curvature-based cost function is proposed. An automated optimization procedure is introduced to determine the rotation angles of cross-derivatives around normals and their magnitudes along curves in Kato's interpolation scheme. Experimental results on both primitive (e.g., spherical) and realistic examples highlight the effectiveness of the proposed approach in improving surface quality.
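The two degrees of freedom being optimized per cross-derivative, a rotation angle about the surface normal and a magnitude, can be applied with Rodrigues' rotation formula. A sketch under that assumption (function and parameter names are illustrative, not the authors' implementation):

```python
import numpy as np

def adjust_cross_derivative(v, n, angle, scale):
    """Rotate cross-derivative vector v by `angle` (radians) about the unit
    surface normal n using Rodrigues' formula, then rescale its magnitude:
    v' = scale * (v cos(a) + (n x v) sin(a) + n (n . v)(1 - cos(a)))."""
    n = n / np.linalg.norm(n)
    rotated = (v * np.cos(angle)
               + np.cross(n, v) * np.sin(angle)
               + n * np.dot(n, v) * (1.0 - np.cos(angle)))
    return scale * rotated
```

An optimizer would then search over `(angle, scale)` pairs along each boundary curve to minimize a curvature-based cost.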
Citations: 0
VolumeDiffusion: Feed-forward text-to-3D generation with efficient volumetric encoder
IF 2.5 | CAS Tier 4, Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-06-18 | DOI: 10.1016/j.gmod.2025.101274
Zhicong Tang , Shuyang Gu , Chunyu Wang , Ting Zhang , Jianmin Bao , Dong Chen , Baining Guo
This work presents VolumeDiffusion, a novel feed-forward text-to-3D generation framework that directly synthesizes 3D objects from textual descriptions, bypassing conventional approaches based on score-distillation losses or text-to-image-to-3D pipelines. To scale up the training data for the diffusion model, a novel 3D volumetric encoder is developed to efficiently acquire feature volumes from multi-view images. A diffusion model with a 3D U-Net is then trained on these volumes for text-to-3D generation. This research further addresses the challenges of inaccurate object captions and high-dimensional feature volumes. The proposed model, trained on the public Objaverse dataset, demonstrates promising outcomes in producing diverse and recognizable samples from text prompts. Notably, it enables finer control over object-part characteristics through textual cues, fostering creativity by seamlessly combining multiple concepts within a single object. This research contributes to the progress of 3D generation by introducing an efficient, flexible, and scalable representation methodology.
这项工作提出了VolumeDiffusion,一个新的前馈文本到3D生成框架,直接从文本描述合成3D对象。它绕过了传统的基于分数蒸馏损失或文本到图像到3d的方法。为了扩大扩散模型的训练数据,开发了一种新的三维体积编码器,以有效地从多视图图像中获取特征体积。然后使用3D U-Net在文本到3D生成的扩散模型上训练3D体。该研究进一步解决了不准确的目标标题和高维特征量的挑战。该模型在公共Objaverse数据集上进行了训练,在从文本提示生成多样化和可识别的样本方面展示了有希望的结果。值得注意的是,它可以通过文本线索更好地控制对象部分特征,通过在单个对象中无缝地组合多个概念来培养模型创造力。该研究通过引入一种高效、灵活、可扩展的表示方法,对三维生成的进展做出了重大贡献。
{"title":"VolumeDiffusion: Feed-forward text-to-3D generation with efficient volumetric encoder","authors":"Zhicong Tang ,&nbsp;Shuyang Gu ,&nbsp;Chunyu Wang ,&nbsp;Ting Zhang ,&nbsp;Jianmin Bao ,&nbsp;Dong Chen ,&nbsp;Baining Guo","doi":"10.1016/j.gmod.2025.101274","DOIUrl":"10.1016/j.gmod.2025.101274","url":null,"abstract":"<div><div>This work presents VolumeDiffusion, a novel feed-forward text-to-3D generation framework that directly synthesizes 3D objects from textual descriptions. It bypasses the conventional score distillation loss based or text-to-image-to-3D approaches. To scale up the training data for the diffusion model, a novel 3D volumetric encoder is developed to efficiently acquire feature volumes from multi-view images. The 3D volumes are then trained on a diffusion model for text-to-3D generation using a 3D U-Net. This research further addresses the challenges of inaccurate object captions and high-dimensional feature volumes. The proposed model, trained on the public Objaverse dataset, demonstrates promising outcomes in producing diverse and recognizable samples from text prompts. Notably, it empowers finer control over object part characteristics through textual cues, fostering model creativity by seamlessly combining multiple concepts within a single object. 
This research significantly contributes to the progress of 3D generation by introducing an efficient, flexible, and scalable representation methodology.</div></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"140 ","pages":"Article 101274"},"PeriodicalIF":2.5,"publicationDate":"2025-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144314598","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Goal-oriented 3D pattern adjustment with machine learning
IF 2.5 | CAS Tier 4, Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-06-17 | DOI: 10.1016/j.gmod.2025.101272
Megha Shastry , Ye Fan , Clarissa Martins , Dinesh K. Pai
Fit and sizing of clothing are fundamental problems in the field of garment design, manufacture, and retail. Here we propose new computational methods for adjusting the fit of clothing on realistic models of the human body by interactively modifying desired fit attributes. Clothing fit represents the relationship between the body and the garment, and can be quantified using physical fit attributes such as ease and pressure on the body. However, the relationship between pattern geometry and such fit attributes is notoriously complex and nonlinear, requiring deep pattern making expertise to adjust patterns to achieve fit goals. Such attributes can be computed by physically based simulations, using soft avatars. Here we propose a method to learn the relationship between the fit attributes and the space of 2D pattern edits. We demonstrate our method via interactive tools that directly edit fit attributes in 3D and instantaneously predict the corresponding pattern adjustments. The approach has been tested with a range of garment types, and validated by comparing with physical prototypes. Our method introduces an alternative way to directly express fit adjustment goals, making pattern adjustment more broadly accessible. As an additional benefit, the proposed approach allows pattern adjustments to be systematized, enabling better communication and audit of decisions.
Citations: 0
SEDFMNet: A Simple and Efficient Unsupervised Functional Map for Shape Correspondence Based on Deconstruction
IF 2.5 | CAS Tier 4, Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-06-01 | DOI: 10.1016/j.gmod.2025.101270
Haojun Xu , Qinsong Li , Ling Hu , Shengjun Liu , Haibo Wang , Xinru Liu
In recent years, deep functional maps (DFM) have emerged as a leading learning-based framework for non-rigid shape-matching problems, offering diverse network architectures for this domain. This richness also makes it worthwhile to revisit the design choices behind existing DFM components in search of better performance. This paper delves into this problem and produces SEDFMNet, a simple yet highly efficient DFM pipeline. To achieve this, we systematically deconstruct the core modules of the general DFM framework and, through extensive experiments, analyze key design choices in existing approaches to identify the most critical components. By reassembling these crucial components, we arrive at SEDFMNet, which features a simpler structure than conventional DFM pipelines while delivering superior performance. Our approach is rigorously validated through comprehensive experiments on diverse datasets, where SEDFMNet consistently achieves state-of-the-art results, even in challenging scenarios such as non-isometric shape matching and shape matching with topological noise. Our work offers fresh insights into DFM research and opens new avenues for advancing this field.
Citations: 0
FastClothGNN: Optimizing message passing in Graph Neural Networks for accelerating real-time cloth simulation
IF 2.5 | CAS Tier 4, Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-06-01 | DOI: 10.1016/j.gmod.2025.101273
Yang Zhang, Kailuo Yu, Xinyu Zhang
We present FastClothGNN, an efficient message-aggregation algorithm for Graph Neural Networks (GNNs), designed specifically for real-time cloth simulation in virtual try-on systems. Our approach reduces computational redundancy by optimizing neighbor sampling and minimizing unnecessary message passing between cloth and obstacle nodes. This significantly accelerates real-time cloth simulation, making it well suited to interactive virtual environments. Our experiments demonstrate that the algorithm significantly improves memory efficiency and performance in both training and inference. This optimization allows the algorithm to be applied effectively in resource-constrained settings, providing users with more seamless and immersive interactions and thereby increasing its potential for practical real-time applications.
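The neighbor-pruning idea, dropping cloth-obstacle node pairs that are too far apart before aggregating messages, can be sketched as follows. This is a generic illustration of distance-based edge pruning with mean aggregation, not the paper's released code; `pos`, `feats`, and `radius` are assumed names:

```python
import numpy as np

def prune_edges(pos, edges, radius):
    """Keep only edges whose endpoints are within `radius` of each other,
    discarding long-range pairs before message aggregation."""
    src, dst = edges[:, 0], edges[:, 1]
    dist = np.linalg.norm(pos[src] - pos[dst], axis=1)
    return edges[dist <= radius]

def aggregate(feats, edges, num_nodes):
    """Mean-aggregate source-node features into target nodes over the
    (pruned) edge set."""
    out = np.zeros((num_nodes, feats.shape[1]))
    count = np.zeros(num_nodes)
    for s, t in edges:
        out[t] += feats[s]
        count[t] += 1
    return out / np.maximum(count, 1)[:, None]
```

Pruning before aggregation shrinks both the per-step work and the memory held for message buffers, which is where the reported training and inference savings would come from.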
Citations: 0
DC-APIC: A decomposed compatible affine particle in cell transfer scheme for non-sticky solid–fluid interactions in MPM
IF 2.5 | CAS Tier 4, Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-05-25 | DOI: 10.1016/j.gmod.2025.101269
Chenhui Wang , Jianyang Zhang , Chen Li , Changbo Wang
Although the material point method (MPM) provides a unified particle simulation framework for coupling different materials, it suffers from sticky numerical artifacts, as its standard transfers are inherently restricted to sticky, no-slip interactions. In this paper, we propose a novel transfer scheme, Decomposed Compatible Affine Particle in Cell (DC-APIC), within the MPM framework for simulating the two-way coupled interaction between elastic solids and incompressible fluids under free-slip boundary conditions on a unified background grid. First, we adopt particle-grid compatibility to describe the relationship between grid nodes and particles at the fluid–solid interface, which guides the subsequent particle–grid–particle transfers, and we develop a phase-field gradient method to track the compatibility and normal directions at the interface. Second, to facilitate automatic collision resolution during solid–fluid coupling, the proposed DC-APIC integrator does not transfer the tangential velocity component between incompatible grid nodes, preventing velocity smoothing across phases, while the normal component is transferred without limitation. Finally, our comprehensive results confirm that our approach effectively reduces diffusion and unphysical viscosity compared with traditional MPM.
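The selective transfer at incompatible nodes rests on splitting a velocity into components normal and tangential to the interface; only the normal component crosses the phase boundary, which is what permits free slip. A minimal sketch of that decomposition (illustrative names, not the authors' code):

```python
import numpy as np

def split_velocity(v, n):
    """Decompose velocity v into components normal and tangential to unit n."""
    n = n / np.linalg.norm(n)
    v_normal = np.dot(v, n) * n
    return v_normal, v - v_normal

def transfer(v, n, compatible):
    """Sketch of the selective rule: compatible node pairs exchange the full
    velocity; incompatible pairs exchange only the normal component, so the
    tangential part is free to slip."""
    v_normal, _ = split_velocity(v, n)
    return v if compatible else v_normal
```

Withholding the tangential component at incompatible nodes is what prevents one phase's tangential motion from being smoothed into the other, the "sticky artifact" the abstract describes.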
尽管材料点法(MPM)为不同材料的耦合提供了统一的粒子模拟框架,但MPM存在粘性数值伪影,固有地局限于粘性和无滑移相互作用。本文在MPM框架下,提出了一种新的可分解兼容仿射粒子单元(DC-APIC)传输方案,用于模拟统一背景网格下自由滑移边界条件下弹性固体与不可压缩流体之间的双向耦合相互作用。首先,我们采用颗粒-网格相容性来描述流固界面上网格节点与颗粒之间的关系,为后续颗粒-网格-颗粒转移提供指导。然后,我们提出了一种相场梯度法来跟踪界面处的相容性和法线方向。其次,为了方便固流耦合过程中MPM碰撞自动解决,在本文提出的DC-APIC积分器中,切向分量不会在不兼容的网格节点之间转移,以防止另一阶段的速度平滑,而法向分量的转移没有限制。最后,我们的综合结果证实,与传统的MPM相比,我们的方法有效地降低了扩散和非物理粘度。
{"title":"DC-APIC: A decomposed compatible affine particle in cell transfer scheme for non-sticky solid–fluid interactions in MPM","authors":"Chenhui Wang ,&nbsp;Jianyang Zhang ,&nbsp;Chen Li ,&nbsp;Changbo Wang","doi":"10.1016/j.gmod.2025.101269","DOIUrl":"10.1016/j.gmod.2025.101269","url":null,"abstract":"<div><div>Despite the material point method (MPM) provides a unified particle simulation framework for coupling of different materials, MPM suffers from sticky numerical artifacts, which is inherently restricted to sticky and no-slip interactions. In this paper, we propose a novel transfer scheme called Decomposed Compatible Affine Particle in Cell (DC-APIC) within the MPM framework for simulating the two-way coupled interaction between elastic solids and incompressible fluids under free-slip boundary conditions on a unified background grid. Firstly, we adopt particle-grid compatibility to describe the relationship between grid nodes and particles at the fluid–solid interface, which serves as the guideline for subsequent particle–grid–particle transfers. Then we develop a phase-field gradient method to track the compatibility and normal directions at the interface. Secondly, to facilitate automatic MPM collision resolution during solid–fluid coupling, in the proposed DC-APIC integrator, the tangential component will not be transferred between incompatible grid nodes to prevent velocity smoothing in another phase, while the normal component is transferred without limitations. 
Finally, our comprehensive results confirm that our approach effectively reduces diffusion and unphysical viscosity compared to traditional MPM.</div></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"139 ","pages":"Article 101269"},"PeriodicalIF":2.5,"publicationDate":"2025-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144134591","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
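The core idea the abstract describes — between incompatible particle/grid pairs only the normal velocity component crosses the interface, so tangential motion of one phase is not smoothed into the other — can be illustrated with a toy velocity decomposition. This is a hypothetical sketch of that decomposition only, not the paper's full APIC integrator (function and variable names are assumptions):

```python
import numpy as np

def transfer_velocity(v_particle, normal, compatible):
    """Decide what part of a particle velocity reaches a grid node.
    Compatible pairs exchange the full velocity; incompatible pairs
    exchange only the normal component, preserving free slip."""
    n = normal / np.linalg.norm(normal)
    v_normal = np.dot(v_particle, n) * n   # projection onto interface normal
    return v_particle if compatible else v_normal

v = np.array([3.0, 4.0])   # tangential speed 3, normal speed 4
n = np.array([0.0, 1.0])   # interface normal pointing in +y
v_same_phase = transfer_velocity(v, n, compatible=True)    # full velocity
v_cross_phase = transfer_velocity(v, n, compatible=False)  # normal part only
```

Dropping the tangential term for incompatible pairs is exactly what prevents one phase from dragging the other along the interface, while keeping the normal term still enforces non-penetration.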
Human perception faithful curve reconstruction based on persistent homology and principal curve
IF 2.5 · CAS Zone 4, Computer Science · Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-05-24 · DOI: 10.1016/j.gmod.2025.101267
Yu Chen, Hongwei Lin, Yifan Xing
Reconstructing curves that align with human visual perception from a noisy point cloud is a significant challenge in curve reconstruction. One specific problem is reconstructing curves from a noisy point cloud sampled from multiple intersecting curves, so that the results follow the Gestalt principles and thus yield curves faithful to human perception. The task involves identifying all potential curves in a point cloud and reconstructing approximating curves, which is critical in applications such as trajectory reconstruction, path planning, and computer vision. In this study, we propose an automatic method that combines the topological understanding provided by persistent homology with the local principal curve method to separate and approximate intersecting closed curves from point clouds, ultimately producing B-spline curve reconstructions faithful to human perception. Experimental results demonstrate that the technique effectively handles noisy point clouds and intersections.
Graphical Models, Vol. 139, Article 101267 (2025). Citations: 0
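The local principal curve idea the abstract builds on can be illustrated with a single local-mean smoothing pass, which pulls noisy samples toward the underlying curve. This is a toy sketch under assumed names, not the authors' algorithm, and it omits the persistent-homology step that separates intersecting curves:

```python
import numpy as np

def principal_curve_step(points, bandwidth):
    """One local-mean pass: each point moves to the mean of all points
    within `bandwidth`, attenuating noise around the underlying curve."""
    out = np.empty_like(points)
    for i, p in enumerate(points):
        dist = np.linalg.norm(points - p, axis=1)
        out[i] = points[dist < bandwidth].mean(axis=0)  # includes p itself
    return out

# Noisy samples of a unit circle (a simple closed curve).
rng = np.random.default_rng(1)
t = np.linspace(0.0, 2.0 * np.pi, 200)
noisy = np.stack([np.cos(t), np.sin(t)], axis=1) + 0.05 * rng.normal(size=(200, 2))
smoothed = principal_curve_step(noisy, bandwidth=0.3)
```

After the pass, the spread of the points' radii around 1 shrinks markedly; iterating the step (or fitting a B-spline through the smoothed points, as the paper does in its final stage) yields a clean curve.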