
Latest publications in Graphical Models

EasyAnim: 3D facial animation from in-the-wild videos for avatars with customized riggings
IF 2.2 | CAS Tier 4 (Computer Science) | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-09-23 | DOI: 10.1016/j.gmod.2025.101298
Hao-Xuan Song, Yue Qian, Xiaohang Zhan, Tai-Jiang Mu
3D facial animation of digital avatars driven by RGB videos has extensive applications. In practice, however, it faces a significant challenge: in-the-wild videos vary widely in identity and environment, and rigging designs vary across avatars. The traditional industry pipeline requires a labor-intensive alignment process to ensure compatibility, while recent methods are constrained to a specific rigging standard or require additional labor on actor videos, making them difficult to apply to customized riggings and in-the-wild videos. To make the task easy and convenient, we introduce EasyAnim, which exploits abundant 2D videos to learn an aligned implicit motion flow in an unsupervised manner and maps it to various rigging parameters in a generalized way. A novel framework with self- and cross-reconstruction constraints is proposed to ensure alignment between the avatar and human-actor domains. Extensive experiments demonstrate that EasyAnim generates comparable or even better results with no additional constraints or labor.
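Below is a minimal, hypothetical sketch (not the authors' code) of how self- and cross-reconstruction constraints can tie an actor domain and an avatar-rig domain to a shared implicit motion code; since the two domains are unpaired, the cross term is written as a cycle consistency. All module names and dimensions are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def mlp(d_in, d_out, d_hidden=128):
    return nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU(),
                         nn.Linear(d_hidden, d_out))

class DualDomainMotion(nn.Module):
    """Hypothetical encoders/decoders mapping both domains to one motion code."""
    def __init__(self, d_actor=212, d_rig=61, d_code=32):
        super().__init__()
        self.enc_actor, self.dec_actor = mlp(d_actor, d_code), mlp(d_code, d_actor)
        self.enc_rig,   self.dec_rig   = mlp(d_rig, d_code),   mlp(d_code, d_rig)

    def losses(self, actor_feat, rig_params):
        z_a = self.enc_actor(actor_feat)   # implicit motion code from video features
        z_r = self.enc_rig(rig_params)     # implicit motion code from rig parameters
        # Self-reconstruction: each domain must be recoverable from its own code.
        l_self = (F.mse_loss(self.dec_actor(z_a), actor_feat) +
                  F.mse_loss(self.dec_rig(z_r), rig_params))
        # Cross-reconstruction, written as a cycle because actor/rig data are unpaired:
        # decode a code in the other domain, re-encode it, and require the code to survive.
        l_cross = (F.mse_loss(self.enc_rig(self.dec_rig(z_a)), z_a) +
                   F.mse_loss(self.enc_actor(self.dec_actor(z_r)), z_r))
        return l_self, l_cross

model = DualDomainMotion()
actor_feat = torch.randn(8, 212)   # e.g. per-frame 2D landmark features (hypothetical)
rig_params = torch.randn(8, 61)    # e.g. controller values of a custom rig (hypothetical)
l_self, l_cross = model.losses(actor_feat, rig_params)
(l_self + l_cross).backward()
```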
Citations: 0
Disambiguating flat spots in discrete scalar fields
IF 2.2 | CAS Tier 4 (Computer Science) | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-08-27 | DOI: 10.1016/j.gmod.2025.101299
L. Rocca, F. Iuricich, E. Puppo
We consider 2D scalar fields sampled on a regular grid. When the gradient is low relative to the resolution of the dataset’s range, the signal may contain flat spots: connected areas where all points share the same value. Flat spots hinder certain analyses, such as topological characterization or drainage network computations. We present an algorithm to determine a symbolic slope inside flat spots and consistently place a minimal set of critical points, in a way that is less biased than state-of-the-art methods. We present experimental results on both synthetic and real data, demonstrating how our method provides a more plausible positioning of critical points and a better recovery of the Morse–Smale complex.
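As an illustration of the setting (not the paper's algorithm), the sketch below locates flat spots as equal-valued connected components on a regular grid and assigns each cell a symbolic secondary key; the tie-break rule used here, BFS distance from cells adjacent to a strictly lower neighbour, is only one plausible choice.

```python
from collections import deque
import numpy as np

def flat_spot_order(field):
    h, w = field.shape
    order = np.zeros((h, w), dtype=int)          # symbolic slope inside flat spots
    def nbrs(i, j):
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w:
                yield ni, nj
    seen = np.zeros((h, w), dtype=bool)
    for i in range(h):
        for j in range(w):
            if seen[i, j]:
                continue
            # Flood-fill the maximal connected region of equal value.
            comp, q = [], deque([(i, j)])
            seen[i, j] = True
            while q:
                a, b = q.popleft()
                comp.append((a, b))
                for na, nb in nbrs(a, b):
                    if not seen[na, nb] and field[na, nb] == field[i, j]:
                        seen[na, nb] = True
                        q.append((na, nb))
            if len(comp) == 1:
                continue
            # Seeds: cells of the flat spot touching a strictly lower neighbour.
            seeds = [(a, b) for a, b in comp
                     if any(field[na, nb] < field[a, b] for na, nb in nbrs(a, b))]
            # BFS distance from the "drainage" side gives a consistent ordering;
            # remaining ties would still need the paper's disambiguation.
            dist = {c: 0 for c in seeds}
            q = deque(seeds)
            while q:
                a, b = q.popleft()
                for na, nb in nbrs(a, b):
                    if (na, nb) not in dist and field[na, nb] == field[a, b]:
                        dist[(na, nb)] = dist[(a, b)] + 1
                        q.append((na, nb))
            for c, d in dist.items():
                order[c] = d
    return order

order = flat_spot_order(np.array([[2, 2, 2, 3],
                                  [2, 2, 2, 3],
                                  [1, 2, 2, 3]], dtype=float))
```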
Citations: 0
Edge-aware denoising framework for real-time mobile ray tracing
IF 2.2 | CAS Tier 4 (Computer Science) | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-08-27 | DOI: 10.1016/j.gmod.2025.101301
Haosen Fu, Mingcong Ma, Junqiu Zhu, Lu Wang, Yanning Xu
With the proliferation of mobile hardware-accelerated ray tracing, visual quality at low sampling rates (1spp) significantly deteriorates due to high-frequency noise and temporal artifacts introduced by Monte Carlo path tracing. Traditional spatiotemporal denoising methods, such as Spatiotemporal Variance-Guided Filtering (SVGF), effectively suppress noise by fusing multi-frame information and using geometry buffer (G-buffer) guided filters. However, their reliance on per-frame variance computation and global filtering imposes prohibitive overhead for mobile devices. This paper proposes an edge-aware, data-driven real-time denoising architecture within the SVGF framework, tailored explicitly for mobile computational constraints. Our method introduces two key innovations that eliminate variance estimation overhead: (1) an adaptive filtering kernel sizing mechanism, which dynamically adjusts filtering scope based on local complexity analysis of the G-buffer; and (2) a data-driven weight table construction strategy, converting traditional computational processes into efficient real-time lookup operations. These innovations significantly enhance processing efficiency while preserving edge accuracy. Experimental results on the Qualcomm Snapdragon 768G platform demonstrate that our method achieves 55 FPS with 1spp input. This frame rate is 67.42% higher than mobile-optimized SVGF, provides better visual quality, and reduces power consumption by 16.80%. Our solution offers a practical and efficient denoising framework suitable for real-time ray tracing in mobile gaming and AR/VR applications.
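The sketch below illustrates, in hypothetical form, the two ideas named above: choosing a per-pixel filter radius from local G-buffer complexity and replacing per-tap weight evaluation with a precomputed lookup table. It is a CPU toy in Python, not the paper's mobile implementation, and the thresholds and table parameters are invented.

```python
import numpy as np

# Offline: tabulate an edge-stopping weight over a quantized feature distance.
BINS, SIGMA = 64, 0.25
table = np.exp(-(np.linspace(0.0, 1.0, BINS) ** 2) / (2.0 * SIGMA ** 2))

def adaptive_radius(depth, normals, win=2, r_min=1, r_max=4):
    """Pick a large radius in smooth regions, a small one near geometric detail."""
    h, w = depth.shape
    radius = np.full((h, w), r_max, dtype=int)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - win), min(h, y + win + 1)
            x0, x1 = max(0, x - win), min(w, x + win + 1)
            complexity = depth[y0:y1, x0:x1].var() + normals[y0:y1, x0:x1].var()
            radius[y, x] = r_min if complexity > 1e-3 else r_max
    return radius

def filter_pixel(color, depth, y, x, radius):
    h, w = depth.shape
    acc, wsum = np.zeros(3), 0.0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                d = min(abs(depth[ny, nx] - depth[y, x]), 1.0)
                wgt = table[int(d * (BINS - 1))]   # table lookup instead of exp() per tap
                acc += wgt * color[ny, nx]
                wsum += wgt
    return acc / max(wsum, 1e-8)

rng = np.random.default_rng(0)
depth, normals, color = rng.random((8, 8)), rng.random((8, 8, 3)), rng.random((8, 8, 3))
rad = adaptive_radius(depth, normals)
out = filter_pixel(color, depth, 4, 4, int(rad[4, 4]))
```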
Citations: 0
Adaptive mesh-aligned Gaussian Splatting for monocular human avatar reconstruction
IF 2.2 | CAS Tier 4 (Computer Science) | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-08-25 | DOI: 10.1016/j.gmod.2025.101300
Hai Yuan, Xia Yuan, Yanli Liu, Guanyu Xing, Jing Hu, Xi Wu, Zijun Zhou
Virtual human avatars are essential for applications such as gaming, augmented reality, and virtual production. However, existing methods struggle to achieve high-fidelity reconstruction from monocular input while keeping hardware costs low. Many approaches rely on the SMPL body prior and apply vertex offsets to represent clothed avatars. Unfortunately, excessive offsets often cause misalignment and blurred contours, particularly around clothing wrinkles, silhouette boundaries, and facial regions. To address these limitations, we propose a dual-branch framework for human avatar reconstruction from monocular video. A lightweight Vertex Align Net (VAN) predicts per-vertex normal-direction offsets on the SMPL mesh to achieve coarse geometric alignment and guide Gaussian-based human avatar modeling. In parallel, we construct a high-resolution facial Gaussian branch based on FLAME-estimated parameters, with facial regions localized via pretrained detectors. The facial and body renderings are fused using a semantic mask to enhance facial clarity and ensure a globally consistent avatar appearance. Experiments demonstrate that our method surpasses state-of-the-art approaches in modeling animatable human avatars with fine-grained fidelity.
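As a rough illustration of the offset step only (the magnitudes, names, and clamping rule below are assumptions, not the paper's code), per-vertex signed distances predicted along vertex normals can deform a template body mesh before Gaussians are attached to it:

```python
import numpy as np

def vertex_normals(verts, faces):
    normals = np.zeros_like(verts)
    tri = verts[faces]                                     # (F, 3, 3)
    fn = np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0])
    for k in range(3):                                     # accumulate face normals per vertex
        np.add.at(normals, faces[:, k], fn)
    return normals / (np.linalg.norm(normals, axis=1, keepdims=True) + 1e-8)

def apply_normal_offsets(verts, faces, offsets, max_offset=0.03):
    """offsets: (V,) signed distances predicted per vertex; clamped so the
    deformed surface stays close to the body prior."""
    offsets = np.clip(offsets, -max_offset, max_offset)
    return verts + offsets[:, None] * vertex_normals(verts, faces)

# Toy usage on a single-triangle "mesh"; a real pipeline would use the SMPL template.
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
faces = np.array([[0, 1, 2]])
deformed = apply_normal_offsets(verts, faces, np.array([0.01, -0.02, 0.05]))
```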
Citations: 0
Efficient RANSAC in 4D Plane Space for Point Cloud Registration
IF 2.2 | CAS Tier 4 (Computer Science) | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-08-22 | DOI: 10.1016/j.gmod.2025.101289
Chang Liu, Chao Liu, Yuming Zhang, Zhongqi Wu, Jianwei Guo
3D registration methods based on point-level information struggle with noise, density variation, large-scale point sets, and small overlaps, while existing primitive-based methods are usually sensitive to tiny errors in the primitive extraction process. In this paper, we present a reliable and efficient global registration algorithm exploiting RANdom SAmple Consensus (RANSAC) in the plane space instead of the point space. To improve the inlier ratio in the putative correspondences, we design an inner plane-based descriptor, termed Convex Hull Descriptor (CHD), and an inter plane-based descriptor, termed PLane Feature Histograms (PLFH), which take full advantage of plane contour shape and plane-wise relationships, respectively. Based on these new descriptors, we randomly select corresponding plane pairs to compute candidate transformations, followed by a hypothesis-verification step to identify the optimal registration. Extensive tests on large-scale point sets demonstrate the effectiveness of our method and show that it notably improves registration performance over state-of-the-art methods in terms of efficiency and accuracy.
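A heavily simplified, assumption-based sketch of the sampling stage follows (it is not the paper's CHD/PLFH pipeline): each plane is represented as a unit normal n and offset d with n·x = d, three plane correspondences fix a rigid transform, and inliers are counted directly in plane space.

```python
import numpy as np

def rotation_from_normals(ns, nt):
    """Kabsch-style rotation aligning source normals ns to target normals nt."""
    H = ns.T @ nt
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # keep a proper rotation
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R

def plane_ransac(src, tgt, iters=500, ang_tol=0.05, off_tol=0.02, seed=0):
    """src, tgt: lists of putative plane correspondences, each plane = (n, d)."""
    rng = np.random.default_rng(seed)
    ns_all = np.array([p[0] for p in src]); ds_all = np.array([p[1] for p in src])
    nt_all = np.array([p[0] for p in tgt]); dt_all = np.array([p[1] for p in tgt])
    best = (-1, None, None)
    for _ in range(iters):
        idx = rng.choice(len(src), size=3, replace=False)
        R = rotation_from_normals(ns_all[idx], nt_all[idx])
        # For x' = R x + t, a plane (n, d) maps to (R n, d + (R n)·t),
        # so each pair gives the linear constraint n_t·t = d_t - d_s.
        t, *_ = np.linalg.lstsq(nt_all[idx], dt_all[idx] - ds_all[idx], rcond=None)
        ang_ok = np.einsum('ij,ij->i', ns_all @ R.T, nt_all) > 1.0 - ang_tol
        off_ok = np.abs(nt_all @ t - (dt_all - ds_all)) < off_tol
        inliers = int(np.count_nonzero(ang_ok & off_ok))
        if inliers > best[0]:
            best = (inliers, R, t)
    return best   # (inlier count, R, t), to be refined by hypothesis verification

# Toy usage: axis-aligned planes, target shifted by t = (0.1, 0, 0).
src = [(np.array([1., 0., 0.]), 1.0), (np.array([0., 1., 0.]), 2.0),
       (np.array([0., 0., 1.]), 0.5), (np.array([1., 0., 0.]), -1.0)]
tgt = [(n, d + n @ np.array([0.1, 0.0, 0.0])) for n, d in src]
inliers, R, t = plane_ransac(src, tgt)
```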
Citations: 0
DISCO: Efficient Diffusion Solver for large-scale Combinatorial Optimization problems
IF 2.2 | CAS Tier 4 (Computer Science) | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-08-21 | DOI: 10.1016/j.gmod.2025.101284
Hang Zhao, Kexiong Yu, Yuhang Huang, Renjiao Yi, Chenyang Zhu, Kai Xu
Combinatorial Optimization (CO) problems are fundamentally important in numerous real-world applications across diverse industries, notably computer graphics, and are characterized by enormous solution spaces and time-sensitive response requirements. Despite recent advancements in neural solvers, their limited expressiveness struggles to capture the multi-modal nature of CO landscapes. While some research has adopted diffusion models, these methods sample solutions indiscriminately from the entire NP-complete solution space with time-consuming denoising processes, limiting scalability for large-scale problems. We propose DISCO, an efficient DIffusion Solver for large-scale Combinatorial Optimization problems that excels in both solution quality and inference speed. DISCO's efficacy is twofold: First, it enhances solution quality by constraining the sampling space to a more meaningful domain guided by solution residues, while preserving the multi-modal properties of the output distributions. Second, it accelerates the denoising process through an analytically solvable approach, enabling solution sampling with very few reverse-time steps and significantly reducing inference time. This inference-speed advantage is further amplified by Jittor, a high-performance learning framework based on just-in-time compiling and meta-operators. DISCO delivers strong performance on large-scale Traveling Salesman Problems and challenging Maximal Independent Set benchmarks, with inference duration up to 5.38 times faster than existing diffusion-solver alternatives. We apply DISCO to design 2D/3D TSP Art, enabling the generation of fluid stroke sequences at reduced path costs. By incorporating DISCO's multi-modal property into a divide-and-conquer strategy, it can further generalize to solve unseen-scale instances out of the box.
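For orientation only, here is a generic, hypothetical sketch of how a diffusion-style TSP solver is typically used at inference: a per-edge heatmap is produced by a handful of reverse steps (the denoiser below is an untrained placeholder), and a tour is then decoded greedily from it. This is not DISCO's model or its analytic sampler.

```python
import numpy as np

def placeholder_denoiser(xt, t, dist):
    # Stand-in for a learned network: bias edge scores toward short edges.
    return 1.0 / (dist + 1e-6) + 0.0 * xt * t

def few_step_heatmap(dist, steps=3, seed=0):
    rng = np.random.default_rng(seed)
    xt = rng.random(dist.shape)                    # start from noise
    for t in reversed(range(1, steps + 1)):
        xt = placeholder_denoiser(xt, t, dist)     # one reverse-time step per iteration
    return xt

def greedy_decode(heat):
    n = heat.shape[0]
    tour, visited = [0], {0}
    while len(tour) < n:
        cur = tour[-1]
        nxt = max((j for j in range(n) if j not in visited), key=lambda j: heat[cur, j])
        tour.append(nxt)
        visited.add(nxt)
    return tour

pts = np.random.default_rng(1).random((8, 2))
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
tour = greedy_decode(few_step_heatmap(dist))
```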
Citations: 0
DIFF: A dataset for indoor flexible furniture
IF 2.2 | CAS Tier 4 (Computer Science) | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-08-19 | DOI: 10.1016/j.gmod.2025.101293
Jia-Hong Liu, Shao-Kui Zhang, Shuran Sun, Zihao Wang, Song-Hai Zhang
Recently, indoor scene synthesis has attracted significant attention, leading to the development of numerous indoor datasets. However, existing datasets only address static furniture and scenes, ignoring the need for dynamic interior design scenarios that emphasize flexible functionality. Addressing this gap, we present DIFF (Dataset for Indoor Flexible Furniture), featuring expertly crafted and labeled furniture modules capable of transforming between different states; for example, a cabinet can be transformed into a desk. Each module can flexibly shift to multiple shapes and functionalities. Additionally, we propose a method that adapts our dataset to generate flexible layouts. By matching our flexible objects to objects from existing datasets, we use a graph-based approach to migrate spatial relation priors for optimizing a layout; subsequent layouts are then generated by minimizing a transition-cost function. Analyses and user studies validate the quality of our modules and demonstrate the plausibility of the proposed method.
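A toy sketch of the transition-cost idea is given below (all module names, states, and weights are invented): each flexible module has a set of discrete states, and the next layout picks, per module, the state and position that minimize a cost combining state switches and movement, subject to any requested functionality.

```python
import itertools

# Hypothetical modules and the states they can transform into.
MODULES = {"unit_A": ["cabinet", "desk"], "unit_B": ["bed", "sofa"]}

def transition_cost(prev_state, new_state, move_dist, w_switch=1.0, w_move=0.5):
    return w_switch * (prev_state != new_state) + w_move * move_dist

def next_layout(current, required_states, candidate_positions):
    """current: {module: (state, (x, y))}; required_states: {module: state}."""
    layout = {}
    for m, (prev_state, prev_pos) in current.items():
        wanted = required_states.get(m)
        states = [wanted] if wanted else MODULES[m]
        best = min(
            itertools.product(states, candidate_positions[m]),
            key=lambda sp: transition_cost(
                prev_state, sp[0],
                abs(sp[1][0] - prev_pos[0]) + abs(sp[1][1] - prev_pos[1])))
        layout[m] = best
    return layout

current = {"unit_A": ("cabinet", (0.0, 0.0)), "unit_B": ("bed", (2.0, 1.0))}
layout = next_layout(
    current,
    required_states={"unit_A": "desk"},              # the new layout needs a desk
    candidate_positions={"unit_A": [(0.0, 0.0), (1.0, 0.5)],
                         "unit_B": [(2.0, 1.0), (0.5, 2.0)]})
```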
Citations: 0
Nav2Scene: Navigation-driven fine-tuning for robot-friendly scene generation
IF 2.2 | CAS Tier 4 (Computer Science) | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-08-17 | DOI: 10.1016/j.gmod.2025.101287
Bowei Jiang, Tongyuan Bai, Peng Zheng, Tieru Wu, Rui Ma
The integration of embodied intelligence in indoor scene synthesis holds significant potential for future interior design applications. Nevertheless, prevailing methodologies for indoor scene synthesis predominantly adhere to data-driven learning paradigms. Despite achieving photorealistic 3D renderings through such approaches, current frameworks systematically neglect to incorporate agent-centric functional metrics essential for optimizing navigational topology and task-oriented interactivity in embodied AI systems like service robotics platforms or autonomous domestic assistants. For example, poorly arranged furniture may prevent robots from effectively interacting with the environment, and this issue cannot be fully resolved by merely introducing prior constraints. To fill this gap, we propose Nav2Scene, a novel plug-and-play fine-tuning mechanism that can be deployed on existing scene generators to enhance the suitability of generated scenes for efficient robot navigation. Specifically, we first introduce path planning score (PPS), which is defined based on the results of the path planning algorithm and can be used to evaluate the robot navigation suitability of a given scene. Then, we pre-compute the PPS of 3D scenes from existing datasets and train a ScoreNet to efficiently predict the PPS of the generated scenes. Finally, the predicted PPS is used to guide the fine-tuning of existing scene generators and produce indoor scenes with higher PPS, indicating improved suitability for robot navigation. We conduct experiments on the 3D-FRONT dataset for different tasks including scene generation, completion and re-arrangement. The results demonstrate that by incorporating our Nav2Scene mechanism, the fine-tuned scene generators can produce scenes with improved navigation compatibility for home robots, while maintaining superior or comparable performance in terms of scene quality and diversity.
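To make the path planning score concrete, here is one plausible way such a score could be computed (the paper's exact definition may differ): rasterize the scene into an occupancy grid, sample start/goal pairs, and average a path-efficiency term over the pairs a planner can connect.

```python
from collections import deque
import numpy as np

def shortest_path_len(grid, start, goal):
    """BFS path length on a 4-connected occupancy grid (0 = free, 1 = blocked)."""
    h, w = grid.shape
    q, dist = deque([start]), {start: 0}
    while q:
        cur = q.popleft()
        if cur == goal:
            return dist[cur]
        for d in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + d[0], cur[1] + d[1])
            if (0 <= nxt[0] < h and 0 <= nxt[1] < w
                    and grid[nxt] == 0 and nxt not in dist):
                dist[nxt] = dist[cur] + 1
                q.append(nxt)
    return None                                   # unreachable

def path_planning_score(grid, n_pairs=100, seed=0):
    rng = np.random.default_rng(seed)
    free = np.argwhere(grid == 0)
    score = 0.0
    for _ in range(n_pairs):
        s, g = (tuple(free[i]) for i in rng.choice(len(free), 2, replace=False))
        plen = shortest_path_len(grid, s, g)
        if plen is not None:
            manhattan = abs(s[0] - g[0]) + abs(s[1] - g[1])
            score += manhattan / plen              # efficiency in (0, 1]
    return score / n_pairs

grid = np.zeros((20, 20), dtype=int)
grid[5:15, 10] = 1                                 # a wall of "furniture"
pps = path_planning_score(grid)
```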
Citations: 0
Collision-free path planning method for digital orthodontic treatment
IF 2.2 | CAS Tier 4 (Computer Science) | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-08-15 | DOI: 10.1016/j.gmod.2025.101297
Hao Yu, Longdu Liu, Shuangmin Chen, Lin Lu, Yuanfeng Zhou, Shiqing Xin, Changhe Tu
The rapid evolution of digital orthodontics has highlighted a critical need for automated treatment planning systems that balance computational efficiency with clinical reliability. However, existing methods still suffer from several limitations, including excessive clinician involvement (accounting for over 35% of treatment planning time), reliance on empirically defined key frames, and limited biomechanical plausibility, particularly in cases of severe dental crowding. This paper proposes a novel collision-free optimization framework to address these issues simultaneously. Our method defines a total movement energy function evaluated over each tooth’s pose at intermediate time frames. This energy is minimized iteratively using a steepest descent strategy. A rollback mechanism is employed: if inter-tooth penetration is detected during an update, the step size is halved repeatedly until collisions are eliminated. The framework allows flexible control over the number of intermediate frames to enforce a strict constraint on per-tooth displacement, limiting it to 0.2 mm translation or 2° rotation every 10 to 14 days. Clinical evaluations show that the proposed algorithm can generate desirable and clinically valid tooth movement plans, even in complex cases, while significantly reducing the need for manual intervention.
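The optimization loop described above maps naturally to a short sketch; the one below uses a placeholder energy and collision test (the paper's definitions are richer, and rotation is omitted) but follows the stated scheme: steepest descent on a total movement energy, per-stage translation capped at 0.2 mm, and repeated halving of the step whenever an update introduces an inter-tooth collision.

```python
import numpy as np

MAX_TRANS = 0.2   # mm of translation allowed per stage (every 10 to 14 days)

def collides(poses, min_gap=0.5):
    """Placeholder collision test: tooth centroids closer than min_gap 'penetrate'."""
    d = np.linalg.norm(poses[:, None] - poses[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return bool((d < min_gap).any())

def plan_stage(poses, targets, step=0.1, max_halvings=10):
    # Gradient of a simple quadratic movement energy sum ||poses - targets||^2.
    grad = 2.0 * (poses - targets)
    for _ in range(max_halvings):
        delta = -step * grad
        # Enforce the clinical per-stage displacement limit.
        norms = np.linalg.norm(delta, axis=-1, keepdims=True)
        delta = delta * np.minimum(1.0, MAX_TRANS / np.maximum(norms, 1e-9))
        candidate = poses + delta
        if not collides(candidate):
            return candidate                       # accepted collision-free update
        step *= 0.5                                # rollback: retry with half the step
    return poses                                   # no admissible step found this stage

poses = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.2, 0.0, 0.0]])   # tooth centroids
targets = poses + np.array([[0.5, 0.0, 0.0], [0.0, 0.0, 0.0], [-0.5, 0.0, 0.0]])
stage1 = plan_stage(poses, targets)
```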
Citations: 0
DP-Adapter: Dual-pathway adapter for boosting fidelity and text consistency in customizable human image generation
IF 2.2 | CAS Tier 4 (Computer Science) | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-08-15 | DOI: 10.1016/j.gmod.2025.101292
Ye Wang, Ruiqi Liu, Xuping Xie, Lanjun Wang, Zili Yi, Rui Ma
With the growing popularity of personalized human content creation and sharing, there is a rising demand for advanced techniques in customized human image generation. However, current methods struggle to simultaneously maintain the fidelity of human identity and ensure the consistency of textual prompts, often resulting in suboptimal outcomes. This shortcoming is primarily due to the lack of effective constraints during the simultaneous integration of visual and textual prompts, leading to unhealthy mutual interference that compromises the full expression of both types of input. Building on prior research that suggests visual and textual conditions influence different regions of an image in distinct ways, we introduce a novel Dual-Pathway Adapter (DP-Adapter) to enhance both high-fidelity identity preservation and textual consistency in personalized human image generation. Our approach begins by decoupling the target human image into visually sensitive and text-sensitive regions. For visually sensitive regions, DP-Adapter employs an Identity-Enhancing Adapter (IEA) to preserve detailed identity features. For text-sensitive regions, we introduce a Textual-Consistency Adapter (TCA) to minimize visual interference and ensure the consistency of textual semantics. To seamlessly integrate these pathways, we develop a Fine-Grained Feature-Level Blending (FFB) module that efficiently combines hierarchical semantic features from both pathways, resulting in more natural and coherent synthesis outcomes. Additionally, DP-Adapter supports various innovative applications, including controllable headshot-to-full-body portrait generation, age editing, old-photo to reality, and expression editing. Extensive experiments demonstrate that DP-Adapter outperforms state-of-the-art methods in both visual fidelity and text consistency, highlighting its effectiveness and versatility in the field of human image generation.
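As a toy illustration of the blending step only (the real FFB module operates on hierarchical diffusion features with learned weights; the mask, shapes, and blending rule here are assumptions), masked feature-level blending between an identity pathway and a text pathway can be written as:

```python
import numpy as np

def blend_features(identity_feat, text_feat, identity_mask):
    """identity_feat, text_feat: (C, H, W); identity_mask: (H, W) in [0, 1],
    1 where the result should follow the identity pathway (e.g. the face)."""
    m = identity_mask[None]                        # broadcast the mask over channels
    return m * identity_feat + (1.0 - m) * text_feat

C, H, W = 8, 16, 16
identity_feat = np.random.default_rng(0).random((C, H, W))
text_feat = np.random.default_rng(1).random((C, H, W))
mask = np.zeros((H, W))
mask[4:12, 4:12] = 1.0                             # hypothetical face region
fused = blend_features(identity_feat, text_feat, mask)
```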
Citations: 0