
Graphical Models: Latest Publications

Carvable packing of revolved 3D objects for subtractive manufacturing
IF 2.2 | CAS Tier 4 (Computer Science) | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-09-01 | Epub Date: 2025-08-05 | DOI: 10.1016/j.gmod.2025.101282
Chengdong Wei, Shuai Feng, Hao Xu, Qidong Zhang, Songyang Zhang, Zongzhen Li, Changhe Tu, Haisen Zhao
Revolved 3D objects are widely used in industrial, manufacturing, and artistic fields, with subtractive manufacturing being a common production method. A key preprocessing step is to maximize raw material utilization by generating as many rough-machined inputs as possible from a single stock piece, which poses a packing problem constrained by tool accessibility. The main challenge is integrating tool accessibility into packing. This paper introduces the carvable packing problem for revolved objects, a critical but under-researched area in subtractive manufacturing. We propose a new carvable coarsening hull and a packing strategy that uses beam search and a bottom-up placement method to position these hulls in the stock material. Our method was tested on diverse sets of revolved objects with different geometries, and physical tests were conducted on a 5-axis machining platform, proving its ability to enhance material use and manufacturability.
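The beam search with bottom-up placement can be illustrated with a deliberately simplified sketch: each revolved part is reduced to its bounding cylinder and the stock is a single cylinder filled bottom-up along its axis. The paper's carvable coarsening hulls and 5-axis tool-accessibility constraints are far richer; the function names and the cylinder simplification below are illustrative assumptions, not taken from the paper.

```python
def beam_search_pack(parts, stock_radius, stock_height, beam_width=3):
    """Toy bottom-up beam-search packing: stack bounding cylinders
    (radius, height) of revolved parts inside a cylindrical stock.
    A state is (height used, tuple of packed part indices); only the
    best `beam_width` partial packings survive each expansion."""
    beam = [(0.0, ())]
    for _ in parts:
        candidates = []
        for used, packed in beam:
            for i, (r, h) in enumerate(parts):
                if i in packed:
                    continue
                if r <= stock_radius and used + h <= stock_height:
                    candidates.append((used + h, packed + (i,)))
        if not candidates:
            break
        # Prefer states that pack more parts, then those using less stock.
        candidates.sort(key=lambda s: (-len(s[1]), s[0]))
        beam = candidates[:beam_width]
    return max(beam, key=lambda s: len(s[1]))

# Part 2 is too wide for the stock; parts 0, 1, 3 together exceed its height,
# so the best packing keeps two parts with minimal height used.
parts = [(2.0, 4.0), (1.5, 3.0), (3.5, 2.0), (1.0, 5.0)]
used, packed = beam_search_pack(parts, stock_radius=3.0, stock_height=10.0)
```

A real implementation would replace the height-only feasibility check with collision tests between carvable hulls and would score states by material utilization rather than stacked height.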
Citations: 0
GPU-accelerated rendering of vector strokes with piecewise quadratic approximation
IF 2.2 | CAS Tier 4 (Computer Science) | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-09-01 | Epub Date: 2025-08-13 | DOI: 10.1016/j.gmod.2025.101295
Xuhai Chen , Guangze Zhang , Wanyi Wang , Juan Cao , Zhonggui Chen
Vector graphics are widely used in areas such as logo design and digital painting, with both stroked and filled paths as primitives. GPU-based rendering of filled paths already has well-established solutions. Because stroked paths are more complex, existing methods often render them by approximating strokes with filled shapes; however, their performance still leaves room for improvement. This paper presents a GPU-accelerated rendering algorithm together with a curvature-guided parallel adaptive subdivision method to render stroke areas accurately and efficiently. Additionally, we propose an efficient Newton-iteration-based method for arc-length parameterization of quadratic curves, along with an error estimation technique. This enables parallel rendering of dashed stroke styles and arc-length-guided texture filling. Experimental results show that our method achieves average speedups of 3.4× for rendering quadratic stroked paths and 2.5× for rendering quadratic dashed strokes, compared to the best existing approaches.
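Newton-based arc-length parameterization can be sketched for a single quadratic Bézier segment: given a target length s, solve arclen(t) = s with the iteration t ← t − (arclen(t) − s) / |B′(t)|, since d(arclen)/dt = |B′(t)|. The arc-length integral below uses composite Simpson quadrature rather than the paper's scheme, and the error-estimation technique is omitted; the function names are assumptions of this sketch.

```python
import math

def quad_bezier_deriv(p0, p1, p2, t):
    """Derivative B'(t) of a quadratic Bezier with control points p0, p1, p2."""
    dx = 2 * (1 - t) * (p1[0] - p0[0]) + 2 * t * (p2[0] - p1[0])
    dy = 2 * (1 - t) * (p1[1] - p0[1]) + 2 * t * (p2[1] - p1[1])
    return dx, dy

def speed(p0, p1, p2, t):
    dx, dy = quad_bezier_deriv(p0, p1, p2, t)
    return math.hypot(dx, dy)

def arc_length(p0, p1, p2, t, n=64):
    """Composite Simpson approximation of the arc length on [0, t] (n even)."""
    if t == 0.0:
        return 0.0
    h = t / n
    total = speed(p0, p1, p2, 0.0) + speed(p0, p1, p2, t)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * speed(p0, p1, p2, i * h)
    return total * h / 3.0

def param_at_length(p0, p1, p2, s, tol=1e-9, max_iter=20):
    """Newton iteration for the t with arc_length(t) = s, i.e. arc-length
    parameterization of one quadratic segment."""
    total = arc_length(p0, p1, p2, 1.0)
    t = s / total  # length-ratio initial guess
    for _ in range(max_iter):
        f = arc_length(p0, p1, p2, t) - s
        if abs(f) < tol:
            break
        t -= f / speed(p0, p1, p2, t)  # ds/dt = |B'(t)|
        t = min(max(t, 0.0), 1.0)
    return t
```

For a degenerate straight segment (0,0), (1,0), (2,0), B(t) = (2t, 0), so the point at half the total length is t = 0.5, which the iteration recovers immediately from the initial guess.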
Citations: 0
Real-time neural soft shadow synthesis from hard shadows
IF 2.2 | CAS Tier 4 (Computer Science) | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-09-01 | Epub Date: 2025-08-14 | DOI: 10.1016/j.gmod.2025.101294
Ran Chen , Xiang Xu , KaiYao Ge , Yanning Xu , Xiangxu Meng , Lu Wang
Soft shadows play a crucial role in enhancing visual realism in real-time rendering. Traditional shadow mapping techniques offer high efficiency but often suffer from artifacts and limited quality; ray tracing, in contrast, produces high-fidelity soft shadows at substantial computational cost. In this paper, we propose a general-purpose, real-time soft shadow generation method based on neural networks. To encode shadow geometry, we use the hard shadows produced by shadow mapping as input to our network, which effectively captures the spatial layout of shadow positions and contours. A lightweight U-Net architecture then refines this input to synthesize high-quality soft shadows in real time, closely approximating ray-traced references in visual fidelity. Compared to existing learning-based methods, our approach produces higher-quality soft shadows and generalizes better across diverse scenes. Furthermore, it requires no scene-specific precomputation, making it directly applicable to practical real-time rendering scenarios.
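As a crude stand-in for what the hard-to-soft refinement network learns, the relationship between a binary hard-shadow mask and a soft-shadowed result can be mimicked by box-averaging the mask so shadow edges ramp into a penumbra. The learned U-Net conditions on scene geometry and does far better; this sketch is purely illustrative and none of it comes from the paper.

```python
def soften_hard_shadow(mask, radius):
    """Box-average a binary hard-shadow mask (1 = lit, 0 = shadowed) over a
    (2*radius+1)^2 window, producing a smooth lit-fraction per pixel, a naive
    approximation of the penumbra a soft-shadow network would synthesize."""
    h, w = len(mask), len(mask[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = cnt = 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        acc += mask[yy][xx]
                        cnt += 1
            out[y][x] = acc / cnt  # fraction of lit samples in the window

# A 1 x 6 scanline crossing a shadow edge: values ramp from 0 to 1.
    return out

soft = soften_hard_shadow([[0, 0, 0, 1, 1, 1]], radius=1)
```

Note that a fixed blur radius produces uniform penumbrae; the appeal of the learned approach is that penumbra width varies correctly with occluder and light geometry.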
Citations: 0
Edge-aware denoising framework for real-time mobile ray tracing
IF 2.2 | CAS Tier 4 (Computer Science) | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-09-01 | Epub Date: 2025-08-27 | DOI: 10.1016/j.gmod.2025.101301
Haosen Fu, Mingcong Ma, Junqiu Zhu, Lu Wang, Yanning Xu
With the proliferation of mobile hardware-accelerated ray tracing, visual quality at low sampling rates (1spp) significantly deteriorates due to high-frequency noise and temporal artifacts introduced by Monte Carlo path tracing. Traditional spatiotemporal denoising methods, such as Spatiotemporal Variance-Guided Filtering (SVGF), effectively suppress noise by fusing multi-frame information and using geometry buffer (G-buffer) guided filters. However, their reliance on per-frame variance computation and global filtering imposes prohibitive overhead for mobile devices. This paper proposes an edge-aware, data-driven real-time denoising architecture within the SVGF framework, tailored explicitly for mobile computational constraints. Our method introduces two key innovations that eliminate variance estimation overhead: (1) an adaptive filtering kernel sizing mechanism, which dynamically adjusts filtering scope based on local complexity analysis of the G-buffer; and (2) a data-driven weight table construction strategy, converting traditional computational processes into efficient real-time lookup operations. These innovations significantly enhance processing efficiency while preserving edge accuracy. Experimental results on the Qualcomm Snapdragon 768G platform demonstrate that our method achieves 55 FPS with 1spp input. This frame rate is 67.42% higher than mobile-optimized SVGF, provides better visual quality, and reduces power consumption by 16.80%. Our solution offers a practical and efficient denoising framework suitable for real-time ray tracing in mobile gaming and AR/VR applications.
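The two key ideas, adaptive kernel sizing from local G-buffer complexity and precomputed weight lookup tables replacing per-pixel math, can be sketched on a 1D scanline with depth as the only edge guide. The bin count, sigma, and variance threshold below are arbitrary choices for illustration, not values from the paper.

```python
import math

# Precomputed weight table: quantized |depth difference| -> Gaussian weight.
# A lookup replaces a per-pixel exp() evaluation, in the spirit of the
# paper's data-driven weight tables (BINS/MAX_DIFF/SIGMA are assumptions).
BINS, MAX_DIFF, SIGMA = 64, 1.0, 0.15
WEIGHT_TABLE = [math.exp(-((i / (BINS - 1)) * MAX_DIFF) ** 2 / (2 * SIGMA ** 2))
                for i in range(BINS)]

def weight(depth_diff):
    i = min(int(abs(depth_diff) / MAX_DIFF * (BINS - 1)), BINS - 1)
    return WEIGHT_TABLE[i]

def denoise_row(color, depth, base_radius=3):
    """Edge-aware 1D filter: the kernel radius shrinks where local depth
    variance is high (a geometric edge), mimicking adaptive kernel sizing
    driven by G-buffer complexity."""
    n = len(color)
    out = []
    for x in range(n):
        local = depth[max(0, x - 1):min(n, x + 2)]
        mean = sum(local) / len(local)
        var = sum((d - mean) ** 2 for d in local) / len(local)
        radius = 1 if var > 1e-4 else base_radius  # complex region -> small kernel
        acc = wsum = 0.0
        for dx in range(-radius, radius + 1):
            xx = x + dx
            if 0 <= xx < n:
                w = weight(depth[xx] - depth[x])
                acc += w * color[xx]
                wsum += w
        out.append(acc / wsum)
    return out
```

Across a depth discontinuity the lookup weight collapses to nearly zero, so the filter smooths each side of the edge without bleeding color across it.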
Citations: 0
Efficient RANSAC in 4D Plane Space for Point Cloud Registration
IF 2.2 | CAS Tier 4 (Computer Science) | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-09-01 | Epub Date: 2025-08-22 | DOI: 10.1016/j.gmod.2025.101289
Chang Liu , Chao Liu , Yuming Zhang , Zhongqi Wu , Jianwei Guo
3D registration methods based on point-level information struggle with noise, density variation, large-scale point sets, and small overlaps, while existing primitive-based methods are usually sensitive to tiny errors in the primitive extraction process. In this paper, we present a reliable and efficient global registration algorithm that runs RANdom SAmple Consensus (RANSAC) in the plane space instead of the point space. To improve the inlier ratio of the putative correspondences, we design an intra-plane descriptor, termed Convex Hull Descriptor (CHD), and an inter-plane descriptor, termed PLane Feature Histograms (PLFH), which exploit plane contour shape and pairwise plane relationships, respectively. Based on these new descriptors, we randomly select corresponding plane pairs to compute candidate transformations, followed by a hypothesis verification step to identify the optimal registration. Extensive tests on large-scale point sets demonstrate the effectiveness of our method and show that it notably improves registration performance over state-of-the-art methods in both efficiency and accuracy.
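The abstract does not spell out the Convex Hull Descriptor, but a plausible rotation- and translation-invariant summary of a plane's contour can be computed from the convex hull of its inlier points (Andrew's monotone chain), e.g. perimeter, area, and sorted edge lengths. The particular descriptor below is an assumption of this sketch, not the paper's definition.

```python
import math

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def hull_descriptor(points):
    """Rotation/translation-invariant contour summary of a plane's 2D inlier
    points: (perimeter, area, sorted edge lengths).  A hypothetical stand-in
    for the paper's CHD, useful for matching candidate plane pairs."""
    hull = convex_hull(points)
    n = len(hull)
    edges = sorted(math.dist(hull[i], hull[(i + 1) % n]) for i in range(n))
    area = abs(sum(hull[i][0] * hull[(i + 1) % n][1]
                   - hull[(i + 1) % n][0] * hull[i][1] for i in range(n))) / 2.0
    return sum(edges), area, edges
```

Because the summary ignores vertex order and absolute pose, two extractions of the same physical plane in different scans yield nearly identical descriptors, which is what the RANSAC correspondence sampling needs.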
Citations: 0
TerraCraft: City-scale generative procedural modeling with natural languages
IF 2.2 | CAS Tier 4 (Computer Science) | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-09-01 | Epub Date: 2025-08-05 | DOI: 10.1016/j.gmod.2025.101285
Zichen Xi , Zhihao Yao , Jiahui Huang , Zi-Qi Lu , Hongyu Yan , Tai-Jiang Mu , Zhigang Wang , Qun-Ce Xu
Automated generation of large-scale 3D scenes presents a significant challenge due to the resource-intensive training and datasets required. This is in sharp contrast to 2D generation, which has become readily available thanks to its superior speed and quality. However, prior work in 3D procedural modeling has demonstrated promise in generating high-quality assets by combining algorithms with user-defined rules. To leverage the best of both 2D generative models and procedural modeling tools, we present TerraCraft, a novel framework for generating geometrically high-quality 3D city-scale scenes. By utilizing Large Language Models (LLMs), TerraCraft can generate city-scale 3D scenes from natural-language descriptions. With its intuitive operation and powerful capabilities, TerraCraft enables users to easily create geometrically high-quality scenes for various applications, such as virtual reality and game design. We validate TerraCraft's effectiveness through extensive experiments and user studies, showing its superior performance compared to existing baselines.
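An LLM-plus-procedural pipeline of this general shape can be sketched by treating the model's reply as a structured layout spec that a rule-based placer then rasterizes. The JSON schema, zone names, and the canned `llm_response` below are hypothetical; TerraCraft's actual prompt and spec format are not described in the abstract.

```python
import json

# Hypothetical layout spec an LLM might return for a prompt like
# "a small town with a central park and a row of houses"; in a real system
# this string would come from the language model, not a constant.
llm_response = json.dumps({
    "grid": [6, 6],
    "zones": [
        {"kind": "park",  "rect": [2, 2, 4, 4]},   # x0, y0, x1, y1 (exclusive)
        {"kind": "house", "rect": [0, 0, 6, 1]},
    ],
})

def build_city(spec_json, default="road"):
    """Rasterize the zone spec into a tile map; later zones overwrite
    earlier ones, mirroring rule-based procedural placement."""
    spec = json.loads(spec_json)
    w, h = spec["grid"]
    tiles = [[default] * w for _ in range(h)]
    for zone in spec["zones"]:
        x0, y0, x1, y1 = zone["rect"]
        for y in range(y0, y1):
            for x in range(x0, x1):
                tiles[y][x] = zone["kind"]
    return tiles

city = build_city(llm_response)
```

Keeping the LLM on the spec side and the geometry on the rule side is the appeal of this split: the language model never has to emit meshes, only a structured plan that deterministic procedural rules can realize.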
Citations: 0
DP-Adapter: Dual-pathway adapter for boosting fidelity and text consistency in customizable human image generation
IF 2.2 | CAS Tier 4 (Computer Science) | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-09-01 | Epub Date: 2025-08-15 | DOI: 10.1016/j.gmod.2025.101292
Ye Wang , Ruiqi Liu , Xuping Xie , Lanjun Wang , Zili Yi , Rui Ma
With the growing popularity of personalized human content creation and sharing, there is a rising demand for advanced techniques in customized human image generation. However, current methods struggle to simultaneously maintain the fidelity of human identity and ensure the consistency of textual prompts, often resulting in suboptimal outcomes. This shortcoming is primarily due to the lack of effective constraints during the simultaneous integration of visual and textual prompts, leading to unhealthy mutual interference that compromises the full expression of both types of input. Building on prior research that suggests visual and textual conditions influence different regions of an image in distinct ways, we introduce a novel Dual-Pathway Adapter (DP-Adapter) to enhance both high-fidelity identity preservation and textual consistency in personalized human image generation. Our approach begins by decoupling the target human image into visually sensitive and text-sensitive regions. For visually sensitive regions, DP-Adapter employs an Identity-Enhancing Adapter (IEA) to preserve detailed identity features. For text-sensitive regions, we introduce a Textual-Consistency Adapter (TCA) to minimize visual interference and ensure the consistency of textual semantics. To seamlessly integrate these pathways, we develop a Fine-Grained Feature-Level Blending (FFB) module that efficiently combines hierarchical semantic features from both pathways, resulting in more natural and coherent synthesis outcomes. Additionally, DP-Adapter supports various innovative applications, including controllable headshot-to-full-body portrait generation, age editing, old-photo to reality, and expression editing. Extensive experiments demonstrate that DP-Adapter outperforms state-of-the-art methods in both visual fidelity and text consistency, highlighting its effectiveness and versatility in the field of human image generation.
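At its core, the dual-pathway design amounts to mask-weighted blending of the two pathways' outputs over visually sensitive versus text-sensitive regions. The real Fine-Grained Feature-Level Blending module operates hierarchically on network features, so the one-liner below is only a conceptual sketch with hypothetical names.

```python
def blend_pathways(identity_feat, text_feat, vis_mask):
    """Fuse identity-pathway (IEA-like) and text-pathway (TCA-like) features
    with a soft visual-sensitivity mask in [0, 1]: visually sensitive
    positions take identity features, the rest take text features.
    A conceptual stand-in for the paper's feature-level blending."""
    return [m * a + (1 - m) * b
            for a, b, m in zip(identity_feat, text_feat, vis_mask)]

# Mask 1.0 keeps the identity feature, 0.0 keeps the text feature,
# 0.5 mixes them equally.
fused = blend_pathways([1.0, 1.0, 1.0], [0.0, 0.0, 0.0], [1.0, 0.5, 0.0])
```

The point of the soft mask is exactly the constraint the abstract calls for: each pathway's influence is confined to the regions it is responsible for, so the two conditions stop interfering.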
Citations: 0
Nav2Scene: Navigation-driven fine-tuning for robot-friendly scene generation
IF 2.2 | CAS Tier 4 (Computer Science) | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-09-01 | Epub Date: 2025-08-17 | DOI: 10.1016/j.gmod.2025.101287
Bowei Jiang , Tongyuan Bai , Peng Zheng , Tieru Wu , Rui Ma
The integration of embodied intelligence in indoor scene synthesis holds significant potential for future interior design applications. Nevertheless, prevailing methodologies for indoor scene synthesis predominantly adhere to data-driven learning paradigms. Despite achieving photorealistic 3D renderings through such approaches, current frameworks systematically neglect to incorporate agent-centric functional metrics essential for optimizing navigational topology and task-oriented interactivity in embodied AI systems like service robotics platforms or autonomous domestic assistants. For example, poorly arranged furniture may prevent robots from effectively interacting with the environment, and this issue cannot be fully resolved by merely introducing prior constraints. To fill this gap, we propose Nav2Scene, a novel plug-and-play fine-tuning mechanism that can be deployed on existing scene generators to enhance the suitability of generated scenes for efficient robot navigation. Specifically, we first introduce path planning score (PPS), which is defined based on the results of the path planning algorithm and can be used to evaluate the robot navigation suitability of a given scene. Then, we pre-compute the PPS of 3D scenes from existing datasets and train a ScoreNet to efficiently predict the PPS of the generated scenes. Finally, the predicted PPS is used to guide the fine-tuning of existing scene generators and produce indoor scenes with higher PPS, indicating improved suitability for robot navigation. We conduct experiments on the 3D-FRONT dataset for different tasks including scene generation, completion and re-arrangement. The results demonstrate that by incorporating our Nav2Scene mechanism, the fine-tuned scene generators can produce scenes with improved navigation compatibility for home robots, while maintaining superior or comparable performance in terms of scene quality and diversity.
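A toy version of a path-planning-based navigability score can be computed on a 2D occupancy grid: run a planner and score how much of the free space the robot can actually reach. The paper derives PPS from a real path planning algorithm; the BFS reachability fraction below is a stand-in definition of my own for illustration.

```python
from collections import deque

def path_planning_score(grid, start):
    """Toy navigability score: fraction of free cells (value 1) that a robot
    at `start` can reach via 4-connected BFS; 0 marks cells blocked by
    furniture.  Scenes where furniture walls off space score lower."""
    h, w = len(grid), len(grid[0])
    free = sum(row.count(1) for row in grid)
    seen, queue = {start}, deque([start])
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and grid[ny][nx] == 1 \
                    and (ny, nx) not in seen:
                seen.add((ny, nx))
                queue.append((ny, nx))
    return len(seen) / free

# A wall of furniture down the middle cuts reachability in half.
score = path_planning_score([[1, 0, 1], [1, 0, 1], [1, 0, 1]], (0, 0))
```

In the fine-tuning loop, a differentiable predictor (the ScoreNet) would stand in for this discrete computation so its gradient can steer the scene generator.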
{"title":"Nav2Scene: Navigation-driven fine-tuning for robot-friendly scene generation","authors":"Bowei Jiang ,&nbsp;Tongyuan Bai ,&nbsp;Peng Zheng ,&nbsp;Tieru Wu ,&nbsp;Rui Ma","doi":"10.1016/j.gmod.2025.101287","DOIUrl":"10.1016/j.gmod.2025.101287","abstract":"<div><div>The integration of embodied intelligence in indoor scene synthesis holds significant potential for future interior design applications. Nevertheless, prevailing methodologies for indoor scene synthesis predominantly adhere to data-driven learning paradigms. Despite achieving photorealistic 3D renderings through such approaches, current frameworks systematically neglect to incorporate agent-centric functional metrics essential for optimizing navigational topology and task-oriented interactivity in embodied AI systems like service robotics platforms or autonomous domestic assistants. For example, poorly arranged furniture may prevent robots from effectively interacting with the environment, and this issue cannot be fully resolved by merely introducing prior constraints. To fill this gap, we propose Nav2Scene, a novel plug-and-play fine-tuning mechanism that can be deployed on existing scene generators to enhance the suitability of generated scenes for efficient robot navigation. Specifically, we first introduce path planning score (PPS), which is defined based on the results of the path planning algorithm and can be used to evaluate the robot navigation suitability of a given scene. Then, we pre-compute the PPS of 3D scenes from existing datasets and train a ScoreNet to efficiently predict the PPS of the generated scenes. Finally, the predicted PPS is used to guide the fine-tuning of existing scene generators and produce indoor scenes with higher PPS, indicating improved suitability for robot navigation. We conduct experiments on the 3D-FRONT dataset for different tasks including scene generation, completion and re-arrangement. The results demonstrate that by incorporating our Nav2Scene mechanism, the fine-tuned scene generators can produce scenes with improved navigation compatibility for home robots, while maintaining superior or comparable performance in terms of scene quality and diversity.</div></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"141 ","pages":"Article 101287"},"PeriodicalIF":2.2,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144858475","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Optimization of cross-derivatives for ribbon-based multi-sided surfaces
IF 2.5 CAS Q4, Computer Science; JCR Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date : 2025-08-01 Epub Date : 2025-06-19 DOI: 10.1016/j.gmod.2025.101275
Erkan Gunpinar , A. Alper Tasmektepligil , Márton Vaitkus , Péter Salvi
This work investigates ribbon-based multi-sided surfaces that satisfy positional and cross-derivative constraints to ensure smooth transitions with adjacent tensor-product and multi-sided surfaces. The influence of cross-derivatives, crucial to surface quality, is studied within Kato’s transfinite surface interpolation instead of control point-based methods. To enhance surface quality, the surface is optimized using cost functions based on curvature metrics. Specifically, a Gaussian curvature-based cost function is also proposed in this work. An automated optimization procedure is introduced to determine rotation angles of cross-derivatives around normals and their magnitudes along curves in Kato’s interpolation scheme. Experimental results using both primitive (e.g., spherical) and realistic examples highlight the effectiveness of the proposed approach in improving surface quality.
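The optimization variables here are the rotation angles of cross-derivatives around surface normals and their magnitudes along the boundary curves. The rotation step itself is standard geometry; a minimal sketch using Rodrigues' rotation formula, assuming a unit-length normal (the function name and tuple-based vectors are illustrative, not the paper's implementation):

```python
import math

def rotate_about_normal(v, n, angle):
    """Rotate 3D vector v about unit normal n by `angle` (Rodrigues' formula).

    In Kato-style transfinite interpolation, a cross-derivative along a
    boundary curve can be re-oriented by rotating it about the surface
    normal; the rotation angle (and a separate scalar magnitude) are the
    free parameters an optimizer would tune. Only the rotation is shown.
    """
    # Rodrigues: v' = v cos(t) + (n x v) sin(t) + n (n . v)(1 - cos(t))
    c, s = math.cos(angle), math.sin(angle)
    dot = sum(ni * vi for ni, vi in zip(n, v))
    cross = (n[1] * v[2] - n[2] * v[1],
             n[2] * v[0] - n[0] * v[2],
             n[0] * v[1] - n[1] * v[0])
    return tuple(vi * c + ci * s + ni * dot * (1 - c)
                 for vi, ci, ni in zip(v, cross, n))
```

An optimizer would sweep the angle (and a magnitude scale) per boundary curve and pick the values minimizing a curvature-based cost such as the Gaussian curvature functional mentioned in the abstract.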
{"title":"Optimization of cross-derivatives for ribbon-based multi-sided surfaces","authors":"Erkan Gunpinar ,&nbsp;A. Alper Tasmektepligil ,&nbsp;Márton Vaitkus ,&nbsp;Péter Salvi","doi":"10.1016/j.gmod.2025.101275","DOIUrl":"10.1016/j.gmod.2025.101275","url":null,"abstract":"<div><div>This work investigates ribbon-based multi-sided surfaces that satisfy positional and cross-derivative constraints to ensure smooth transitions with adjacent tensor-product and multi-sided surfaces. The influence of cross-derivatives, crucial to surface quality, is studied within Kato’s transfinite surface interpolation instead of control point-based methods. To enhance surface quality, the surface is optimized using cost functions based on curvature metrics. Specifically, a Gaussian curvature-based cost function is also proposed in this work. An automated optimization procedure is introduced to determine rotation angles of cross-derivatives around normals and their magnitudes along curves in Kato’s interpolation scheme. Experimental results using both primitive (e.g., spherical) and realistic examples highlight the effectiveness of the proposed approach in improving surface quality.</div></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"140 ","pages":"Article 101275"},"PeriodicalIF":2.5,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144314599","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Goal-oriented 3D pattern adjustment with machine learning
IF 2.5 CAS Q4, Computer Science; JCR Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date : 2025-08-01 Epub Date : 2025-06-17 DOI: 10.1016/j.gmod.2025.101272
Megha Shastry , Ye Fan , Clarissa Martins , Dinesh K. Pai
Fit and sizing of clothing are fundamental problems in the field of garment design, manufacture, and retail. Here we propose new computational methods for adjusting the fit of clothing on realistic models of the human body by interactively modifying desired fit attributes. Clothing fit represents the relationship between the body and the garment, and can be quantified using physical fit attributes such as ease and pressure on the body. However, the relationship between pattern geometry and such fit attributes is notoriously complex and nonlinear, requiring deep pattern making expertise to adjust patterns to achieve fit goals. Such attributes can be computed by physically based simulations, using soft avatars. Here we propose a method to learn the relationship between the fit attributes and the space of 2D pattern edits. We demonstrate our method via interactive tools that directly edit fit attributes in 3D and instantaneously predict the corresponding pattern adjustments. The approach has been tested with a range of garment types, and validated by comparing with physical prototypes. Our method introduces an alternative way to directly express fit adjustment goals, making pattern adjustment more broadly accessible. As an additional benefit, the proposed approach allows pattern adjustments to be systematized, enabling better communication and audit of decisions.
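The core idea above is learning a mapping between fit attributes (e.g., ease) and 2D pattern edits, then inverting it to predict the edit that achieves a fit goal. A deliberately minimal, one-dimensional stand-in: fit a least-squares line from pattern-edit samples to resulting ease changes, then solve for the edit that hits a target ease. In the paper this relationship is high-dimensional and nonlinear, with training data computed by physically based simulation on soft avatars; the names and the linear model here are illustrative assumptions.

```python
def fit_slope(edits, ease_changes):
    """Least-squares line ease_change = a * edit + b from paired samples."""
    n = len(edits)
    mx = sum(edits) / n
    my = sum(ease_changes) / n
    sxx = sum((x - mx) ** 2 for x in edits)
    sxy = sum((x - mx) * (y - my) for x, y in zip(edits, ease_changes))
    a = sxy / sxx          # sensitivity of the fit attribute to the edit
    b = my - a * mx
    return a, b

def edit_for_target(a, b, target_ease):
    """Invert the fitted model: pattern edit needed for a desired ease change."""
    return (target_ease - b) / a
```

Editing a fit attribute in 3D then reduces to one model inversion per attribute, which is what makes the interactive tools instantaneous once the mapping is learned.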
{"title":"Goal-oriented 3D pattern adjustment with machine learning","authors":"Megha Shastry ,&nbsp;Ye Fan ,&nbsp;Clarissa Martins ,&nbsp;Dinesh K. Pai","doi":"10.1016/j.gmod.2025.101272","DOIUrl":"10.1016/j.gmod.2025.101272","abstract":"<div><div>Fit and sizing of clothing are fundamental problems in the field of garment design, manufacture, and retail. Here we propose new computational methods for adjusting the fit of clothing on realistic models of the human body by interactively modifying desired <em>fit attributes</em>. Clothing fit represents the relationship between the body and the garment, and can be quantified using physical fit attributes such as ease and pressure on the body. However, the relationship between pattern geometry and such fit attributes is notoriously complex and nonlinear, requiring deep pattern making expertise to adjust patterns to achieve fit goals. Such attributes can be computed by physically based simulations, using soft avatars. Here we propose a method to learn the relationship between the fit attributes and the space of 2D pattern edits. We demonstrate our method via interactive tools that directly edit fit attributes in 3D and instantaneously predict the corresponding pattern adjustments. The approach has been tested with a range of garment types, and validated by comparing with physical prototypes. Our method introduces an alternative way to directly express fit adjustment goals, making pattern adjustment more broadly accessible. As an additional benefit, the proposed approach allows pattern adjustments to be systematized, enabling better communication and audit of decisions.</div></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"140 ","pages":"Article 101272"},"PeriodicalIF":2.5,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144298108","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0