
Latest Publications in ACM Transactions on Graphics

SZ Sequences: Binary-Based (0, 2^q)-Sequences
IF 6.2 · Region 1, Computer Science · Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-12-04 · DOI: 10.1145/3763272
Abdalla G. M. Ahmed, Matt Pharr, Victor Ostromoukhov, Hui Huang
Low-discrepancy sequences have seen widespread adoption in computer graphics thanks to the superior rates of convergence that they provide. Because rendering integrals often comprise products of lower-dimensional integrals, recent work has focused on developing sequences that are also well-distributed in lower-dimensional projections. To this end, we introduce a novel construction of binary-based (0, 4)-sequences, that is, progressive fully multi-stratified sequences of 4D points, and extend the idea to higher power-of-two dimensions. We further show that not only is it possible to nest lower-dimensional sequences in higher-dimensional ones, for example, embedding a (0, 2)-sequence within our (0, 4)-sequence, but that we can assemble two (0, 2)-sequences into a (0, 4)-sequence, four (0, 4)-sequences into a (0, 16)-sequence, and so on. Such sequences can provide excellent rates of convergence when integrals include lower-dimensional integration problems in 2, 4, 16, … dimensions. Our construction is based on using 2×2 block matrices as symbols to construct larger matrices that potentially generate a sequence with the target (0, s)-sequence-in-base-s property. We describe how to search for suitable alphabets and identify two distinct, cross-related alphabets of block symbols, which we call s and z, hence SZ for the resulting family of sequences. Given the alphabets, we construct candidate generator matrices and search for valid sets of matrices. We then infer a simple recurrence formula to construct full-resolution (64-bit) matrices. Because our generator matrices are binary, they allow highly efficient implementation using bitwise operations and can be used as a drop-in replacement for Sobol matrices in existing applications. We compare SZ sequences to state-of-the-art low-discrepancy sequences and demonstrate mean relative squared error improvements of up to 1.93× in common rendering applications.
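The bitwise generator-matrix machinery the abstract refers to can be illustrated with a short sketch: in a base-2 digital net, a sample coordinate is obtained by XOR-ing together the matrix columns selected by the bits of the sample index. The matrix below is an illustrative placeholder (the identity, which yields the van der Corput sequence), not an actual SZ or Sobol matrix.

```python
# Sketch of base-2 digital-net sample generation from a binary generator
# matrix, the mechanism behind Sobol points that the SZ matrices plug
# into. The matrix here is illustrative only (identity = van der Corput).

NBITS = 32

def sample_coord(index, gen_matrix):
    """XOR together the matrix columns selected by the set bits of
    `index`, then scale the resulting bit vector into [0, 1)."""
    result = 0
    col = 0
    while index:
        if index & 1:
            result ^= gen_matrix[col]
        index >>= 1
        col += 1
    return result / float(1 << NBITS)

# Identity matrix: column j holds a single bit at position j (MSB-first).
identity = [1 << (NBITS - 1 - j) for j in range(NBITS)]

points = [sample_coord(i, identity) for i in range(8)]
# First 8 van der Corput values: 0, 1/2, 1/4, 3/4, 1/8, 5/8, 3/8, 7/8
```

Because the per-coordinate work is a handful of XORs and shifts, swapping one binary matrix set for another (e.g., Sobol for SZ) leaves the sampling loop unchanged, which is what makes the drop-in replacement claim plausible.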
Citations: 0
Generalized Unbiased Reconstruction for Gradient-Domain Rendering
IF 6.2 · Region 1, Computer Science · Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-12-04 · DOI: 10.1145/3763297
Difei Yan, Zengyu Li, Lifan Wu, Kun Xu
Gradient-domain rendering estimates image-space gradients using correlated sampling, which can be combined with color information to reconstruct smoother and less noisy images. While simple ℒ₂ reconstruction is unbiased, it often leads to visible artifacts. In contrast, most recent reconstruction methods based on learned or handcrafted techniques improve visual quality but introduce bias, leaving the development of practically unbiased reconstruction approaches relatively underexplored. In this work, we propose a generalized framework for unbiased reconstruction in gradient-domain rendering. We first derive the unbiasedness condition under a general formulation that linearly combines pixel colors and gradients. Based on this unbiasedness condition, we design a practical algorithm that minimizes image variance while strictly satisfying unbiasedness. Experimental results demonstrate that our method not only guarantees unbiasedness but also achieves superior quality compared to existing unbiased and slightly biased reconstruction methods.
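The "linear combination of pixel colors and gradients" formulation can be made concrete with a toy 1D sketch (a hypothetical illustration, not the paper's algorithm): ℒ₂ reconstruction solves a least-squares system whose solution is linear in the color and gradient estimates, so unbiased inputs yield an unbiased output.

```python
import numpy as np

def l2_reconstruct(colors, grads, alpha=1.0):
    """Toy 1D L2 gradient-domain reconstruction: solve
        min_x ||x - colors||^2 + alpha * ||D x - grads||^2
    where D is the forward-difference operator. The solution is a fixed
    linear map of (colors, grads), hence unbiased inputs -> unbiased x."""
    n = len(colors)
    D = np.zeros((n - 1, n))
    for i in range(n - 1):
        D[i, i], D[i, i + 1] = -1.0, 1.0
    A = np.eye(n) + alpha * D.T @ D       # normal-equations matrix
    b = colors + alpha * D.T @ grads
    return np.linalg.solve(A, b)

# With noise-free, mutually consistent inputs the result is exact,
# since both residual terms can be driven to zero simultaneously.
truth = np.array([0.0, 1.0, 3.0, 6.0])
out = l2_reconstruct(truth, np.diff(truth))
```

With noisy inputs the same linear solve averages color and gradient information, which is why gradients reduce variance while the estimator stays unbiased.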
Citations: 0
NeuVAS: Neural Implicit Surfaces for Variational Shape Modeling
IF 6.2 · Region 1, Computer Science · Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-12-04 · DOI: 10.1145/3763331
Pengfei Wang, Qiujie Dong, Fangtian Liang, Hao Pan, Lei Yang, Congyi Zhang, Guying Lin, Caiming Zhang, Yuanfeng Zhou, Changhe Tu, Shiqing Xin, Alla Sheffer, Xin Li, Wenping Wang
Neural implicit shape representation has drawn significant attention in recent years due to its smoothness, differentiability, and topological flexibility. However, directly modeling the shape of a neural implicit surface, especially as the zero-level set of a neural signed distance function (SDF), under sparse geometric control is still a challenging task. Sparse input shape control typically includes 3D curve networks or, more generally, 3D curve sketches, which are unstructured, cannot be connected to form a curve network, and are therefore more difficult to deal with. While 3D curve networks or curve sketches provide intuitive shape control, their sparsity and varied topology pose challenges in generating high-quality surfaces that meet such curve constraints. In this paper, we propose NeuVAS, a variational approach to shape modeling using neural implicit surfaces constrained under sparse input shape control, including unstructured 3D curve sketches as well as connected 3D curve networks. Specifically, we introduce a smoothness term based on a functional of surface curvatures to minimize shape variation of the zero-level set surface of a neural SDF. We also develop a new technique to faithfully model G⁰ sharp feature curves as specified in the input curve sketches. Comprehensive comparisons with state-of-the-art methods demonstrate the significant advantages of our method.
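To give a feel for penalizing shape variation of an implicit surface, here is a grid-based stand-in (assuming a simple squared-Laplacian energy on a sampled SDF, not the paper's curvature functional evaluated on a neural SDF via autodiff): flat level sets incur zero penalty, bumpy ones are penalized.

```python
import numpy as np

def laplacian_energy(sdf, h):
    """Illustrative smoothness penalty on a 2D signed-distance grid:
    squared finite-difference Laplacian summed over interior cells.
    A crude stand-in for a curvature-based smoothness functional."""
    lap = (sdf[:-2, 1:-1] + sdf[2:, 1:-1] +
           sdf[1:-1, :-2] + sdf[1:-1, 2:] -
           4.0 * sdf[1:-1, 1:-1]) / h ** 2
    return float(np.sum(lap ** 2) * h ** 2)

# A plane's SDF is linear, hence harmonic: zero penalty.
# Adding a sinusoidal bump makes the penalty strictly positive.
y, x = np.mgrid[0:8, 0:8].astype(float)
plane = 0.6 * x + 0.8 * y - 3.0        # unit-gradient SDF of a line
flat_cost = laplacian_energy(plane, 1.0)
bumpy_cost = laplacian_energy(plane + np.sin(x), 1.0)
```

In a variational pipeline such a term would be minimized jointly with the curve-constraint terms, trading surface fairness against fidelity to the input sketches.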
Citations: 0
AnySplat: Feed-forward 3D Gaussian Splatting from Unconstrained Views
IF 6.2 · Region 1, Computer Science · Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-12-04 · DOI: 10.1145/3763326
Lihan Jiang, Yucheng Mao, Linning Xu, Tao Lu, Kerui Ren, Yichen Jin, Xudong Xu, Mulin Yu, Jiangmiao Pang, Feng Zhao, Dahua Lin, Bo Dai
We introduce AnySplat, a feed-forward network for novel-view synthesis from uncalibrated image collections. In contrast to traditional neural-rendering pipelines that demand known camera poses and per-scene optimization, and to recent feed-forward methods that buckle under the computational weight of dense views, our model predicts everything in one shot. A single forward pass yields a set of 3D Gaussian primitives encoding both scene geometry and appearance, along with the corresponding camera intrinsics and extrinsics for each input image. This unified design scales effortlessly to casually captured multi-view datasets without any pose annotations. In extensive zero-shot evaluations, AnySplat matches the quality of pose-aware baselines in both sparse- and dense-view scenarios while surpassing existing pose-free approaches. Moreover, it greatly reduces rendering latency compared to optimization-based neural fields, bringing real-time novel-view synthesis within reach for unconstrained capture settings. Project page: https://city-super.github.io/anysplat/.
Citations: 0
The Granule-In-Cell Method for Simulating Sand–Water Mixtures
IF 6.2 · Region 1, Computer Science · Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-12-04 · DOI: 10.1145/3763279
Yizao Tang, Yuechen Zhu, Xingyu Ni, Baoquan Chen
The simulation of sand-water mixtures requires capturing the stochastic behavior of individual sand particles within a uniform, continuous fluid medium. However, most existing approaches, which treat sand particles only as markers within fluid solvers, fail to account for both the forces acting on individual sand particles and the collective feedback of the particle assemblies on the fluid. This prevents faithful reproduction of characteristic phenomena including transport, deposition, and clogging. Building upon the kinetic ensemble-averaging technique, we propose a physically consistent coupling strategy and introduce a novel Granule-In-Cell (GIC) method for modeling such sand-water interactions. We employ the Discrete Element Method (DEM) to capture fine-scale granule dynamics and the Particle-In-Cell (PIC) method for continuous spatial representation and density projection. To bridge these two frameworks, we treat granules as macroscopic transport flow rather than as solid boundaries within the fluid domain. This bidirectional coupling allows our model to incorporate a range of interphase forces using different discretization schemes, resulting in more realistic simulations that strictly adhere to the mass conservation law. Experimental results demonstrate the effectiveness of our method in simulating complex sand-water interactions, uniquely capturing intricate physical phenomena and ensuring exact volume preservation compared to existing approaches.
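The PIC density-projection step mentioned above can be sketched in 1D: each particle's mass is scattered to its two neighboring grid nodes with linear (tent) weights, so total mass is conserved by construction. This is an illustrative sketch of the standard particle-to-grid transfer, not the paper's 3D scheme with granule feedback onto the fluid.

```python
import numpy as np

def p2g_density(xp, mass, nx, dx):
    """1D particle-to-grid mass projection with linear weights, the
    density-projection step used in PIC-style solvers (illustrative)."""
    rho = np.zeros(nx)
    for x, m in zip(xp, mass):
        i = int(np.floor(x / dx))          # left grid node
        f = x / dx - i                     # fractional offset in cell
        if 0 <= i < nx:
            rho[i] += m * (1.0 - f)
        if 0 <= i + 1 < nx:
            rho[i + 1] += m * f
    return rho / dx                        # mass per unit length

# Two unit-mass particles, each halfway between nodes:
rho = p2g_density(xp=[0.25, 0.75], mass=[1.0, 1.0], nx=4, dx=0.5)
# rho == [1.0, 2.0, 1.0, 0.0]; rho.sum() * dx recovers total mass 2.0
```

Because the same weights can be reused to gather grid quantities back to particles, the transfer is the natural place to inject the bidirectional granule-fluid coupling the abstract describes.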
Citations: 0
Shaping Strands with Neural Style Transfer
IF 6.2 · Region 1, Computer Science · Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-12-04 · DOI: 10.1145/3763365
Beyzanur Coban, Pascal Chang, Guilherme Gomes Haetinger, Jingwei Tang, Vinicius C. Azevedo
The intricate geometric complexity of knots, tangles, dreads, and clumps requires sophisticated grooming systems that allow artists to both realistically model and artistically control fur and hair systems. Recent volumetric and 3D neural style transfer techniques have provided a new paradigm of art directability, allowing artists to modify assets drastically with a single style image. However, these previous 3D neural stylization approaches were limited to volumes and meshes. In this paper we propose the first stylization pipeline to support hair and fur. Through a carefully tailored fur/hair representation, our approach produces complex, 3D-consistent, and temporally coherent grooms that are stylized using style images.
Citations: 0
A Stack-Free Parallel h-Adaptation Algorithm for Dynamically Balanced Trees on GPUs
IF 6.2 · Region 1, Computer Science · Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-12-04 · DOI: 10.1145/3763349
Lixin Ren, Xiaowei He, Shusen Liu, Yuzhong Guo, Enhua Wu
Prior research has demonstrated the efficacy of balanced trees as spatially adaptive grids for large-scale simulations. However, state-of-the-art methods for balanced tree construction are restricted by the iterative nature of the ripple effect, thus failing to fully leverage the massive parallelism offered by modern GPU architectures. We propose to reframe the construction of balanced trees as a process that merges N-balanced Minimum Spanning Trees (N-balanced MSTs) generated from a collection of seed points. To ensure optimal performance, we propose a stack-free parallel strategy for constructing all internal nodes of a specified N-balanced MST. This approach leverages two 32-bit integer registers as buffers rather than relying on an integer array as a stack during construction, which helps maintain balanced workloads across different GPU threads. We then propose a dynamic update algorithm utilizing refinement counters on all internal nodes to enable parallel insertion and deletion operations on N-balanced MSTs. This design achieves significant efficiency improvements compared to full reconstruction from scratch, thereby facilitating fluid simulations that handle dynamic moving boundaries. Our approach is fully compatible with GPU implementation and demonstrates up to an order-of-magnitude speedup compared to the state-of-the-art method [Wang et al. 2024]. The source code for the paper is publicly available at https://github.com/peridyno/peridyno.
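The register-as-buffer idea can be shown in miniature: a depth-first traversal of a complete binary tree can keep its entire backtracking state as a bit trail in a single integer instead of an explicit stack, one bit per tree level. This is a generic sketch of the trick; the paper's two-register scheme for N-balanced MSTs differs in detail.

```python
def bit_trail_dfs(depth):
    """Stack-free preorder traversal of a complete binary tree of the
    given depth. The integer `trail` replaces a node stack: a 0-bit
    means the right sibling at that level is still pending, a 1-bit
    means it was already taken. Nodes use heap numbering: root = 1,
    children of k are 2k and 2k+1, so the sibling of k is k ^ 1."""
    order = []
    node, trail = 1, 1                     # low bit of `trail`: sentinel
    first_leaf = 1 << (depth - 1)
    while True:
        order.append(node)
        if node < first_leaf:              # internal node: descend left
            trail <<= 1                    # push a pending right sibling
            node = 2 * node
        else:                              # leaf: backtrack
            while (trail & 1) and trail > 1:
                trail >>= 1                # pop fully visited levels
                node >>= 1
            if trail == 1:                 # only the sentinel remains
                return order
            trail |= 1                     # mark right sibling visited
            node ^= 1                      # step to the right sibling

# bit_trail_dfs(3) visits [1, 2, 4, 5, 3, 6, 7] (preorder)
```

Since one 32-bit integer encodes 31 levels of pending work, this keeps per-thread state in registers, which is exactly why the technique suits GPU threads with no cheap per-thread stack.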
Citations: 0
Large-Area Fabrication-aware Computational Diffractive Optics
IF 6.2 · Region 1, Computer Science · Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2025-12-04 · DOI: 10.1145/3763358
Kaixuan Wei, Hector Romero, Hadi Amata, Jipeng Sun, Qiang Fu, Felix Heide, Wolfgang Heidrich
Differentiable optics, an emerging paradigm that jointly optimizes optics and (optional) image processing algorithms, has made many innovative optical designs possible across a broad range of imaging and display applications. Many of these systems utilize diffractive optical components for holography, PSF engineering, or wavefront shaping. Existing approaches have, however, mostly remained limited to laboratory prototypes, owing to a large quality gap between simulated and manufactured devices. We aim to lift the fundamental technical barriers to the practical use of learned diffractive optical systems. To this end, we propose a fabrication-aware design pipeline for diffractive optics fabricated by direct-write grayscale lithography followed by replication with nano-imprinting, which is directly suited for inexpensive mass production of large-area designs. We propose a super-resolved neural lithography model that can accurately predict the 3D geometry generated by the fabrication process. This model can be seamlessly integrated into existing differentiable optics frameworks, enabling fabrication-aware, end-to-end optimization of computational optical systems. To tackle the computational challenges, we also devise a tensor-parallel compute framework centered on distributing large-scale FFT computation across many GPUs. As such, we demonstrate large-scale diffractive optics designs up to 32.16 mm × 21.44 mm, simulated on grids of up to 128,640 by 85,760 feature points. We find adequate agreement between simulation and fabricated prototypes for applications such as holography and PSF engineering. We also achieve high image quality from an imaging system comprising only a single diffractive optical element, with images processed only by a one-step inverse filter utilizing the simulated PSF. We believe our findings lift the fabrication limitations for real-world applications of diffractive optics and differentiable optical design.
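The large-scale FFT workload mentioned above arises from wave-optics field propagation. Below is a minimal NumPy sketch of the textbook angular-spectrum method in 1D (not the paper's distributed, fabrication-aware implementation): FFT the field, multiply by the free-space transfer function, inverse FFT.

```python
import numpy as np

def angular_spectrum_1d(field, dx, wavelength, z):
    """Propagate a sampled 1D complex field a distance z using the
    angular spectrum method. Evanescent components (imaginary kz)
    are zeroed; for propagating components |H| = 1, so energy is
    conserved (illustrative textbook sketch)."""
    n = field.size
    fx = np.fft.fftfreq(n, d=dx)                 # spatial frequencies
    kz_sq = (1.0 / wavelength) ** 2 - fx ** 2
    kz = np.sqrt(np.maximum(kz_sq, 0.0))
    H = np.exp(2j * np.pi * z * kz)              # transfer function
    H[kz_sq < 0] = 0.0                           # drop evanescent waves
    return np.fft.ifft(np.fft.fft(field) * H)

# With 5 µm sampling and 0.5 µm light, all sampled frequencies propagate,
# so the operator is unitary and total energy is preserved.
rng = np.random.default_rng(0)
u0 = rng.standard_normal(256) + 1j * rng.standard_normal(256)
u1 = angular_spectrum_1d(u0, dx=5e-6, wavelength=0.5e-6, z=1e-3)
```

At the grid sizes quoted in the abstract (up to 128,640 × 85,760 samples), a single 2D FFT of this kind no longer fits comfortably on one GPU, which motivates the tensor-parallel decomposition.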
Large-Area Fabrication-aware Computational Diffractive Optics
Kaixuan Wei, Hector Romero, Hadi Amata, Jipeng Sun, Qiang Fu, Felix Heide, Wolfgang Heidrich
ACM Transactions on Graphics, IF 6.2, Pub Date: 2025-12-04, DOI: 10.1145/3763358
Citations: 0
Practical Gaussian Process Implicit Surfaces with Sparse Convolutions
IF 6.2 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2025-12-04 DOI: 10.1145/3763329
Kehan Xu, Benedikt Bitterli, Eugene d'Eon, Wojciech Jarosz
A fundamental challenge in rendering has been the dichotomy between surface and volume models. Gaussian Process Implicit Surfaces (GPISes) recently provided a unified approach for surfaces, volumes, and the spectrum in between. However, this representation remains impractical due to its high computational cost and mathematical complexity. We address these limitations by reformulating GPISes as procedural noise, eliminating expensive linear system solves while maintaining control over spatial correlations. Our method enables efficient sampling of stochastic realizations and supports flexible conditioning of values and derivatives through pathwise updates. To further enable practical rendering, we derive analytic distributions for surface normals, allowing for variance-reduced light transport via next-event estimation and multiple importance sampling. Our framework achieves efficient, high-quality rendering of stochastic surfaces and volumes with significantly simplified implementations on both CPU and GPU, while preserving the generality of the original GPIS representation.
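As background for the "pathwise updates" mentioned above: a Gaussian process realization can be conditioned on data after sampling, via Matheron's rule, f_post(x) = f_prior(x) + K(x, X) K(X, X)⁻¹ (y − f_prior(X)). A minimal NumPy sketch of this classical update (illustrative only; the paper's contribution is precisely to avoid the dense linear solve shown here):

```python
import numpy as np

def rbf(a, b, ls=0.5):
    """Squared-exponential kernel between two 1D point sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

rng = np.random.default_rng(0)
X = np.array([-1.0, 0.0, 1.0])    # conditioning locations
y = np.array([0.2, -0.1, 0.3])    # values to enforce
xs = np.linspace(-2.0, 2.0, 101)  # query grid

# Draw one joint prior realization over [xs, X].
pts = np.concatenate([xs, X])
K = rbf(pts, pts) + 1e-9 * np.eye(len(pts))  # jitter for numerical PD
prior = np.linalg.cholesky(K) @ rng.standard_normal(len(pts))
f_xs, f_X = prior[:len(xs)], prior[len(xs):]

# Matheron / pathwise update: correct the prior sample toward the data.
Kxx = rbf(X, X) + 1e-9 * np.eye(len(X))
post = f_xs + rbf(xs, X) @ np.linalg.solve(Kxx, y - f_X)
```

The updated sample `post` interpolates the observed values at X while remaining a valid posterior draw elsewhere; the same rule extends to derivative observations, which is what makes conditioning surface normals possible.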
Citations: 0
Environment-aware Motion Matching
IF 6.2 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2025-12-04 DOI: 10.1145/3763334
Jose Luis Ponton, Sheldon Andrews, Carlos Andujar, Nuria Pelechano
Interactive applications demand believable characters that respond naturally to dynamic environments. Traditional character animation techniques often struggle to handle arbitrary situations, leading to a growing trend of dynamically selecting motion-captured animations based on predefined features. While Motion Matching has proven effective for locomotion by aligning to target trajectories, animating environment interactions and crowd behaviors remains challenging due to the need to consider surrounding elements. Existing approaches often involve manual setup or lack the naturalism of motion capture. Furthermore, in crowd animation, body animation is frequently treated as a separate process from trajectory planning, leading to inconsistencies between body pose and root motion. To address these limitations, we present Environment-aware Motion Matching , a novel real-time system for full-body character animation that dynamically adapts to obstacles and other agents, emphasizing the bidirectional relationship between pose and trajectory. In a preprocessing step, we extract shape, pose, and trajectory features from a motion capture database. At runtime, we perform an efficient search that matches user input and current pose while penalizing collisions with a dynamic environment. Our method allows characters to naturally adjust their pose and trajectory to navigate crowded scenes.
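The runtime search described above can be illustrated schematically: a nearest-neighbor query over pose/trajectory features, with an added cost term penalizing candidates whose future trajectory penetrates obstacles. A toy NumPy sketch (function and parameter names are invented for illustration, not taken from the paper):

```python
import numpy as np

def motion_match(db_features, query, db_future_pos, obstacles,
                 w_collision=5.0, radius=0.5):
    """Return the index of the database clip whose feature vector best
    matches `query`, adding a penalty when the clip's future root
    positions penetrate any obstacle (treated as a disc of `radius`)."""
    cost = np.sum((db_features - query) ** 2, axis=1)  # squared feature distance
    if len(obstacles) > 0:
        # (clips, timesteps, obstacles) distances between future root
        # positions and obstacle centers
        d = np.linalg.norm(
            db_future_pos[:, :, None, :] - obstacles[None, None, :, :], axis=-1
        )
        penetration = np.maximum(radius - d.min(axis=(1, 2)), 0.0)
        cost += w_collision * penetration
    return int(np.argmin(cost))

# Toy query: candidate 0 matches the features best, but its future
# trajectory passes through an obstacle, so candidate 1 wins.
db_features = np.array([[0.0], [0.1]])
query = np.array([0.0])
db_future_pos = np.array([[[0.0, 0.0]], [[2.0, 2.0]]])  # (clips, timesteps, xy)
obstacles = np.array([[0.0, 0.0]])
best = motion_match(db_features, query, db_future_pos, obstacles)
```

The key design point this mirrors is that environment cost enters the same search as pose and trajectory matching, so the selected clip is consistent with both the body pose and the surrounding scene.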
Citations: 0