
ACM Transactions on Graphics: Latest Publications

Learning Based Toolpath Planner on Diverse Graphs for 3D Printing
IF 6.2 · CAS Tier 1, Computer Science · Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2024-11-19 · DOI: 10.1145/3687933
Yuming Huang, Yuhu Guo, Renbo Su, Xingjian Han, Junhao Ding, Tianyu Zhang, Tao Liu, Weiming Wang, Guoxin Fang, Xu Song, Emily Whiting, Charlie Wang
This paper presents a learning-based planner for computing optimized 3D printing toolpaths on prescribed graphs, the challenges of which include the varying graph structures of different models and the large number of nodes & edges on a graph. We adopt an on-the-fly strategy to tackle these challenges, formulating the planner as a Deep Q-Network (DQN) based optimizer that decides the next 'best' node to visit. We construct the state spaces from the Local Search Graph (LSG) centered at different nodes of a graph, encoded by a carefully designed algorithm so that LSGs in similar configurations can be identified to re-use earlier learned DQN priors, accelerating the computation of toolpath planning. Our method can cover different 3D printing applications by defining their corresponding reward functions. Toolpath planning problems in wire-frame printing, continuous fiber printing, and metallic printing are selected to demonstrate its generality. The performance of our planner has been verified by testing the resultant toolpaths in physical experiments. Using our planner, wire-frame models with up to 4.2k struts can be successfully printed, up to 93.3% of sharp turns on continuous fiber toolpaths can be avoided, and thermal distortion in metallic printing can be reduced by 24.9%.
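A minimal sketch of the decision loop described above, assuming a trained Q-function: the toy feature vector and linear scorer below stand in for the paper's LSG encoder and DQN, and the dead-end restart rule is our own simplification, not the authors' method.

```python
# Hypothetical "next best node" selection loop over a toolpath graph.
import numpy as np

def local_features(graph, node, visited):
    # Toy 3-dim state: degree, fraction of visited neighbors, node id parity.
    nbrs = graph[node]
    return np.array([len(nbrs),
                     sum(n in visited for n in nbrs) / max(len(nbrs), 1),
                     node % 2], dtype=float)

def q_value(state, w):
    return float(state @ w)  # linear stand-in for a DQN forward pass

def plan_toolpath(graph, start, w):
    visited, path, node = {start}, [start], start
    while len(visited) < len(graph):
        candidates = [n for n in graph[node] if n not in visited]
        if not candidates:   # dead end: jump to any unvisited node (simplification)
            node = next(n for n in graph if n not in visited)
        else:                # greedy with respect to the learned Q-values
            node = max(candidates,
                       key=lambda n: q_value(local_features(graph, n, visited), w))
        visited.add(node)
        path.append(node)
    return path

graph = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2]}
print(plan_toolpath(graph, 0, w=np.array([0.5, -1.0, 0.1])))
```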
Citations: 0
Volumetric Homogenization for Knitwear Simulation
IF 6.2 · CAS Tier 1, Computer Science · Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2024-11-19 · DOI: 10.1145/3687911
Chun Yuan, Haoyang Shi, Lei Lan, Yuxing Qiu, Cem Yuksel, Huamin Wang, Chenfanfu Jiang, Kui Wu, Yin Yang
This paper presents volumetric homogenization, a spatially varying homogenization scheme for knitwear simulation. We are motivated by the observation that macro-scale fabric dynamics is strongly correlated with its underlying knitting patterns; therefore, homogenization toward a single material is less effective when the knitting is complex and non-repetitive. Our method tackles this challenge by homogenizing the yarn-level material locally at volumetric elements. Assigning a virtual volume to a knitting structure enables us to model bending and twisting effects via a simple volume-preserving penalty, which effectively alleviates the material nonlinearity. We employ an adjoint Gauss-Newton formulation [Zehnder et al. 2021] to battle the dimensionality challenge of such per-element material optimization. This intuitive material model makes the forward simulation GPU-friendly. To this end, our pipeline is also equipped with a novel domain-decomposed subspace solver crafted for GPU projective dynamics, which makes our simulator hundreds of times faster than a yarn-level simulator. Experiments validate the capability and effectiveness of volumetric homogenization. Our method produces realistic animations of knitwear matching the quality of full-scale yarn-level simulations, and it is orders of magnitude faster than existing homogenization techniques in both the training and simulation stages.
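As a rough illustration of a volume-preserving penalty of the kind the abstract mentions, the sketch below penalizes each tetrahedral element for deviating from its rest volume. The quadratic form k(V/V_rest - 1)^2 and the stiffness value are illustrative assumptions, not the paper's exact energy.

```python
# Generic volume-preservation penalty over tetrahedral elements (a sketch).
import numpy as np

def tet_volume(x0, x1, x2, x3):
    # Signed volume of a tetrahedron.
    return np.dot(np.cross(x1 - x0, x2 - x0), x3 - x0) / 6.0

def volume_penalty(verts, tets, rest_volumes, k=1e4):
    e = 0.0
    for (a, b, c, d), v_rest in zip(tets, rest_volumes):
        v = tet_volume(verts[a], verts[b], verts[c], verts[d])
        e += k * (v / v_rest - 1.0) ** 2   # zero only at the rest volume
    return e

verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1.2]], float)  # stretched tet
tets = [(0, 1, 2, 3)]
rest = [1.0 / 6.0]
print(volume_penalty(verts, tets, rest))  # > 0: the element changed volume
```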
Citations: 0
All you need is rotation: Construction of developable strips
IF 6.2 · CAS Tier 1, Computer Science · Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2024-11-19 · DOI: 10.1145/3687947
Takashi Maekawa, Felix Scholz
We present a novel approach to generating developable strips along a space curve. The key idea of the new method is to use the rotation angle between the Frenet frame of the input space curve and the Darboux frame of the curve on the resulting developable strip as a free design parameter, thereby revolving the strip around the tangential axis of the input space curve. This angle is not restricted to be constant; it can be any differentiable function defined on the curve, creating a large design space of developable strips that share a common directrix curve. The range of possibilities for choosing the rotation angle is diverse, encompassing constant angles, linearly varying angles, sinusoidal patterns, and even solutions derived from initial value problems involving ordinary differential equations. This gives the proposed method the potential to be used in a wide range of practical applications, spanning fields such as architectural design, industrial design, and papercraft modeling. In our computational and physical examples, we demonstrate the flexibility of the method by constructing, among others, toroidal and helical windmill blades for papercraft models, curved foldings, triply orthogonal structures, and developable strips featuring a log-aesthetic directrix curve.
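To make the rotation-angle degree of freedom concrete, here is a small numerical sketch: sample a helix, build its Frenet frame by finite differences, rotate the frame about the tangent by a linearly varying angle, and sweep a narrow ruled strip. This toy sweep visualizes the design parameter only; it does not enforce the developability condition that the paper's construction guarantees, and the helix, width, and angle schedule are arbitrary choices.

```python
# Ruled-strip sweep around a directrix curve with a varying rotation angle.
import numpy as np

def frenet(p):                       # p: (n,3) sampled curve points
    t = np.gradient(p, axis=0)
    t /= np.linalg.norm(t, axis=1, keepdims=True)
    dt = np.gradient(t, axis=0)
    n = dt - (dt * t).sum(1, keepdims=True) * t   # remove tangential component
    n /= np.linalg.norm(n, axis=1, keepdims=True)
    b = np.cross(t, n)
    return t, n, b

s = np.linspace(0, 4 * np.pi, 400)
curve = np.stack([np.cos(s), np.sin(s), 0.2 * s], axis=1)   # helix directrix
t, n, b = frenet(curve)
theta = np.linspace(0, np.pi, len(s))                       # free design parameter
ruling = np.cos(theta)[:, None] * n + np.sin(theta)[:, None] * b
half_width = 0.1
strip = np.stack([curve - half_width * ruling,
                  curve + half_width * ruling], axis=1)
print(strip.shape)   # (400, 2, 3): the two boundary curves of the strip
```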
Citations: 0
3D Gaussian Ray Tracing: Fast Tracing of Particle Scenes
IF 6.2 · CAS Tier 1, Computer Science · Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2024-11-19 · DOI: 10.1145/3687934
Nicolas Moenne-Loccoz, Ashkan Mirzaei, Or Perel, Riccardo de Lutio, Janick Martinez Esturo, Gavriel State, Sanja Fidler, Nicholas Sharp, Zan Gojcic
Particle-based representations of radiance fields such as 3D Gaussian Splatting have found great success for reconstructing and re-rendering complex scenes. Most existing methods render particles via rasterization, projecting them to screen-space tiles for processing in a sorted order. This work instead considers ray tracing the particles, building a bounding volume hierarchy and casting a ray for each pixel using high-performance GPU ray tracing hardware. To efficiently handle large numbers of semi-transparent particles, we describe a specialized rendering algorithm which encapsulates particles with bounding meshes to leverage fast ray-triangle intersections, and shades batches of intersections in depth order. The benefits of ray tracing are well known in computer graphics: processing incoherent rays for secondary lighting effects such as shadows and reflections, rendering from the highly distorted cameras common in robotics, stochastically sampling rays, and more. With our renderer, this flexibility comes at little cost compared to rasterization. Experiments demonstrate the speed and accuracy of our approach, as well as several applications in computer graphics and vision. We further propose related improvements to the basic Gaussian representation, including a simple use of generalized kernel functions which significantly reduces particle hit counts.
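The depth-ordered shading of semi-transparent hits can be sketched independently of the BVH machinery. The snippet below assumes a precomputed per-ray hit list and simply sorts and alpha-composites front to back with early termination; hit generation via bounding meshes, as the abstract describes, is omitted here.

```python
# Front-to-back alpha compositing of per-ray particle hits (a sketch).
import numpy as np

def composite(hits, t_opaque=0.999):
    # hits: list of (depth, alpha, rgb) tuples for one ray
    rgb = np.zeros(3)
    transmittance = 1.0
    for _, alpha, color in sorted(hits, key=lambda h: h[0]):  # nearest first
        rgb += transmittance * alpha * np.asarray(color)
        transmittance *= 1.0 - alpha
        if 1.0 - transmittance > t_opaque:   # early termination: ray is opaque
            break
    return rgb, 1.0 - transmittance

hits = [(2.0, 0.6, (1, 0, 0)), (1.0, 0.3, (0, 1, 0)), (3.0, 0.9, (0, 0, 1))]
print(composite(hits))
```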
Citations: 0
Quark: Real-time, High-resolution, and General Neural View Synthesis
IF 6.2 · CAS Tier 1, Computer Science · Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2024-11-19 · DOI: 10.1145/3687953
John Flynn, Michael Broxton, Lukas Murmann, Lucy Chai, Matthew DuVall, Clément Godard, Kathryn Heal, Srinivas Kaza, Stephen Lombardi, Xuan Luo, Supreeth Achar, Kira Prabhu, Tiancheng Sun, Lynn Tsai, Ryan Overbeck
We present a novel neural algorithm for performing high-quality, high-resolution, real-time novel view synthesis. From a sparse set of input RGB images or video streams, our network both reconstructs the 3D scene and renders novel views at 1080p resolution at 30fps on an NVIDIA A100. Our feed-forward network generalizes across a wide variety of datasets and scenes and produces state-of-the-art quality for a real-time method. Our quality approaches, and in some cases surpasses, that of some of the top offline methods. In order to achieve these results we use a novel combination of several key concepts, and tie them together into a cohesive and effective algorithm. We build on previous works that represent the scene using semi-transparent layers and use an iterative learned render-and-refine approach to improve those layers. Instead of flat layers, our method reconstructs layered depth maps (LDMs) that efficiently represent scenes with complex depth and occlusions. The iterative update steps are embedded in a multi-scale, UNet-style architecture to perform as much compute as possible at reduced resolution. Within each update step, to better aggregate the information from multiple input views, we use a specialized Transformer-based network component. This allows the majority of the per-input image processing to be performed in the input image space, as opposed to layer space, further increasing efficiency. Finally, due to the real-time nature of our reconstruction and rendering, we dynamically create and discard the internal 3D geometry for each frame, generating the LDM for each view. Taken together, this produces a novel and effective algorithm for view synthesis. Through extensive evaluation, we demonstrate that we achieve state-of-the-art quality at real-time rates.
Citations: 0
Neural Kernel Regression for Consistent Monte Carlo Denoising
IF 6.2 · CAS Tier 1, Computer Science · Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2024-11-19 · DOI: 10.1145/3687949
Pengju Qiao, Qi Wang, Yuchi Huo, Shiji Zhai, Zixuan Xie, Wei Hua, Hujun Bao, Tao Liu
Unbiased Monte Carlo path tracing, extensively used in realistic rendering, produces undesirable noise, especially at low samples per pixel (spp). Recently, several methods have coped with this problem by feeding unbiased noisy images and auxiliary features to neural networks that either predict a fixed-sized kernel for convolution or directly predict the denoised result. Since it is impossible to produce arbitrarily high-spp images for the training dataset, network-based denoising fails to produce high-quality images at high spp. More specifically, network-based denoising is inconsistent and does not converge to the ground truth as the sampling rate increases. On the other hand, post-correction estimators yield a blending coefficient for a pair of biased and unbiased images, influenced by image errors or variances, to ensure the consistency of the denoised image. As the sampling rate increases, the blending coefficient of the unbiased image converges to 1, that is, the unbiased image is used as the denoised result. However, these estimators usually produce artifacts due to the difficulty of accurately predicting image errors or variances at low spp. To address the above problems, we take advantage of both kernel-predicting methods and post-correction denoisers. A novel kernel-based denoiser is proposed based on distribution-free kernel regression consistency theory, which does not explicitly combine the biased and unbiased results but constrains the kernel bandwidth to produce consistent results at high spp. Meanwhile, our kernel regression method explores bandwidth optimization in the robust auxiliary feature space instead of the noisy image space. This leads to consistent high-quality denoising at both low and high spp. Experiment results demonstrate that our method outperforms existing denoisers in accuracy and consistency.
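The consistency argument hinges on shrinking the kernel bandwidth as spp grows. A plain Nadaraya-Watson regression over auxiliary features illustrates the idea; the fixed schedule h = h0/sqrt(spp) below replaces the paper's learned bandwidth optimization and is purely illustrative.

```python
# Bandwidth-controlled kernel regression denoiser (a consistency sketch).
import numpy as np

def kernel_denoise(noisy, features, spp, h0=0.5, radius=3):
    h = h0 / np.sqrt(spp)   # bandwidth -> 0 as spp -> inf, so output -> input
    H, W = noisy.shape[:2]
    out = np.zeros_like(noisy)
    for y in range(H):
        for x in range(W):
            ys = slice(max(y - radius, 0), min(y + radius + 1, H))
            xs = slice(max(x - radius, 0), min(x + radius + 1, W))
            # Gaussian weights from auxiliary-feature distances, not pixel colors.
            d2 = ((features[ys, xs] - features[y, x]) ** 2).sum(-1)
            w = np.exp(-d2 / (2 * h * h))
            out[y, x] = (w[..., None] * noisy[ys, xs]).sum((0, 1)) / w.sum()
    return out

rng = np.random.default_rng(0)
img = rng.random((32, 32, 3))
feat = rng.random((32, 32, 4))   # e.g. albedo + depth as auxiliary features
print(kernel_denoise(img, feat, spp=16).shape)
```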
Citations: 0
Chebyshev Parameterization for Woven Fabric Modeling
IF 6.2 · CAS Tier 1, Computer Science · Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2024-11-19 · DOI: 10.1145/3687928
Annika Öhri, Aviv Segall, Jing Ren, Olga Sorkine-Hornung
Distortion-minimizing surface parameterization is an essential step for computing 2D pieces necessary to fabricate a target 3D shape from flat material. Garment design and textile fabrication are a prominent application example. Common distortion measures quantify length, angle or area preservation in an isotropic manner, so that when applied to woven textile fabrication, they implicitly assume fabric behaves like paper, which is inextensible in all directions and does not permit shearing. However, woven fabric differs significantly from paper: it exhibits anisotropy along the yarn directions and allows for some degree of shearing. We propose a novel distortion energy based on Chebyshev nets that anisotropically penalizes shearing and stretching. Our energy formulation can be used as an optimization objective for surface parameterization and is simple to minimize via a local-global algorithm. We demonstrate its advantages in modeling nets or woven fabric behavior over the commonly used isotropic distortion energies.
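A minimal version of such an anisotropic energy can be written down directly: penalize changes in yarn (warp/weft) lengths strongly, and shear more softly. The weights and the exact quadratic form below are illustrative assumptions, not the paper's energy.

```python
# Chebyshev-style anisotropic distortion measure for one map Jacobian (a sketch).
import numpy as np

def chebyshev_energy(J, w_stretch=100.0, w_shear=1.0):
    # J: 3x2 Jacobian of the 2D->3D map; columns are the warp/weft tangents.
    wu, wv = J[:, 0], J[:, 1]
    stretch = (np.dot(wu, wu) - 1.0) ** 2 + (np.dot(wv, wv) - 1.0) ** 2
    shear = np.dot(wu, wv) ** 2       # zero when the yarns stay orthogonal
    return w_stretch * stretch + w_shear * shear

J = np.array([[1.0, 0.2],
              [0.0, 1.0],
              [0.0, 0.0]])            # mild shear, warp direction unstretched
print(chebyshev_energy(J))
```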
Citations: 0
Geometry-Aware Retargeting for Two-Skinned Characters Interaction
IF 6.2 · CAS Tier 1, Computer Science · Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2024-11-19 · DOI: 10.1145/3687962
Inseo Jang, Soojin Choi, Seokhyeon Hong, Chaelin Kim, Junyong Noh
Interactive motion between multiple characters is widely used in games and movies. However, methods for generating interactive motions that account for characters' diverse mesh shapes have yet to be studied. We propose a Spatio Cooperative Transformer (SCT) to retarget the interacting motions of two characters with arbitrary mesh connectivity. SCT predicts the residuals of the root position and joint rotations, accounting for the shape difference between the source and target of the interacting characters. In addition, we introduce an anchor loss function for SCT to maintain the geometric distance between the interacting characters when they are retargeted. We also propose a motion augmentation method with deformation-based adaptation to prepare a source-target paired dataset with identical mesh connectivity for training. In experiments, our method achieved higher accuracy in semantic preservation and produced fewer inter-penetration artifacts between interacting characters for unseen characters and motions than the baselines. Moreover, we conducted a user evaluation using characters with various shapes, spanning low to high interaction levels, to demonstrate the better semantic preservation of our method compared to previous studies.
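The anchor loss admits a compact sketch: sampled anchor pairs on the two characters should preserve their source-space distances after retargeting. The anchor sampling and uniform weighting below are placeholders, not the paper's formulation.

```python
# Pairwise-distance-preserving anchor loss between two characters (a sketch).
import numpy as np

def anchor_loss(src_a, src_b, tgt_a, tgt_b):
    # *_a, *_b: (k,3) anchor positions on characters A and B
    d_src = np.linalg.norm(src_a - src_b, axis=1)   # distances before retargeting
    d_tgt = np.linalg.norm(tgt_a - tgt_b, axis=1)   # distances after retargeting
    return np.mean((d_tgt - d_src) ** 2)

rng = np.random.default_rng(1)
src_a, src_b = rng.random((8, 3)), rng.random((8, 3))
tgt_a = src_a + 0.05    # retargeted anchors drifted slightly
tgt_b = src_b
print(anchor_loss(src_a, src_b, tgt_a, tgt_b))   # > 0: contact geometry changed
```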
Citations: 0
Particle-Laden Fluid on Flow Maps
IF 6.2 · CAS Tier 1, Computer Science · Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2024-11-19 · DOI: 10.1145/3687916
Zhiqi Li, Duowen Chen, Candong Lin, Jinyuan Liu, Bo Zhu
We propose a novel framework for simulating ink as a particle-laden flow using particle flow maps. Our method addresses the limitations of existing flow-map techniques, which struggle with dissipative forces like viscosity and drag, thereby extending the application scope from solving the Euler equations to solving the Navier-Stokes equations with accurate viscosity and laden-particle treatment. Our key contribution lies in a coupling mechanism for two particle systems, coupling physical sediment particles and virtual flow-map particles on a background grid by solving a Poisson system. We implemented a novel path integral formula to incorporate viscosity and drag forces into the particle flow map process. Our approach enables state-of-the-art simulation of various particle-laden flow phenomena, exemplified by the bulging and breakup of suspension drop tails, torus formation, torus disintegration, and the coalescence of sedimenting drops. In particular, our method delivered high-fidelity ink diffusion simulations by accurately capturing vortex bulbs, viscous tails, fractal branching, and hierarchical structures.
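The two-way drag coupling can be illustrated with a linear (Stokes-like) model in which a particle relaxes toward the local fluid velocity while exerting an equal and opposite impulse on the fluid. The explicit update and drag coefficient below are simplifications for illustration; the paper treats drag within its flow-map path integral and Poisson coupling.

```python
# Momentum-conserving drag exchange between a particle and the fluid (a sketch).
import numpy as np

def drag_step(v_particle, u_fluid, mass_p, mass_f, gamma=5.0, dt=1e-2):
    f = gamma * (u_fluid - v_particle)       # linear drag force on the particle
    v_particle = v_particle + dt * f / mass_p
    u_fluid = u_fluid - dt * f / mass_f      # equal-and-opposite reaction
    return v_particle, u_fluid

v, u = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
for _ in range(200):
    v, u = drag_step(v, u, mass_p=0.1, mass_f=1.0)
print(v, u)   # both approach the common momentum-weighted velocity
```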
Citations: 0
ToonCrafter: Generative Cartoon Interpolation
IF 6.2 · CAS Tier 1, Computer Science · Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2024-11-19 · DOI: 10.1145/3687761
Jinbo Xing, Hanyuan Liu, Menghan Xia, Yong Zhang, Xintao Wang, Ying Shan, Tien-Tsin Wong
We introduce ToonCrafter, a novel approach that transcends traditional correspondence-based cartoon video interpolation, paving the way for generative interpolation. Traditional methods, which implicitly assume linear motion and the absence of complicated phenomena like dis-occlusion, often struggle with the exaggerated non-linear and large motions with occlusion commonly found in cartoons, resulting in implausible or even failed interpolation results. To overcome these limitations, we explore the potential of adapting live-action video priors to better suit cartoon interpolation within a generative framework. ToonCrafter effectively addresses the challenges faced when applying live-action video motion priors to generative cartoon interpolation. First, we design a toon rectification learning strategy that seamlessly adapts live-action video priors to the cartoon domain, resolving the domain gap and content leakage issues. Next, we introduce a dual-reference-based 3D decoder to compensate for details lost to the highly compressed latent prior spaces, ensuring the preservation of fine details in interpolation results. Finally, we design a flexible sketch encoder that empowers users with interactive control over the interpolation results. Experimental results demonstrate that our proposed method not only produces visually convincing and more natural dynamics, but also effectively handles dis-occlusion. The comparative evaluation demonstrates the notable superiority of our approach over existing competitors. Code and model weights are available at https://doubiiu.github.io/projects/ToonCrafter
Citations: 0