
Computer Graphics Forum: Latest Publications

Front Matter
IF 2.7 CAS Tier 4 (Computer Science) Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2024-11-14 DOI: 10.1111/cgf.14853
The 32nd Pacific Conference on Computer Graphics and Applications
Huangshan (Yellow Mountain), China
October 13 – 16, 2024

Conference Co-Chairs
Jan Bender, RWTH Aachen, Germany
Ligang Liu, University of Science and Technology of China, China
Denis Zorin, New York University, USA

Program Co-Chairs
Renjie Chen, University of Science and Technology of China, China
Tobias Ritschel, University College London, UK
Emily Whiting, Boston University, USA

Organization Co-Chairs
Xiao-Ming Fu, University of Science and Technology of China, China
Jianwei Hu, Huangshan University, China

The 2024 Pacific Graphics Conference, held in the scenic city of Huangshan, China from October 13-16, marked a milestone year with record-breaking participation and submissions. As one of the premier forums for computer graphics research, the conference maintained its high standards of academic excellence while taking measures to handle unprecedented submission volumes.

This year saw an extraordinary 360 full paper submissions, the highest in Pacific Graphics history. To maintain our rigorous review standards, we implemented a streamlined process including an initial sorting committee and desk review phase. Of the 305 submissions that proceeded to full review, each received a minimum of 3 reviews, with an average of 3.76 reviews per submission. Our double-blind review process was managed by an International Program Committee (IPC) comprising 112 experts, carefully selected to ensure regular renewal of perspectives in the field.

In the review process, each submission was assigned to two IPC members as primary and secondary reviewers. These reviewers, in turn, invited two additional tertiary reviewers, ensuring comprehensive evaluation. Authors were provided a five-day window to submit 1,000-word rebuttals addressing reviewer comments and potential misunderstandings. This year's IPC meeting was conducted virtually over one week through asynchronous discussions.

From the initial 360 submissions, 109 papers were conditionally accepted, yielding an acceptance rate of 30.28%. Following the acceptance notifications, the final publication count was 105 papers. These were distributed across publication venues as follows: 59 papers were selected for journal publication in Computer Graphics Forum, while 50 papers were accepted to the Conference Track and published in the Proceedings. Additionally, 6 papers were recommended for fast-track review with major revisions for future Computer Graphics Forum consideration.

The accepted papers showcase the breadth of modern computer graphics research, spanning computational photography, geometry and mesh processing, appearance, shading, texture, rendering technologies, 3D scanning and analysis, physical simulation, human animation and motion capture, crowd and cloth simulation, 3D printing and fabrication, digital content editing, and machine learning and generative modeling.
We sincerely thank the IPC members for their dedication in coordinating reviews and shepherding papers, all reviewers for their thorough and insightful evaluations, the authors for their valuable contributions and revisions, Stefanie Behnke of Eurographics Publishing for her great support, and the organizing team for an excellent conference experience in Huangshan. We hope these papers and the conference experience will inspire and motivate your future research.

The Pacific Graphics 2024 Program Co-Chairs

Noam Aigerman (Université de Montréal), Yağız Aksoy (Simon Fraser University), Seung-Hwan Baek (POSTECH), Christopher Batty (University of Waterloo), Mikhail Bessmeltsev (Université de Montréal), Nicolas Bonneel (CNRS / Univ.
{"title":"Front Matter","authors":"","doi":"10.1111/cgf.14853","DOIUrl":"https://doi.org/10.1111/cgf.14853","url":null,"abstract":"&lt;p&gt;The 32nd Pacific Conference on Computer Graphics and Applications&lt;/p&gt;&lt;p&gt;Huangshan (Yellow Mountain), China&lt;/p&gt;&lt;p&gt;October 13 – 16, 2024&lt;/p&gt;&lt;p&gt;&lt;b&gt;Conference Co-Chairs&lt;/b&gt;&lt;/p&gt;&lt;p&gt;Jan Bender, RWTH Aachen, Germany&lt;/p&gt;&lt;p&gt;Ligang Liu, University of Science and Technology of China, China&lt;/p&gt;&lt;p&gt;Denis Zorin, New York University, USA&lt;/p&gt;&lt;p&gt;&lt;b&gt;Program Co-Chairs&lt;/b&gt;&lt;/p&gt;&lt;p&gt;Renjie Chen, University of Science and Technology of China, China&lt;/p&gt;&lt;p&gt;Tobias Ritschel, University College London, UK&lt;/p&gt;&lt;p&gt;Emily Whiting, Boston University, USA&lt;/p&gt;&lt;p&gt;&lt;b&gt;Organization Co-Chairs&lt;/b&gt;&lt;/p&gt;&lt;p&gt;Xiao-Ming Fu, University of Science and Technology of China, China&lt;/p&gt;&lt;p&gt;Jianwei Hu, Huangshan University, China&lt;/p&gt;&lt;p&gt;The 2024 Pacific Graphics Conference, held in the scenic city of Huangshan, China from October 13-16, marked a milestone year with record-breaking participation and submissions. As one of the premier forums for computer graphics research, the conference maintained its high standards of academic excellence while taking measures to handle unprecedented submission volumes.&lt;/p&gt;&lt;p&gt;This year saw an extraordinary 360 full paper submissions, the highest in Pacific Graphics history. To maintain our rigorous review standards, we implemented a streamlined process including an initial sorting committee and desk review phase. Of the 305 submissions that proceeded to full review, each received a minimum of 3 reviews, with an average of 3.76 reviews per submission. Our double-blind review process was managed by an International Program Committee (IPC) comprising 112 experts, carefully selected to ensure regular renewal of perspectives in the field.&lt;/p&gt;&lt;p&gt;In the review process, each submission was assigned to two IPC members as primary and secondary reviewers. These reviewers, in turn, invited two additional tertiary reviewers, ensuring comprehensive evaluation. Authors were provided a five-day window to submit 1,000-word rebuttals addressing reviewer comments and potential misunderstandings. This year's IPC meeting was conducted virtually over one week through asynchronous discussions.&lt;/p&gt;&lt;p&gt;From the initial 360 submissions, 109 papers were conditionally accepted, yielding an acceptance rate of 30.28%. Following the acceptance notifications, resulting in a final publication count of 105 papers. These were distributed across publication venues as follows: 59 papers were selected for journal publication in Computer Graphics Forum, while 50 papers were accepted to the Conference Track and published in the Proceedings. 
Additionally, 6 papers were recommended for fast-track review with major revisions for future Computer Graphics Forum consideration.&lt;/p&gt;&lt;p&gt;The accepted papers showcase the breadth of modern computer graphics research, spanning computational photography, geometry and mesh processing, appearance, shading, texture, rendering technologies, 3D scanning and analysis, physical simulation, human animation and motion capture, crowd and cloth simulation, 3D printing and fabrication, dig","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"43 7","pages":"i-xxii"},"PeriodicalIF":2.7,"publicationDate":"2024-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cgf.14853","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142664863","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
DiffPop: Plausibility-Guided Object Placement Diffusion for Image Composition
IF 2.7 CAS Tier 4 (Computer Science) Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2024-11-14 DOI: 10.1111/cgf.15246
Jiacheng Liu, Hang Zhou, Shida Wei, Rui Ma

In this paper, we address the problem of plausible object placement for the challenging task of realistic image composition. We propose DiffPop, the first framework that utilizes a plausibility-guided denoising diffusion probabilistic model to learn the scale and spatial relations among multiple objects and the corresponding scene image. First, we train an unguided diffusion model to directly learn the object placement parameters in a self-supervised manner. Then, we develop a human-in-the-loop pipeline which exploits human labeling on the diffusion-generated composite images to provide the weak supervision for training a structural plausibility classifier. The classifier is further used to guide the diffusion sampling process towards generating plausible object placements. Experimental results verify the superiority of our method for producing plausible and diverse composite images on the new Cityscapes-OP dataset and the public OPA dataset, as well as demonstrate its potential in applications such as data augmentation and multi-object placement tasks. Our dataset and code will be released.
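To make the guidance step concrete, the sketch below shows classifier-guided DDPM sampling over a small vector of placement parameters (x, y, scale). The denoiser, plausibility classifier, noise schedule, and guidance scale are illustrative assumptions rather than the authors' released implementation; in the paper's setting the classifier would be trained on human-labeled composites.

```python
# Minimal sketch of classifier-guided DDPM sampling for object placement.
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alphas_bar = torch.cumprod(alphas, dim=0)

def guided_placement_sampling(denoiser, classifier, scene_feat, guidance_scale=2.0):
    """Sample placement parameters theta = (x, y, scale) with plausibility guidance."""
    theta = torch.randn(1, 3)                                   # start from pure Gaussian noise
    for t in reversed(range(T)):
        a_t, ab_t = alphas[t], alphas_bar[t]
        eps = denoiser(theta, torch.tensor([t]), scene_feat)    # predicted noise at step t
        # Standard DDPM posterior mean of the unguided model.
        mean = (theta - (1.0 - a_t) / torch.sqrt(1.0 - ab_t) * eps) / torch.sqrt(a_t)
        # Plausibility guidance: shift the mean along grad log p(plausible | theta_t).
        with torch.enable_grad():
            theta_in = theta.detach().requires_grad_(True)
            logits = classifier(theta_in, scene_feat)           # (1, 2): implausible / plausible
            log_p = logits.log_softmax(dim=-1)[:, 1].sum()
            grad = torch.autograd.grad(log_p, theta_in)[0]
        mean = mean + guidance_scale * betas[t] * grad
        noise = torch.randn_like(theta) if t > 0 else torch.zeros_like(theta)
        theta = mean + torch.sqrt(betas[t]) * noise
    return theta  # placement (x, y, scale) in normalized coordinates
```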

Citations: 0
iShapEditing: Intelligent Shape Editing with Diffusion Models
IF 2.7 CAS Tier 4 (Computer Science) Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2024-11-08 DOI: 10.1111/cgf.15253
Jing Li, Juyong Zhang, Falai Chen

Recent advancements in generative models have made image editing highly effective, with impressive results. Extending this progress to 3D geometry models, we introduce iShapEditing, a novel framework for 3D shape editing which is applicable to both generated and real shapes. Users manipulate shapes by dragging handle points to corresponding targets, offering an intuitive and intelligent editing interface. Leveraging the Triplane Diffusion model and robust intermediate feature correspondence, our framework utilizes classifier guidance to adjust noise representations during the sampling process, ensuring alignment with user expectations while preserving plausibility. For real shapes, we employ shape predictions at each time step alongside a DDPM-based inversion algorithm to derive their latent codes, facilitating seamless editing. iShapEditing provides effective and intelligent control over shapes without the need for additional model training or fine-tuning. Experimental examples demonstrate the effectiveness and superiority of our method in terms of editing accuracy and plausibility.
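The drag-style control can be pictured as a guidance term evaluated on the noisy latent at each sampling step. The sketch below uses a DragGAN-like motion-supervision loss on features sampled from a triplane latent; the feature extractor, handle/target tensors, and step sizes are assumptions, not the paper's exact formulation.

```python
# Toy sketch of drag-style guidance applied to a noisy triplane latent.
import torch
import torch.nn.functional as F

def drag_guidance_step(latent_t, feature_fn, handles, targets, step=0.1):
    """One guidance step: pull content under the handle points toward their targets.

    latent_t   : (1, C, H, W) noisy triplane latent at the current timestep
    feature_fn : maps a latent to a dense feature map of shape (1, D, H, W)
    handles    : (K, 2) handle locations in [-1, 1]^2
    targets    : (K, 2) user-specified target locations in [-1, 1]^2
    """
    latent = latent_t.detach().requires_grad_(True)
    feats = feature_fn(latent)
    # Take a small step from each handle toward its target.
    direction = F.normalize(targets - handles, dim=-1)
    moved = handles + 0.05 * direction
    # Features currently under the handles (frozen) should reappear at the moved
    # positions (differentiable), which drags the content toward the targets.
    f_src = F.grid_sample(feats.detach(), handles.view(1, -1, 1, 2), align_corners=True)
    f_dst = F.grid_sample(feats, moved.view(1, -1, 1, 2), align_corners=True)
    loss = F.l1_loss(f_dst, f_src)
    grad = torch.autograd.grad(loss, latent)[0]
    return latent_t - step * grad  # nudged latent handed back to the diffusion sampler
```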

Citations: 0
𝒢-Style: Stylized Gaussian Splatting
IF 2.7 CAS Tier 4 (Computer Science) Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2024-11-08 DOI: 10.1111/cgf.15259
Áron Samuel Kovács, Pedro Hermosilla, Renata G. Raidou

We introduce 𝒢-Style, a novel algorithm designed to transfer the style of an image onto a 3D scene represented using Gaussian Splatting. Gaussian Splatting is a powerful 3D representation for novel view synthesis, as—compared to other approaches based on Neural Radiance Fields—it provides fast scene renderings and user control over the scene. Recent pre-prints have demonstrated that the style of Gaussian Splatting scenes can be modified using an image exemplar. However, since the scene geometry remains fixed during the stylization process, current solutions fall short of producing satisfactory results. Our algorithm aims to address these limitations by following a three-step process: In a pre-processing step, we remove undesirable Gaussians with large projection areas or highly elongated shapes. Subsequently, we combine several losses carefully designed to preserve different scales of the style in the image, while maintaining as much as possible the integrity of the original scene content. During the stylization process and following the original design of Gaussian Splatting, we split Gaussians where additional detail is necessary within our scene by tracking the gradient of the stylized color. Our experiments demonstrate that 𝒢-Style generates high-quality stylizations within just a few minutes, outperforming existing methods both qualitatively and quantitatively.
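The pre-processing step described above can be illustrated with a simple cull over the Gaussian parameters. In the sketch below, the thresholds, the use of per-axis standard deviations as scales, and the footprint proxy are assumptions made for illustration; they are not the paper's exact criteria.

```python
# Sketch of a pre-processing filter that removes problematic Gaussians before stylization.
import torch

def filter_gaussians(means, scales, max_area=0.01, max_anisotropy=10.0):
    """means: (N, 3) positions; scales: (N, 3) per-axis standard deviations."""
    s_sorted, _ = torch.sort(scales, dim=-1, descending=True)
    # Elongation: ratio of the longest to the shortest axis of each Gaussian.
    anisotropy = s_sorted[:, 0] / s_sorted[:, 2].clamp_min(1e-8)
    # Crude proxy for the projected footprint: product of the two largest extents.
    area = s_sorted[:, 0] * s_sorted[:, 1]
    keep = (anisotropy < max_anisotropy) & (area < max_area)
    return means[keep], scales[keep], keep
```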

Citations: 0
LGSur-Net: A Local Gaussian Surface Representation Network for Upsampling Highly Sparse Point Cloud
IF 2.7 CAS Tier 4 (Computer Science) Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2024-11-08 DOI: 10.1111/cgf.15257
Zijian Xiao, Tianchen Zhou, Li Yao

We introduce LGSur-Net, an end-to-end deep learning architecture, engineered for the upsampling of sparse point clouds. LGSur-Net harnesses a trainable Gaussian local representation by positioning a series of Gaussian functions on an oriented plane, complemented by the optimization of individual covariance matrices. The integration of parametric factors allows for the encoding of the plane's rotational dynamics and Gaussian weightings into a linear transformation matrix. Then we extract the feature maps from the point cloud and its adjoining edges and learn the local Gaussian depictions to accurately model the shape's local geometry through an attention-based network. The Gaussian representation's inherent high-order continuity endows LGSur-Net with the natural ability to predict surface normals and support upsampling to any specified resolution. Comprehensive experiments validate that LGSur-Net efficiently learns from sparse data inputs, surpassing the performance of existing state-of-the-art upsampling methods. Our code is publicly available at https://github.com/Rangiant5b72/LGSur-Net.
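As a rough picture of what a local Gaussian surface representation can look like, the snippet below evaluates a height field over an oriented tangent plane as a weighted sum of 2D Gaussian bumps and resamples it at an arbitrary resolution. The centers, weights, and covariances here are illustrative placeholders, not the network's learned output.

```python
# Minimal sketch of a local Gaussian height field and its dense resampling.
import numpy as np

def local_gaussian_height(uv, centers, weights, inv_covs):
    """uv: (M, 2) plane coordinates; returns (M,) heights above the plane.

    centers  : (K, 2) Gaussian centers on the oriented plane
    weights  : (K,)   per-Gaussian amplitudes
    inv_covs : (K, 2, 2) inverse covariance matrices (one per Gaussian)
    """
    d = uv[:, None, :] - centers[None, :, :]                   # (M, K, 2) offsets
    mahal = np.einsum('mki,kij,mkj->mk', d, inv_covs, d)       # squared Mahalanobis distances
    return (weights[None, :] * np.exp(-0.5 * mahal)).sum(-1)   # (M,)

def upsample_patch(centers, weights, inv_covs, res=32):
    """Densely resample the local patch at any chosen resolution."""
    u = np.linspace(-1, 1, res)
    uv = np.stack(np.meshgrid(u, u), -1).reshape(-1, 2)
    h = local_gaussian_height(uv, centers, weights, inv_covs)
    return np.concatenate([uv, h[:, None]], axis=1)            # (res*res, 3) points in the plane frame
```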

Citations: 0
Cinematic Gaussians: Real-Time HDR Radiance Fields with Depth of Field
IF 2.7 CAS Tier 4 (Computer Science) Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2024-11-07 DOI: 10.1111/cgf.15214
Chao Wang, Krzysztof Wolski, Bernhard Kerbl, Ana Serrano, Mojtaba Bemana, Hans-Peter Seidel, Karol Myszkowski, Thomas Leimkühler

Radiance field methods represent the state of the art in reconstructing complex scenes from multi-view photos. However, these reconstructions often suffer from one or both of the following limitations: First, they typically represent scenes in low dynamic range (LDR), which restricts their use to evenly lit environments and hinders immersive viewing experiences. Secondly, their reliance on a pinhole camera model, assuming all scene elements are in focus in the input images, presents practical challenges and complicates refocusing during novel-view synthesis. Addressing these limitations, we present a lightweight method based on 3D Gaussian Splatting that utilizes multi-view LDR images of a scene with varying exposure times, apertures, and focus distances as input to reconstruct a high-dynamic-range (HDR) radiance field. By incorporating analytical convolutions of Gaussians based on a thin-lens camera model as well as a tonemapping module, our reconstructions enable the rendering of HDR content with flexible refocusing capabilities. We demonstrate that our combined treatment of HDR and depth of field facilitates real-time cinematic rendering, outperforming the state of the art.
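The thin-lens model mentioned above can be sketched as folding the circle of confusion into a splatted Gaussian's screen-space covariance, which is one way to realize an analytical convolution of Gaussians. All camera parameters and the isotropic-blur simplification below are assumptions for illustration.

```python
# Sketch: thin-lens depth of field applied to a 2D screen-space Gaussian.
import numpy as np

def circle_of_confusion(depth, focus_dist, focal_len, aperture):
    """Thin-lens CoC diameter (in the same units as focal_len) at a given scene depth."""
    return aperture * focal_len * abs(depth - focus_dist) / (depth * (focus_dist - focal_len))

def defocused_covariance(cov2d, depth, focus_dist, focal_len, aperture, pixels_per_unit):
    """Add an isotropic defocus blur to a 2x2 screen-space Gaussian covariance."""
    coc = circle_of_confusion(depth, focus_dist, focal_len, aperture) * pixels_per_unit
    sigma = coc / 2.0                        # treat the CoC disk roughly as a Gaussian radius
    return cov2d + (sigma ** 2) * np.eye(2)  # convolving Gaussians adds their covariances
```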

Citations: 0
GETr: A Geometric Equivariant Transformer for Point Cloud Registration
IF 2.7 CAS Tier 4 (Computer Science) Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2024-11-07 DOI: 10.1111/cgf.15216
Chang Yu, Sanguo Zhang, Li-Yong Shen

As a fundamental problem in computer vision, 3D point cloud registration (PCR) aims to seek the optimal transformation to align point cloud pairs. Meanwhile, equivariance lies at the core of matching point clouds at arbitrary poses. In this paper, we propose GETr, a geometric equivariant transformer for PCR. By learning the point-wise orientations, we decouple the coordinates from the pose of the point clouds, which is the key to achieving equivariance in our framework. We then utilize an attention mechanism to learn geometric features for superpoint matching; the proposed novel self-attention mechanism encodes the geometric information of the point clouds. Finally, a coarse-to-fine scheme is used to obtain high-quality correspondences for registration. Extensive experiments on both indoor and outdoor benchmarks demonstrate that our method outperforms various existing state-of-the-art methods.
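The pose-decoupling idea can be sketched by expressing each point's neighborhood in a per-point local frame, so that the resulting coordinates do not change when the whole cloud is rigidly transformed. The sketch below builds the frame from neighborhood PCA rather than a learned orientation predictor; the k-NN size and the sign-fixing rule are assumptions.

```python
# Sketch: rotation-invariant local coordinates via per-point PCA frames.
import numpy as np

def local_frames(points, k=16):
    """points: (N, 3). Returns (N, k, 3) neighbor offsets expressed in per-point frames."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    knn = np.argsort(d2, axis=1)[:, 1:k + 1]                    # k nearest neighbors (excluding self)
    canon = np.empty((len(points), k, 3))
    for i, idx in enumerate(knn):
        offsets = points[idx] - points[i]
        # Orthonormal frame from the neighborhood's principal directions, with signs
        # fixed by the mean projection so the frame is reproducible.
        _, _, vt = np.linalg.svd(offsets, full_matrices=False)
        signs = np.sign(np.sum(offsets @ vt.T, axis=0, keepdims=True) + 1e-9)
        frame = vt.T * signs
        canon[i] = offsets @ frame                               # coordinates in the local frame
    return canon
```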

Citations: 0
Strictly Conservative Neural Implicits
IF 2.7 CAS Tier 4 (Computer Science) Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2024-11-07 DOI: 10.1111/cgf.15241
I. Ludwig, M. Campen

We describe a method to convert 3D shapes into neural implicit form such that the shape is approximated in a guaranteed conservative manner. This means the input shape is strictly contained inside the neural implicit or, alternatively, vice versa. Such conservative approximations are of interest in a variety of applications, including collision detection, occlusion culling, or intersection testing. Our approach is the first to guarantee conservativeness in this context of neural implicits. We support input given as mesh, voxel set, or implicit function. Adaptive affine arithmetic is employed in the neural network fitting process, enabling the reasoning over infinite sets of points despite using a finite set of training data. Combined with an interior point style optimization approach this yields the desired guarantee.
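The certification idea of reasoning over infinite point sets can be sketched with plain interval arithmetic, a coarser relative of the affine arithmetic used above: propagating an input box through a small ReLU MLP yields guaranteed bounds on the implicit function over the whole box. The two-layer architecture and weights below are placeholders.

```python
# Sketch: interval bound propagation through a ReLU MLP implicit function.
import numpy as np

def interval_linear(lo, hi, W, b):
    """Bounds of W @ x + b when x ranges over the axis-aligned box [lo, hi]."""
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def mlp_bounds(box_lo, box_hi, layers):
    """layers: list of (W, b) pairs; ReLU between layers, none after the last."""
    lo, hi = box_lo, box_hi
    for i, (W, b) in enumerate(layers):
        lo, hi = interval_linear(lo, hi, W, b)
        if i < len(layers) - 1:
            lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)
    return lo, hi   # guaranteed bounds on the network output over the box

# A box is certified to lie entirely outside the neural implicit if the lower
# bound of the signed distance is positive over the whole box:
#   lo, _ = mlp_bounds(box_lo, box_hi, layers);  certified_outside = (lo > 0)
```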

Citations: 0
SCARF: Scalable Continual Learning Framework for Memory-efficient Multiple Neural Radiance Fields
IF 2.7 CAS Tier 4 (Computer Science) Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2024-11-07 DOI: 10.1111/cgf.15255
Yuze Wang, Junyi Wang, Chen Wang, Wantong Duan, Yongtang Bao, Yue Qi

This paper introduces a novel continual learning framework for synthesising novel views of multiple scenes, learning multiple 3D scenes incrementally and updating the network parameters only with the training data of the upcoming new scene. We build on Neural Radiance Fields (NeRF), which uses a multi-layer perceptron to model the density and radiance field of a scene as an implicit function. While NeRF and its extensions have shown a powerful capability of rendering photo-realistic novel views of a single 3D scene, managing these growing 3D NeRF assets efficiently is a new scientific problem. Very few works focus on the efficient representation or continual learning capability of multiple scenes, which is crucial for the practical applications of NeRF. To achieve these goals, our key idea is to represent multiple scenes as the linear combination of a cross-scene weight matrix and a set of scene-specific weight matrices generated by a global parameter generator. Furthermore, we propose an uncertain-surface knowledge distillation strategy to transfer the radiance field knowledge of previous scenes to the new model. Representing multiple 3D scenes with such weight matrices significantly reduces memory requirements. At the same time, the uncertain-surface distillation strategy largely overcomes the catastrophic forgetting problem and maintains the photo-realistic rendering quality of previous scenes. Experiments show that the proposed approach achieves state-of-the-art rendering quality for continual-learning NeRF on the NeRF-Synthetic, LLFF, and TanksAndTemples datasets while preserving an extra-low storage cost.
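The weight factorization described above can be sketched as a linear layer whose weight is a shared cross-scene matrix plus a scene-specific matrix emitted by a global generator conditioned on a scene embedding. Dimensions, the generator architecture, and the embedding size below are illustrative assumptions.

```python
# Sketch: a scene-conditioned linear layer combining shared and generated weights.
import torch
import torch.nn as nn

class SceneConditionedLinear(nn.Module):
    def __init__(self, in_dim, out_dim, scene_dim=32):
        super().__init__()
        self.shared = nn.Parameter(torch.randn(out_dim, in_dim) * 0.02)  # cross-scene weights
        self.bias = nn.Parameter(torch.zeros(out_dim))
        # Global parameter generator: scene embedding -> scene-specific weight matrix.
        self.generator = nn.Sequential(
            nn.Linear(scene_dim, 128), nn.ReLU(),
            nn.Linear(128, out_dim * in_dim),
        )
        self.in_dim, self.out_dim = in_dim, out_dim

    def forward(self, x, scene_embedding):
        delta = self.generator(scene_embedding).view(self.out_dim, self.in_dim)
        return x @ (self.shared + delta).t() + self.bias

# Usage: each scene is represented only by an embedding, so adding a scene does
# not require storing a full copy of the radiance-field MLP.
layer = SceneConditionedLinear(63, 256)
x = torch.randn(1024, 63)                 # positionally encoded sample points
feat = layer(x, torch.randn(32))          # features for the scene with this embedding
```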

Citations: 0
Digital Garment Alteration
IF 2.7 CAS Tier 4 (Computer Science) Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2024-11-07 DOI: 10.1111/cgf.15248
A. M. Eggler, R. Falque, M. Liu, T. Vidal-Calleja, O. Sorkine-Hornung, N. Pietroni

Garment alteration is a practical technique to adapt an existing garment to fit a target body shape. Typically executed by skilled tailors, this process involves a series of strategic fabric operations—removing or adding material—to achieve the desired fit on a target body. We propose an innovative approach to automate this process by computing a set of practically feasible modifications that adapt an existing garment to fit a different body shape. We first assess the garment's fit on a reference body; then, we replicate this fit on the target by deriving a set of pattern modifications via a linear program. We compute these alterations by employing an iterative process that alternates between global geometric optimization and physical simulation. Our method utilizes geometry-based simulation of woven fabric's anisotropic behavior, accounts for tailoring details like seam matching, and incorporates elements such as darts or gussets. We validate our technique by producing digital and physical garments, demonstrating practical and achievable alterations.
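The "alterations via a linear program" idea can be pictured with a toy LP that chooses per-panel width changes meeting target girth increases while minimizing the total fabric added or removed. The panel names, bounds, and measurements below are made up for illustration; the paper's actual LP couples fit on the target body with tailoring constraints such as seam matching.

```python
# Toy sketch: garment alteration posed as a linear program (scipy).
import numpy as np
from scipy.optimize import linprog

# Decision variables: width change (cm) of [front panel, back panel, sleeve],
# split into positive and negative parts so the objective |x| stays linear.
n = 3
c = np.ones(2 * n)                       # minimize total fabric added + removed
# Constraints: front + back changes must add at least 4 cm at the waist,
# and the sleeve must gain at least 1 cm; written as A_ub x <= b_ub.
A_ub = np.array([
    [-1, -1, 0, 1, 1, 0],                # -(front) - (back) <= -4
    [0, 0, -1, 0, 0, 1],                 # -(sleeve)         <= -1
], dtype=float)
b_ub = np.array([-4.0, -1.0])
bounds = [(0, 6)] * (2 * n)              # each edit limited to 6 cm of fabric

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
front, back, sleeve = res.x[:n] - res.x[n:]
print(f"front {front:+.1f} cm, back {back:+.1f} cm, sleeve {sleeve:+.1f} cm")
```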

Citations: 0