
Computer Graphics World: Latest Publications

Uncertainty guidance in proton therapy planning visualization
Q4 Computer Science Pub Date: 2023-02-01 DOI: 10.2139/ssrn.4263600
Maath Musleh, L. Muren, L. Toussaint, A. Vestergaard, E. Gröller, R. Raidou
{"title":"Uncertainty guidance in proton therapy planning visualization","authors":"Maath Musleh, L. Muren, L. Toussaint, A. Vestergaard, E. Gröller, R. Raidou","doi":"10.2139/ssrn.4263600","DOIUrl":"https://doi.org/10.2139/ssrn.4263600","url":null,"abstract":"","PeriodicalId":51003,"journal":{"name":"Computer Graphics World","volume":"211 1","pages":"166-179"},"PeriodicalIF":0.0,"publicationDate":"2023-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79011910","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Vulkan all the way: Transitioning to a modern low-level graphics API in academia
Q4 Computer Science Pub Date: 2023-02-01 DOI: 10.2139/ssrn.4263599
Johannes Unterguggenberger, B. Kerbl, M. Wimmer
For over two decades, the OpenGL API provided users with the means for implementing versatile, feature-rich, and portable real-time graphics applications. Consequently, it has been widely adopted by practitioners and educators alike and is deeply ingrained in many curricula that teach real-time graphics in higher education. Over the years, the architecture of graphics processing units (GPUs) incrementally diverged from OpenGL's conceptual design. The more recently introduced Vulkan API provides a more modern, fine-grained approach for interfacing with the GPU, which allows a high level of controllability and, thereby, deep insights into the inner workings of modern GPUs. This property makes the Vulkan API especially well suited to teaching graphics programming in university education, where fundamental knowledge should be conveyed. Hence, it stands to reason that educators who have their students' best interests at heart should provide them with corresponding lecture material. However, Vulkan is notoriously verbose and rather challenging for first-time users, so transitioning to this new API bears a considerable risk of failing to achieve the expected teaching goals. In this paper, we document our experiences after teaching Vulkan in both introductory and advanced graphics courses side-by-side with conventional OpenGL. A collection of surveys enables us to draw conclusions about perceived workload, difficulty, and students' acceptance of either approach. In doing so, we identify suitable conditions and recommendations for teaching Vulkan to both undergraduate and graduate students. © 2023 The Author(s). Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
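For readers gauging the verbosity claim: before any geometry can be drawn, a Vulkan application must explicitly fill descriptor structures that OpenGL users rarely see, since an OpenGL context is usually created behind the scenes by a windowing library such as GLFW or SDL. The C++ sketch below is purely illustrative and not taken from the paper; names like "teaching-demo" are placeholders.

```cpp
// Minimal, illustrative VkInstance creation (not from the paper).
#include <vulkan/vulkan.h>
#include <cstdio>

int main() {
    VkApplicationInfo appInfo{};                        // zero-initialized: pNext = nullptr
    appInfo.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
    appInfo.pApplicationName = "teaching-demo";         // placeholder name
    appInfo.applicationVersion = VK_MAKE_VERSION(1, 0, 0);
    appInfo.pEngineName = "none";
    appInfo.engineVersion = VK_MAKE_VERSION(1, 0, 0);
    appInfo.apiVersion = VK_API_VERSION_1_2;

    VkInstanceCreateInfo createInfo{};
    createInfo.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    createInfo.pApplicationInfo = &appInfo;
    // In a real application, required extensions and validation layers
    // would be enumerated and listed here before the call below.

    VkInstance instance = VK_NULL_HANDLE;
    if (vkCreateInstance(&createInfo, nullptr, &instance) != VK_SUCCESS) {
        std::fprintf(stderr, "failed to create Vulkan instance\n");
        return 1;
    }

    // ... physical device, logical device, queues, swapchain, pipelines ...

    vkDestroyInstance(instance, nullptr);
    return 0;
}
```

Instance creation is only the first of many explicit steps (device selection, swapchain, render passes, pipelines, synchronization) before a first triangle appears; this is the fine-grained control the abstract credits with giving deep insights into how modern GPUs work.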
Citations: 1
Turning carpets into multi-image switchable displays
Q4 Computer Science Pub Date: 2023-02-01 DOI: 10.2139/ssrn.4213086
Takumi Yamamoto, Yutaka Sugiura
{"title":"Turning carpets into multi-image switchable displays","authors":"Takumi Yamamoto, Yutaka Sugiura","doi":"10.2139/ssrn.4213086","DOIUrl":"https://doi.org/10.2139/ssrn.4213086","url":null,"abstract":"","PeriodicalId":51003,"journal":{"name":"Computer Graphics World","volume":"378 1","pages":"190-198"},"PeriodicalIF":0.0,"publicationDate":"2023-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80626381","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A novel Multi-scale architecture driven by decoupled semantic attention transfer for person image generation
Q4 Computer Science Pub Date: 2023-01-01 DOI: 10.2139/ssrn.4123243
Meng Wang, Jiaxing Chen, Haipeng Liu
{"title":"A novel Multi-scale architecture driven by decoupled semantic attention transfer for person image generation","authors":"Meng Wang, Jiaxing Chen, Haipeng Liu","doi":"10.2139/ssrn.4123243","DOIUrl":"https://doi.org/10.2139/ssrn.4123243","url":null,"abstract":"","PeriodicalId":51003,"journal":{"name":"Computer Graphics World","volume":"42 1","pages":"24-36"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77375715","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
A large-scale point cloud semantic segmentation network via local dual features and global correlations
Q4 Computer Science Pub Date: 2023-01-01 DOI: 10.2139/ssrn.4226670
Yiqiang Zhao, Xingyi Ma, Bin Hu, Qi Zhang, Mao Ye, Guoqing Zhou
{"title":"A large-scale point cloud semantic segmentation network via local dual features and global correlations","authors":"Yiqiang Zhao, Xingyi Ma, Bin Hu, Qi Zhang, Mao Ye, Guoqing Zhou","doi":"10.2139/ssrn.4226670","DOIUrl":"https://doi.org/10.2139/ssrn.4226670","url":null,"abstract":"","PeriodicalId":51003,"journal":{"name":"Computer Graphics World","volume":"1 1","pages":"133-144"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89123111","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
UDAformer: Underwater image enhancement based on dual attention transformer
Q4 Computer Science Pub Date: 2023-01-01 DOI: 10.2139/ssrn.4162640
Zhen Shen, Haiyong Xu, Ting Luo, Yang Song, Zhouyan He
{"title":"UDAformer: Underwater image enhancement based on dual attention transformer","authors":"Zhen Shen, Haiyong Xu, Ting Luo, Yang Song, Zhouyan He","doi":"10.2139/ssrn.4162640","DOIUrl":"https://doi.org/10.2139/ssrn.4162640","url":null,"abstract":"","PeriodicalId":51003,"journal":{"name":"Computer Graphics World","volume":"4 1","pages":"77-88"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79953028","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Light subpath reservoir for interactive ray-traced global illumination
Q4 Computer Science Pub Date: 2023-01-01 DOI: 10.2139/ssrn.4202290
Fuyan Liu, Junwen Gan
{"title":"Light subpath reservoir for interactive ray-traced global illumination","authors":"Fuyan Liu, Junwen Gan","doi":"10.2139/ssrn.4202290","DOIUrl":"https://doi.org/10.2139/ssrn.4202290","url":null,"abstract":"","PeriodicalId":51003,"journal":{"name":"Computer Graphics World","volume":"35 1","pages":"37-46"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74926290","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Anisotropic screen space rendering for particle-based fluid simulation
Q4 Computer Science Pub Date: 2022-12-01 DOI: 10.2139/ssrn.4217325
Yanrui Xu, Yuanmu Xu, Yuege Xiong, Dou Yin, Xiaojuan Ban, Xiaokun Wang, Jian Chang, Jian Zhang
This paper proposes a real-time fluid rendering method based on the screen space rendering scheme for particle-based fluid simulation. Our method applies anisotropic transformations to the point sprites, stretching them along appropriate axes to obtain smooth fluid surfaces based on a weighted principal component analysis of the particle distribution. Then we combine the processed anisotropic point sprite information with popular screen space filters, such as curvature flow and narrow-range filters, to process the depth information. Experiments show that the proposed method can efficiently resolve the jagged edges and surface unevenness that existed in previous methods while preserving sharp high-frequency details.
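For context, the weighted PCA mentioned in the abstract is commonly realized by eigen-decomposing a weighted covariance matrix of each particle's neighbourhood and stretching the sprite along the resulting principal axes, in the spirit of Yu and Turk's anisotropic kernels. The C++/Eigen sketch below is an approximation under that assumption, not the authors' implementation; the weight function, clamping ratio, and neighbour gathering are placeholders.

```cpp
// Illustrative weighted-PCA anisotropy for a point sprite (assumed scheme).
#include <Eigen/Dense>
#include <vector>
#include <cmath>

// Smooth, compactly supported weight; any SPH-style kernel would do.
static float weight(float r, float h) {
    if (r >= h) return 0.0f;
    float q = r / h;
    return 1.0f - q * q * q;
}

// Returns a 3x3 transform that maps an isotropic sprite to an anisotropic one.
Eigen::Matrix3f anisotropyTransform(const Eigen::Vector3f& xi,
                                    const std::vector<Eigen::Vector3f>& neighbours,
                                    float h) {
    // Weighted mean of the neighbourhood.
    Eigen::Vector3f mean = Eigen::Vector3f::Zero();
    float wsum = 0.0f;
    for (const auto& xj : neighbours) {
        float w = weight((xj - xi).norm(), h);
        mean += w * xj;
        wsum += w;
    }
    if (wsum <= 0.0f || neighbours.size() < 4u)
        return Eigen::Matrix3f::Identity();   // isolated particle: keep it round
    mean /= wsum;

    // Weighted covariance: the "weighted PCA" of the particle distribution.
    Eigen::Matrix3f C = Eigen::Matrix3f::Zero();
    for (const auto& xj : neighbours) {
        float w = weight((xj - xi).norm(), h);
        Eigen::Vector3f d = xj - mean;
        C += w * d * d.transpose();
    }
    C /= wsum;

    // Principal axes (eigenvectors) and extents (eigenvalues, ascending).
    Eigen::SelfAdjointEigenSolver<Eigen::Matrix3f> es(C);
    Eigen::Vector3f sigma = es.eigenvalues().cwiseMax(1e-8f);
    Eigen::Matrix3f R = es.eigenvectors();

    // Clamp the stretch ratio so flat neighbourhoods do not degenerate,
    // then normalize so the sprite keeps roughly its original volume.
    const float kMaxRatio = 4.0f;
    sigma = sigma.cwiseMax(sigma.maxCoeff() / kMaxRatio);
    sigma /= std::cbrt(sigma.prod());

    // Stretch along the principal axes: G = R * diag(sqrt(sigma)) * R^T.
    Eigen::Vector3f scale = sigma.cwiseSqrt();
    return R * scale.asDiagonal() * R.transpose();
}
```

The returned 3x3 matrix can then be applied to the sprite's axes before the screen-space depth filtering stage described in the abstract.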
Citations: 0
BrightFormer: A transformer to brighten the image
Q4 Computer Science Pub Date: 2022-12-01 DOI: 10.2139/ssrn.4194700
Yong Wang, Bo Li, Xi Yuan
{"title":"BrightFormer: A transformer to brighten the image","authors":"Yong Wang, Bo Li, Xi Yuan","doi":"10.2139/ssrn.4194700","DOIUrl":"https://doi.org/10.2139/ssrn.4194700","url":null,"abstract":"","PeriodicalId":51003,"journal":{"name":"Computer Graphics World","volume":"33 1","pages":"49-57"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80805601","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Learning Neural Implicit Representations with Surface Signal Parameterizations
Q4 Computer Science Pub Date: 2022-11-01 DOI: 10.48550/arXiv.2211.00519
Yanran Guan, Andrei Chubarau, Ruby Rao, D. Nowrouzezahrai
Neural implicit surface representations have recently emerged as a popular alternative to explicit 3D object encodings, such as polygonal meshes, tabulated points, or voxels. While significant work has improved the geometric fidelity of these representations, much less attention has been given to their final appearance. Traditional explicit object representations commonly couple the 3D shape data with auxiliary surface-mapped image data, such as diffuse color textures and fine-scale geometric details in normal maps that typically require a mapping of the 3D surface onto a plane, i.e., a surface parameterization; implicit representations, on the other hand, cannot be easily textured due to the lack of a configurable surface parameterization. Inspired by this digital content authoring methodology, we design a neural network architecture that implicitly encodes the underlying surface parameterization suitable for appearance data. As such, our model remains compatible with existing mesh-based digital content with appearance data. Motivated by recent work that overfits compact networks to individual 3D objects, we present a new weight-encoded neural implicit representation that extends the capability of neural implicit surfaces to enable various common and important applications of texture mapping. Our method outperforms reasonable baselines and state-of-the-art alternatives.
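At the level of detail the abstract gives, the idea can be summarized as one network that jointly predicts geometry and a UV parameterization, so that conventional surface-mapped images remain usable. A hedged sketch in our own notation, not the paper's:

```latex
% Illustrative notation only; symbols are not taken from the paper.
f_\theta(\mathbf{x}) = \bigl(s(\mathbf{x}),\ \mathbf{u}(\mathbf{x})\bigr), \qquad
\mathcal{S} = \{\, \mathbf{x} \in \mathbb{R}^3 \mid s(\mathbf{x}) = 0 \,\}, \qquad
\mathbf{c}(\mathbf{x}) = T\bigl(\mathbf{u}(\mathbf{x})\bigr)
```

Here s(x) is the implicit geometry (for instance a signed distance), u(x) in [0,1]^2 is the learned surface parameterization, and T is an ordinary appearance asset such as a diffuse texture or normal map, which is what keeps the representation compatible with existing mesh-based content.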
Citations: 2