
Latest publications: Proceedings of the 29th annual conference on Computer graphics and interactive techniques

Animation and rendering of complex water surfaces
Douglas Enright, Steve Marschner, Ronald Fedkiw
We present a new method for the animation and rendering of photo-realistic water effects. Our method is designed to produce visually plausible three dimensional effects, for example the pouring of water into a glass (see figure 1) and the breaking of an ocean wave, in a manner which can be used in a computer animation environment. In order to better obtain photorealism in the behavior of the simulated water surface, we introduce a new "thickened" front tracking technique to accurately represent the water surface and a new velocity extrapolation method to move the surface in a smooth, water-like manner. The velocity extrapolation method allows us to provide a degree of control to the surface motion, e.g. to generate a windblown look or to force the water to settle quickly. To ensure that the photorealism of the simulation carries over to the final images, we have integrated our method with an advanced physically based rendering system.
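The velocity extrapolation step can be illustrated with a much-reduced sketch. This is not the paper's 3D method; it is a hypothetical 1D toy in which air cells (where the signed distance phi is non-negative) borrow the velocity of the nearest water cell before the interface is advected:

```python
# 1D toy: constant velocity extrapolation from water into air, then
# first-order upwind advection of the level set. Grid spacing, time step,
# and the initial signed-distance profile are all illustrative choices.

def extrapolate_velocity(phi, vel):
    """Copy the velocity of the nearest wet cell (phi < 0) into air cells,
    so the surface can be moved with a smooth, water-like velocity field."""
    n = len(phi)
    wet = [i for i in range(n) if phi[i] < 0.0]
    out = list(vel)
    for i in range(n):
        if phi[i] >= 0.0:                       # air cell: no valid velocity
            nearest = min(wet, key=lambda j: abs(i - j))
            out[i] = vel[nearest]               # constant extrapolation
    return out

def advect(phi, vel, dt, dx):
    """First-order upwind advection of the level set: phi_t + u * phi_x = 0."""
    n = len(phi)
    new = list(phi)
    for i in range(1, n - 1):
        dphi = (phi[i] - phi[i - 1]) / dx if vel[i] > 0 else (phi[i + 1] - phi[i]) / dx
        new[i] = phi[i] - dt * vel[i] * dphi
    return new

# Water occupies the left half; velocity is 1 inside the water, undefined in air.
dx = 0.1
phi = [i * dx - 0.5 for i in range(11)]         # zero crossing at x = 0.5
vel = [1.0 if p < 0 else 0.0 for p in phi]
vel = extrapolate_velocity(phi, vel)
phi = advect(phi, vel, dt=0.05, dx=dx)
# The zero crossing (the water surface) moves right by vel * dt = 0.05.
```

Because every cell near the interface now carries a valid, smoothly varying velocity, the zero level set moves coherently instead of stalling at the air boundary.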
DOI: 10.1145/566570.566645 (published 2002-07-01)
Citations: 690
Light field mapping: efficient representation and hardware rendering of surface light fields
Wei-Chao Chen, J. Bouguet, Michael H. Chu, R. Grzeszczuk
A light field parameterized on the surface offers a natural and intuitive description of the view-dependent appearance of scenes with complex reflectance properties. To enable the use of surface light fields in real-time rendering we develop a compact representation suitable for an accelerated graphics pipeline. We propose to approximate the light field data by partitioning it over elementary surface primitives and factorizing each part into a small set of lower-dimensional functions. We show that our representation can be further compressed using standard image compression techniques leading to extremely compact data sets that are up to four orders of magnitude smaller than the input data. Finally, we develop an image-based rendering method, light field mapping, that can visualize surface light fields directly from this compact representation at interactive frame rates on a personal computer. We also implement a new method of approximating the light field data that produces positive only factors allowing for faster rendering using simpler graphics hardware than earlier methods. We demonstrate the results for a variety of non-trivial synthetic scenes and physical objects scanned through 3D photography.
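The factorization step can be sketched in miniature. The snippet below is an illustrative rank-1 alternating-least-squares factorization with non-negativity clamping, standing in for the paper's per-primitive factorization; positive-only factors are what allow rendering on simple graphics hardware, since textures cannot store negative values:

```python
# Illustrative sketch: approximate a small (view x texel) sample matrix M
# as an outer product of two non-negative vectors, M[i][j] ~ w[i] * h[j].
# Parameter names and the iteration count are arbitrary choices.

def rank1_positive_factors(M, iters=50):
    """Alternating least squares with clamping to keep both factors >= 0."""
    rows, cols = len(M), len(M[0])
    w = [1.0] * rows
    h = [1.0] * cols
    for _ in range(iters):
        hh = sum(x * x for x in h) or 1.0
        w = [max(0.0, sum(M[i][j] * h[j] for j in range(cols)) / hh)
             for i in range(rows)]
        ww = sum(x * x for x in w) or 1.0
        h = [max(0.0, sum(M[i][j] * w[i] for i in range(rows)) / ww)
             for j in range(cols)]
    return w, h

# A matrix that is exactly rank 1 with positive entries is recovered exactly.
a, b = [1.0, 2.0, 3.0], [4.0, 5.0]
M = [[ai * bj for bj in b] for ai in a]
w, h = rank1_positive_factors(M)
err = max(abs(M[i][j] - w[i] * h[j]) for i in range(3) for j in range(2))
```

Real surface light field data is only approximately low rank, so several such terms are summed per surface primitive; the compact factors are then stored as textures.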
DOI: 10.1145/566570.566601 (published 2002-07-01)
Citations: 257
Image-based 3D photography using opacity hulls
W. Matusik, H. Pfister, A. Ngan, P. Beardsley, R. Ziegler, L. McMillan
We have built a system for acquiring and displaying high quality graphical models of objects that are impossible to scan with traditional scanners. Our system can acquire highly specular and fuzzy materials, such as fur and feathers. The hardware set-up consists of a turntable, two plasma displays, an array of cameras, and a rotating array of directional lights. We use multi-background matting techniques to acquire alpha mattes of the object from multiple viewpoints. The alpha mattes are used to construct an opacity hull. The opacity hull is a new shape representation, defined as the visual hull of the object with view-dependent opacity. It enables visualization of complex object silhouettes and seamless blending of objects into new environments. Our system also supports relighting of objects with arbitrary appearance using surface reflectance fields, a purely image-based appearance representation. Our system is the first to acquire and render surface reflectance fields under varying illumination from arbitrary viewpoints. We have built three generations of digitizers with increasing sophistication. In this paper, we present our results from digitizing hundreds of models.
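Multi-background matting recovers alpha from two shots of the same object over different known backgrounds: each pixel obeys I_k = F + (1 - alpha) * B_k, so subtracting the two observations isolates alpha. A single-channel, per-pixel sketch (the actual system works on full images from many viewpoints):

```python
# Triangulation matting for one pixel and one color channel.
# i1, i2: observed values over backgrounds b1, b2 (all known).

def alpha_from_two_backgrounds(i1, i2, b1, b2):
    """Solve I_k = F + (1 - alpha) * B_k for alpha; F cancels out."""
    denom = b1 - b2
    if abs(denom) < 1e-9:
        raise ValueError("backgrounds must differ at this pixel")
    a = 1.0 - (i1 - i2) / denom
    return min(1.0, max(0.0, a))        # clamp against noise

# Semi-transparent pixel: F = 0.3, alpha = 0.5, backgrounds 1.0 and 0.0
# give observations 0.8 and 0.3, from which alpha = 0.5 is recovered.
a_semi = alpha_from_two_backgrounds(0.8, 0.3, 1.0, 0.0)
# Fully opaque pixel: the observation is independent of the background.
a_opaque = alpha_from_two_backgrounds(0.6, 0.6, 1.0, 0.0)
```

Repeating this from many viewpoints yields the view-dependent alpha mattes from which the opacity hull is built.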
DOI: 10.1145/566570.566599 (published 2002-07-01)
Citations: 242
Transferring color to greyscale images
Tom Welsh, M. Ashikhmin, K. Mueller
We introduce a general technique for "colorizing" greyscale images by transferring color between a source, color image and a destination, greyscale image. Although the general problem of adding chromatic values to a greyscale image has no exact, objective solution, the current approach attempts to provide a method to help minimize the amount of human labor required for this task. Rather than choosing RGB colors from a palette to color individual components, we transfer the entire color "mood" of the source to the target image by matching luminance and texture information between the images. We choose to transfer only chromatic information and retain the original luminance values of the target image. Further, the procedure is enhanced by allowing the user to match areas of the two images with rectangular swatches. We show that this simple technique can be successfully applied to a variety of images and video, provided that texture and luminance are sufficiently distinct. The images generated demonstrate the potential and utility of our technique in a diverse set of application domains.
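The luminance-matching transfer can be sketched per pixel. This toy version matches on luminance alone (the paper also uses neighborhood texture statistics) and assumes pixels are given in a decorrelated (L, a, b)-style space, where chromatic channels can be copied independently of luminance:

```python
# Toy colorization: remap the source luminance distribution onto the
# target's, then copy (a, b) from the closest-luminance source pixel.

def mean_std(xs):
    m = sum(xs) / len(xs)
    s = (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5
    return m, s

def colorize(source_lab, target_lum):
    """source_lab: list of (L, a, b) pixels; target_lum: greyscale L values.
    Returns (L, a, b) per target pixel: the target keeps its own luminance,
    and chromatic channels come from the luminance-matched source pixel."""
    ms, ss = mean_std([p[0] for p in source_lab])
    mt, st = mean_std(target_lum)
    # Remap source luminances into the target's range before matching.
    remapped = [((p[0] - ms) / (ss or 1.0)) * st + mt for p in source_lab]
    out = []
    for L in target_lum:
        i = min(range(len(remapped)), key=lambda k: abs(remapped[k] - L))
        out.append((L, source_lab[i][1], source_lab[i][2]))
    return out

# Dark source pixel carries one chroma, bright pixel another; the dark and
# bright target pixels pick them up respectively.
result = colorize([(0.2, 10.0, -5.0), (0.8, -3.0, 7.0)], [0.1, 0.9])
```

Keeping the target's own luminance, as the abstract notes, is what preserves the greyscale image's structure under the transferred "mood".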
DOI: 10.1145/566570.566576 (published 2002-07-01)
Citations: 890
Gradient domain high dynamic range compression
Raanan Fattal, Dani Lischinski, M. Werman
We present a new method for rendering high dynamic range images on conventional displays. Our method is conceptually simple, computationally efficient, robust, and easy to use. We manipulate the gradient field of the luminance image by attenuating the magnitudes of large gradients. A new, low dynamic range image is then obtained by solving a Poisson equation on the modified gradient field. Our results demonstrate that the method is capable of drastic dynamic range compression, while preserving fine details and avoiding common artifacts, such as halos, gradient reversals, or loss of local contrast. The method is also able to significantly enhance ordinary images by bringing out detail in dark regions.
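In 1D the whole pipeline reduces to a few lines, because re-integrating a gradient field is just a running sum rather than a Poisson solve. The sketch below uses a simplified, single-scale version of the paper's attenuation factor (alpha/|g|)(|g|/alpha)^beta, which shrinks log-luminance gradients larger than alpha while roughly preserving small ones; the parameter values are illustrative:

```python
import math

def compress_dynamic_range(lum, alpha=0.1, beta=0.85):
    """1D gradient-domain compression of a row of positive luminances."""
    loglum = [math.log(x) for x in lum]
    grads = [loglum[i + 1] - loglum[i] for i in range(len(loglum) - 1)]
    scaled = []
    for g in grads:
        if abs(g) < 1e-9:
            scaled.append(g)
        else:
            # Large |g| -> scale < 1 (attenuated); small |g| -> scale ~ 1.
            scale = (alpha / abs(g)) * (abs(g) / alpha) ** beta
            scaled.append(g * scale)
    out = [loglum[0]]
    for g in scaled:
        out.append(out[-1] + g)     # 1D "Poisson solve" is a running sum
    return [math.exp(v) for v in out]

# A 100:1 step in luminance comes out roughly 13:1 with these parameters.
row = compress_dynamic_range([1.0, 100.0])
```

In 2D the attenuated gradient field is generally no longer the gradient of any image, which is why the paper recovers the output by solving a Poisson equation (a least-squares fit to the modified field).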
DOI: 10.1145/566570.566573 (published 2002-07-01)
Citations: 1467
Dual contouring of hermite data
T. Ju, Frank Losasso, S. Schaefer, J. Warren
This paper describes a new method for contouring a signed grid whose edges are tagged by Hermite data (i.e., exact intersection points and normals). This method avoids the need to explicitly identify and process "features" as required in previous Hermite contouring methods. Using a new, numerically stable representation for quadratic error functions, we develop an octree-based method for simplifying contours produced by this method. We next extend our contouring method to these simplified octrees. This new method imposes no constraints on the octree (such as being a restricted octree) and requires no "crack patching". We conclude with a simple test for preserving the topology of the contour during simplification.
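The quadratic error function (QEF) at the heart of the method picks, for each cell, the vertex x minimising sum_i (n_i . (x - p_i))^2 over the cell's Hermite samples (p_i, n_i). A 2D sketch that solves the normal equations directly (the paper uses a numerically stabler QR-based representation of the same function):

```python
# Minimise E(x) = sum_i (n_i . (x - p_i))^2 over 2D positions x by solving
# the 2x2 normal equations A^T A x = A^T b, where row i of A is n_i and
# b_i = n_i . p_i.

def qef_minimize_2d(points, normals):
    a11 = a12 = a22 = b1 = b2 = 0.0
    for (px, py), (nx, ny) in zip(points, normals):
        d = nx * px + ny * py
        a11 += nx * nx; a12 += nx * ny; a22 += ny * ny
        b1 += nx * d;   b2 += ny * d
    det = a11 * a22 - a12 * a12
    if abs(det) < 1e-12:
        # Parallel normals: E has a whole line of minimisers; a practical
        # implementation falls back to e.g. the mean of the sample points.
        raise ValueError("degenerate normals")
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a12 * b1) / det
    return x, y

# Two edge intersections whose tangent planes are x = 0.3 and y = 0.7:
# the QEF minimiser reproduces the sharp corner at (0.3, 0.7).
vx, vy = qef_minimize_2d([(0.3, 0.0), (0.0, 0.7)], [(1.0, 0.0), (0.0, 1.0)])
```

Because the minimiser snaps to where the tangent planes meet, sharp features emerge automatically, without the explicit feature detection earlier methods required.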
DOI: 10.1145/566570.566586 (published 2002-07-01)
Citations: 722
Physically based modeling and animation of fire
Duc Quang Nguyen, Ronald Fedkiw, H. Jensen
We present a physically based method for modeling and animating fire. Our method is suitable for both smooth (laminar) and turbulent flames, and it can be used to animate the burning of either solid or gas fuels. We use the incompressible Navier-Stokes equations to independently model both vaporized fuel and hot gaseous products. We develop a physically based model for the expansion that takes place when a vaporized fuel reacts to form hot gaseous products, and a related model for the similar expansion that takes place when a solid fuel is vaporized into a gaseous state. The hot gaseous products, smoke and soot rise under the influence of buoyancy and are rendered using a blackbody radiation model. We also model and render the blue core that results from radicals in the chemical reaction zone where fuel is converted into products. Our method allows the fire and smoke to interact with objects, and flammable objects can catch on fire.
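Blackbody rendering starts from Planck's law for the spectral radiance of a hot emitter. A minimal helper using standard physical constants (mapping radiance to displayable RGB is a separate step the paper also covers):

```python
import math

def planck_radiance(wavelength_m, temp_k):
    """Planck's law: spectral radiance of a blackbody, in W / (sr * m^3)."""
    h = 6.62607015e-34    # Planck constant, J s
    c = 2.99792458e8      # speed of light, m / s
    kb = 1.380649e-23     # Boltzmann constant, J / K
    return (2.0 * h * c * c / wavelength_m ** 5) / \
        math.expm1(h * c / (wavelength_m * kb * temp_k))

# Radiance at 550 nm (green) for two flame-like temperatures.
r_2000 = planck_radiance(550e-9, 2000.0)
r_3000 = planck_radiance(550e-9, 3000.0)
```

Hotter regions radiate more at every wavelength, and the blue-to-red ratio of the spectrum rises with temperature, which is consistent with the hottest part of a flame appearing bluish-white while cooler soot glows orange.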
DOI: 10.1145/566570.566643 (published 2002-07-01)
Citations: 389