
Proceedings of the 11th European Conference on Visual Media Production: Latest Publications

Rerendering landscape photographs
Pub Date: 2014-11-13 DOI: 10.1145/2668904.2668942
Pu Wang, Diana Bicazan, A. Ghosh
We present a practical approach for realistic rerendering of landscape photographs. We extract a view-dependent depth map from a single input landscape image by examining global and local pixel color distributions, and demonstrate applications of depth-dependent rendering such as novel viewpoints, digital refocusing, and dehazing. We also present a simple approach to relighting the input landscape photograph under novel sky illumination. Here, we assume diffuse reflectance and relight landscapes by estimating the irradiance due to the sky in the input photograph. Finally, we also take into account specular reflections on water surfaces, which are common in landscape photography, and demonstrate a semi-automatic process for relighting scenes with still water.
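The dehazing and relighting steps rest on a standard image-formation model: observed color I = J*t + A*(1 - t), with transmission t = exp(-beta*d) linking haze to depth. Below is a minimal Python sketch of that model; the dark-channel transmission estimate is a stand-in for the paper's color-distribution analysis, and the constants (patch size, 0.95, beta) and the reuse of the airlight as the old sky irradiance are assumptions.

```python
# Sketch of depth-from-haze and diffuse sky relighting,
# assuming linear RGB images with values in [0, 1].
import numpy as np
from scipy.ndimage import minimum_filter

def transmission_from_haze(img, airlight, patch=15):
    """Estimate per-pixel transmission t(x) from I = J*t + A*(1 - t).

    A dark-channel-style local minimum stands in for the paper's
    colour-distribution analysis."""
    dark = minimum_filter((img / airlight).min(axis=2), size=patch)
    return np.clip(1.0 - 0.95 * dark, 0.05, 1.0)

def relative_depth(transmission, beta=1.0):
    """Invert t = exp(-beta*d); depth is recovered up to the scale beta."""
    return -np.log(transmission) / beta

def relight_diffuse(img, transmission, airlight, new_sky):
    """Recover scene radiance J, then rescale it to a new sky irradiance.

    Assumes diffuse reflectance, so radiance scales linearly with the
    irradiance ratio; the airlight doubles as the old sky irradiance here."""
    t = transmission[..., None]
    J = (img - np.asarray(airlight) * (1.0 - t)) / t
    return np.clip(J * (np.asarray(new_sky) / np.asarray(airlight)), 0.0, 1.0)
```

The relative depth can then drive digital refocusing or small viewpoint shifts, while suppressing the airlight term before recombination yields the dehazed image.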
Citations: 0
Bullet time using multi-viewpoint robotic camera system
Pub Date: 2014-11-13 DOI: 10.1145/2668904.2668932
Kensuke Ikeya, K. Hisatomi, Miwa Katayama, T. Mishina, Y. Iwadate
The main purpose of our research was to generate bullet time for dynamically moving subjects in 3D space, or for multiple shots of subjects within 3D space. In addition, we wanted to create a practical and generic bullet-time system that required little advance preparation and generated bullet time in semi-real time after the subjects had been captured, enabling replays in sports broadcasting. We developed a multi-viewpoint robotic camera system to achieve this. In our system, a cameraman controls multi-viewpoint robotic cameras to simultaneously focus on subjects in 3D space and captures multi-viewpoint videos. Bullet time is generated from these videos in semi-real time by correcting directional control errors, caused by operating errors by the cameraman or mechanical control errors in the robotic cameras, using directional control of virtual cameras based on projective transformation. The experimental results revealed that our system was able to generate bullet time for a dynamically moving player, or for multiple shots of players, in 3D space in volleyball, gymnastics, and basketball in about a minute. Advance system preparation, i.e., camera calibration, took only about five minutes. Our system was used in the "ISU Grand Prix of Figure Skating 2013/2014, NHK Trophy" live sports program in November 2013: the bullet time of a dynamically moving skater on a large skating rink was generated in semi-real time and broadcast in a replay just after the competition. Thus, we confirmed that our bullet-time system is practical and generic.
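The directional correction reduces to a rotation-only homography per camera: re-aiming a view by rotation R warps its image by H = K R K^-1, where K is the camera intrinsic matrix. A minimal Python/OpenCV sketch of that virtual-camera re-centering follows; the intrinsics and the target's position in camera coordinates are assumed to be known from calibration and tracking, and this illustrates the general technique rather than the authors' exact pipeline.

```python
# Sketch of projective-transformation-based directional control:
# warp a frame so a tracked 3D target lands at the image centre.
import numpy as np
import cv2

def rotation_between(a, b):
    """Smallest rotation matrix taking unit vector a onto unit vector b.

    Rodrigues-style construction; assumes a and b are not antiparallel."""
    v = np.cross(a, b)
    c = float(np.dot(a, b))
    vx = np.array([[0, -v[2], v[1]],
                   [v[2], 0, -v[0]],
                   [-v[1], v[0], 0]])
    return np.eye(3) + vx + vx @ vx / (1.0 + c)

def recenter_on_target(frame, K, target_cam):
    """Re-aim a captured frame so target_cam sits on the optical axis.

    This corrects residual aiming error left by the robotic mount,
    standing in for the paper's virtual-camera directional control."""
    d = target_cam / np.linalg.norm(target_cam)
    R = rotation_between(d, np.array([0.0, 0.0, 1.0]))  # aim d at the z axis
    H = K @ R @ np.linalg.inv(K)                        # rotation-only homography
    h, w = frame.shape[:2]
    return cv2.warpPerspective(frame, H, (w, h))
```

Applying this per camera, then playing the corrected frames in viewpoint order, yields the bullet-time sweep without physically perfect aiming.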
Citations: 7
Web-based visualisation of on-set point cloud data
Pub Date: 2014-11-13 DOI: 10.1145/2668904.2668937
A. Evans, J. Agenjo, J. Blat
In this paper we present a system for progressive encoding, storage, transmission, and web-based visualization of large point cloud datasets. Point cloud data is typically recorded on-set during a film production, and is later used to assist with various stages of the post-production process. Remote visualization of this data (on set or off, via desktop or mobile device) can be difficult, as the volume of data can take a long time to transfer and can easily overwhelm the memory of a typical 3D web or mobile client. Yet web-based visualization of this data opens up many possibilities for remote and collaborative workflow models. To support such workflows, we present a system that progressively transfers point cloud data to a WebGL-based client, updating the visualisation as more information is downloaded and maintaining a coherent structure at lower resolutions. Existing work on progressive transfer of 3D assets has focused on well-formed triangle meshes, and is thus unsuitable for raw LIDAR data. Our work addresses this challenge directly, and as such its principal contribution is that it is the first published method for progressive visualization of point cloud data via the web.
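The key property is that any prefix of the stream should already be a coherent, lower-resolution version of the whole cloud. A minimal Python sketch of one server-side ordering with that property appears below; the random-permutation ordering and the chunk size are assumptions standing in for the paper's actual encoding.

```python
# Sketch of a progressive point-cloud stream: reorder once so every
# prefix is a spatially uniform subsample, then send fixed-size chunks.
import numpy as np

def progressive_order(points, seed=0):
    """Reorder an (N, 3) float array so any prefix subsamples uniformly."""
    rng = np.random.default_rng(seed)
    return points[rng.permutation(len(points))]

def chunks(points, chunk_size=65536):
    """Yield float32 buffers ready to append to a WebGL vertex buffer."""
    ordered = progressive_order(points).astype(np.float32)
    for start in range(0, len(ordered), chunk_size):
        yield ordered[start:start + chunk_size].tobytes()
```

A client that appends each arriving chunk to its vertex buffer and redraws sees the cloud refine from coarse to full resolution as the download progresses.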
Citations: 16
Frequency-based controls for terrain editing
Pub Date: 2014-11-13 DOI: 10.1145/2668904.2668944
Gwyneth Bradbury, I. Choi, C. Amati, Kenny Mitchell, T. Weyrich
Authoring virtual terrains can be a challenging task. Procedural and stochastic methods for automated terrain generation produce plausible results but lack intuitive control of the terrain features, while data-driven methods offer more creative control at the cost of a limited feature set, higher storage requirements, and blending artefacts. Moreover, artists often prefer a workflow involving varied reference material such as photographs, concept art, elevation maps, and satellite images, for whose incorporation commercial content-creation tools offer little support. We present a sketch-based toolset for asset-guided creation and intuitive editing of virtual terrains, allowing the manipulation of both elevation maps and 3D meshes, and exploiting a layer-based interface. We employ a frequency-band subdivision of elevation maps so that the appropriate editing tool can be used for each level of detail. Using our system, we show that a user can start from various input types, such as storyboard sketches, photographs, or height maps, to easily develop and customise a virtual terrain.
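The frequency-band subdivision can be illustrated with a difference-of-Gaussians decomposition: each layer holds the detail between two blur radii, the layers sum back exactly to the original height map, and an edit touches only its own band. The sigma schedule below is an assumption; the paper's actual band boundaries are not reproduced here.

```python
# Sketch of frequency-band subdivision of a height map for
# level-of-detail terrain editing.
import numpy as np
from scipy.ndimage import gaussian_filter

def frequency_bands(height, sigmas=(1, 4, 16, 64)):
    """Split a 2D height map into band-pass layers plus a residual.

    Summing the returned list reproduces the input exactly."""
    bands, previous = [], height
    for s in sigmas:
        low = gaussian_filter(height, sigma=s)
        bands.append(previous - low)   # detail between consecutive cut-offs
        previous = low
    bands.append(previous)             # coarsest remaining structure
    return bands

def edit_band(height, band_index, delta, sigmas=(1, 4, 16, 64)):
    """Apply a sketch-derived edit `delta` to one level of detail only."""
    bands = frequency_bands(height, sigmas)
    bands[band_index] = bands[band_index] + delta
    return sum(bands)
```

Editing band 0 nudges fine surface roughness, while the same delta applied to the residual reshapes whole mountains, which is what lets each editing tool operate at its own scale.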
Citations: 6
Line-preserving hole-filling for 2D-to-3D conversion
Pub Date: 2014-11-13 DOI: 10.1145/2668904.2668931
Nils Plath, Lutz Goldmann, A. Nitsch, S. Knorr, T. Sikora
Many 2D-to-3D conversion techniques rely on image-based rendering methods to synthesize 3D views from monoscopic images. This leads to holes in the generated views where previously occluded objects become visible but no texture information is available. Approaches attempting to alleviate the effects of these artifacts are referred to as hole-filling. This paper proposes a method that determines a non-uniform deformation of the stereoscopic view such that no holes are visible. Additionally, an energy term is devised that prevents straight lines in the input image from being bent by the non-uniform image warp. This is achieved by constructing a triangle mesh that approximates the depth map of the input image and by integrating a set of detected lines into it. The line information is incorporated into the underlying optimization problem in order to prevent bending of the lines. Evaluation of the proposed algorithm on a comprehensive dataset with a variety of scenes shows that holes are efficiently filled without obvious background distortions.
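The line-preserving term can be any penalty that vanishes exactly when the warped samples of a detected line stay collinear. A minimal Python sketch of one such penalty is below; it would enter the total objective as one weighted term alongside the data and smoothness energies, and both the residual choice and the weighting are assumptions rather than the paper's exact formulation.

```python
# Sketch of a collinearity penalty for one detected line: for each
# consecutive triple of warped samples (a, b, c), the residual is
# twice the signed area of the triangle they span, which is zero
# exactly when they remain on a straight line.
import numpy as np

def line_bending_energy(q):
    """Bending energy for an (N, 2) array of warped samples of one line."""
    a, b, c = q[:-2], q[1:-1], q[2:]
    area2 = ((c[:, 0] - a[:, 0]) * (b[:, 1] - a[:, 1])
             - (c[:, 1] - a[:, 1]) * (b[:, 0] - a[:, 0]))
    return float((area2 ** 2).sum())

# In the full warp this would be one weighted term of the objective,
# e.g. E = E_data + lambda_s * E_smooth + lambda_l * (sum of line energies),
# with the samples q interpolated from the triangle-mesh vertices.
```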
Citations: 3
A comparison of night vision simulation methods for video
Pub Date: 2014-11-13 DOI: 10.1145/2668904.2668945
R. Wanat, Rafał K. Mantiuk
The properties of human vision change depending on the absolute luminance of the perceived scene. The change is most noticeable at night, when cones lose their sensitivity and rods activate. This change is imitated in video footage using various tricks and filters. In this study, we compared four algorithms that can realistically simulate the appearance of night scenes on a standard display. We conducted a subjective evaluation study comparing the results of night vision simulation with reference footage dimmed using a photographic filter, to determine which algorithm offers the greatest accuracy. The results of our study can be used in computer graphics rendering, to apply the most realistic simulation of night vision to rendered night scenes, or in photography, to reproduce photographs taken at night as closely as possible to how the human eye would see them.
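Night-vision filters of the kind compared here typically combine three ingredients: collapsing color toward a rod response, shifting the hue toward blue (the Purkinje effect), and blurring to mimic the rods' lower acuity. A minimal Python sketch follows; the rod weights, tint, and blur radius are assumptions, and the four evaluated algorithms differ precisely in such choices.

```python
# Sketch of a scotopic ("night vision") filter for linear-RGB frames.
import numpy as np
from scipy.ndimage import gaussian_filter

def simulate_night(img, mesopic=0.0, blur_sigma=2.0):
    """Map a linear-RGB frame toward its night-time appearance.

    mesopic = 1 returns the input unchanged; 0 gives full night vision."""
    rod = img @ np.array([0.1, 0.5, 0.4])                # assumed rod weights
    night = rod[..., None] * np.array([0.6, 0.7, 1.0])   # bluish monochrome
    night = gaussian_filter(night, sigma=(blur_sigma, blur_sigma, 0))
    return mesopic * img + (1.0 - mesopic) * night
```

Blending with the mesopic parameter allows a smooth transition between daylight and full scotopic rendering as scene luminance falls.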
Citations: 2
Proceedings of the 11th European Conference on Visual Media Production
{"title":"Proceedings of the 11th European Conference on Visual Media Production","authors":"","doi":"10.1145/2668904","DOIUrl":"https://doi.org/10.1145/2668904","url":null,"abstract":"","PeriodicalId":401915,"journal":{"name":"Proceedings of the 11th European Conference on Visual Media Production","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127464889","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0