
Latest publications from the 2018 3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON)

DYNAMIC FRACTURING OF 3D MODELS FOR REAL TIME COMPUTER GRAPHICS
Yousif Ali Hassan Najim, G. Triantafyllidis, G. Palamas
This work proposes a method of fracturing one-sided 3D objects in real time using standard GPU shaders. Existing implementations either pre-fracture objects and replace them at run time, or precompute fracture patterns and use them to fracture objects in response to user interaction. In this article we describe a novel method in which the fracturing calculations are handled by the GPU, with only the initial positions of the fracture fields handled by the CPU. To obtain higher-resolution fractures, scalable tessellation is also implemented. As a result, this method allows fast fracturing that can be used in real-time applications such as video games.
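As a rough illustration of the division of work described above (the CPU only seeds the fracture fields, the per-vertex fracture work runs in GPU shaders), here is a minimal NumPy sketch of a Voronoi-style fracture assignment. The function names, the displacement rule, and the CPU-side emulation of the shader stage are assumptions made for illustration, not the authors' implementation.

```python
# Hedged sketch of Voronoi-style fracturing; the paper's per-vertex work runs in
# GPU shaders, so everything below is an illustrative CPU stand-in.
import numpy as np

def place_fracture_fields(impact_point, n_fields=8, radius=0.5, seed=0):
    """CPU step: scatter fracture-field centers around the impact point."""
    rng = np.random.default_rng(seed)
    offsets = rng.normal(scale=radius, size=(n_fields, 3))
    return impact_point + offsets

def fracture_vertices(vertices, field_centers, strength=0.2):
    """Emulates the per-vertex shader work: assign each vertex to its nearest
    fracture field and push each piece away from the fields' centroid."""
    d = np.linalg.norm(vertices[:, None, :] - field_centers[None, :, :], axis=2)
    piece_id = d.argmin(axis=1)                       # Voronoi-style piece label
    centroid = field_centers.mean(axis=0)
    directions = field_centers[piece_id] - centroid   # per-piece displacement direction
    norms = np.linalg.norm(directions, axis=1, keepdims=True) + 1e-8
    displaced = vertices + strength * directions / norms
    return displaced, piece_id

# Toy usage: a flat one-sided quad grid standing in for a tessellated mesh.
grid = np.stack(np.meshgrid(np.linspace(0, 1, 16), np.linspace(0, 1, 16)), -1)
verts = np.concatenate([grid.reshape(-1, 2), np.zeros((256, 1))], axis=1)
fields = place_fracture_fields(np.array([0.5, 0.5, 0.0]))
new_verts, pieces = fracture_vertices(verts, fields)
```

In the paper, the equivalent of `fracture_vertices` would run per vertex on the GPU, with tessellation increasing vertex density where finer fracture resolution is needed.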
DOI: 10.1109/3DTV.2018.8478546
Citations: 2
VIEWING SIMULATION OF INTEGRAL IMAGING DISPLAY BASED ON WAVE OPTICS
U. Akpinar, E. Sahin, A. Gotchev
We present an accurate model of integral imaging display based on wave optics. The model enables accurate characterization of the display through simulated perceived images by the human visual system. Thus, it is useful to investigate the capabilities of the display in terms of various quality factors such as depth of field and resolution, as well as delivering visual cues such as focus. Furthermore, due to the adopted wave optics formalism, simulation and analysis of more advanced techniques such as wavefront coding for increased depth of field are also possible.
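A wave-optics display model of this kind ultimately rests on numerical field propagation. Below is a minimal angular-spectrum propagation sketch in NumPy, included only as an example of the sort of building block such a simulation needs; the grid size, wavelength, and propagation distance are arbitrary assumptions and none of it is taken from the paper.

```python
# Minimal angular-spectrum propagation sketch; parameters are placeholders.
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex field sampled at pitch dx over distance z."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)   # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Toy usage: a square aperture propagated 5 mm at 532 nm with 2 micron sampling.
n, dx, wl = 512, 2e-6, 532e-9
aperture = np.zeros((n, n), dtype=complex)
aperture[n//2-40:n//2+40, n//2-40:n//2+40] = 1.0
observed_intensity = np.abs(angular_spectrum_propagate(aperture, wl, dx, 5e-3)) ** 2
```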
DOI: 10.1109/3DTV.2018.8478568
Citations: 3
ADAPTIVE COLOR CORRECTION IN VIRTUAL VIEW SYNTHESIS
A. Dziembowski, M. Domański
This paper presents an adaptive color correction method for virtual view synthesis. It addresses a typical problem in free navigation systems: different illumination in the views captured by the different cameras acquiring the scene. The proposed technique adjusts the local color characteristics of objects visible in two real views, which significantly reduces the number and visibility of color artifacts in the virtual view. The proposed method was tested on 12 multiview test sequences; the results show that the color correction improves virtual view quality as measured by PSNR, SSIM, and subjective evaluation.
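To make the idea of local color adjustment concrete, here is a hedged sketch that shifts each block of a source view toward the local mean and standard deviation of a reference view. The block size and the mean/std transfer rule are illustrative assumptions; the adaptive method in the paper is more elaborate.

```python
# Blockwise color-statistics transfer between two roughly aligned views.
# An illustrative assumption, not the paper's adaptive algorithm.
import numpy as np

def local_color_correct(src, ref, block=32, eps=1e-6):
    """Shift each block of `src` toward the local mean/std of `ref`.
    Both inputs are HxWx3 arrays in the 0..255 range."""
    out = src.astype(np.float64).copy()
    ref = ref.astype(np.float64)
    h, w, _ = src.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            s = out[y:y+block, x:x+block]
            r = ref[y:y+block, x:x+block]
            mu_s, std_s = s.mean((0, 1)), s.std((0, 1))
            mu_r, std_r = r.mean((0, 1)), r.std((0, 1))
            out[y:y+block, x:x+block] = (s - mu_s) * (std_r / (std_s + eps)) + mu_r
    return np.clip(out, 0, 255)
```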
DOI: 10.1109/3DTV.2018.8478439
Citations: 6
LIFE: A FLEXIBLE TESTBED FOR LIGHT FIELD EVALUATION
Elijs Dima, Mårten Sjöström, R. Olsson, Martin Kjellqvist, Lukasz Litwic, Zhi Zhang, Lennart Rasmusson, Lars Flodén
Recording and imaging the 3D world has led to the use of light fields. Capturing, distributing and presenting light field data is challenging, and requires an evaluation platform. We define a framework for real-time processing, and present the design and implementation of a light field evaluation system. In order to serve as a testbed, the system is designed to be flexible, scalable, and able to model various end-to-end light field systems. This flexibility is achieved by encapsulating processes and devices in discrete framework systems. The modular capture system supports multiple camera types, general-purpose data processing, and streaming to network interfaces. The cloud system allows for parallel transcoding and distribution of streams. The presentation system encapsulates rendering and display specifics. The real-time ability was tested in a latency measurement; the capture and presentation systems process and stream frames within a 40 ms limit.
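The modular, stage-based structure described above (capture, cloud, presentation) can be pictured as a simple pipeline with a per-frame latency budget. The toy sketch below invents the stage names and interfaces purely for illustration; it is not the LIFE framework's API.

```python
# Toy stage-based pipeline with a 40 ms per-frame budget check; names are invented.
import time
from typing import Callable, List

class Stage:
    def __init__(self, name: str, process: Callable[[bytes], bytes]):
        self.name, self.process = name, process

class Pipeline:
    def __init__(self, stages: List[Stage], budget_s: float = 0.040):
        self.stages, self.budget_s = stages, budget_s

    def run_frame(self, frame: bytes) -> bytes:
        start = time.perf_counter()
        for stage in self.stages:
            frame = stage.process(frame)
        elapsed = time.perf_counter() - start
        if elapsed > self.budget_s:
            print(f"frame missed the {self.budget_s*1000:.0f} ms budget: {elapsed*1000:.1f} ms")
        return frame

# Usage: identity stages standing in for capture, transcoding and rendering.
pipeline = Pipeline([Stage(n, lambda f: f) for n in ("capture", "cloud", "present")])
pipeline.run_frame(b"\x00" * 1024)
```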
DOI: 10.1109/3DTV.2018.8478550
Citations: 2
CHANNEL-MISMATCH DETECTION ALGORITHM FOR STEREOSCOPIC VIDEO USING CONVOLUTIONAL NEURAL NETWORK
S. Lavrushkin, D. Vatolin
Channel mismatch (the result of swapping left and right views) is a 3D-video artifact that can cause major viewer discomfort. This work presents a novel high-accuracy method of channel-mismatch detection. In addition to the features described in our previous work, we introduce a new feature based on a convolutional neural network; it predicts channel-mismatch probability on the basis of the stereoscopic views and corresponding disparity maps. A logistic-regression model trained on the described features makes the final prediction. We tested this model on a set of 900 stereoscopic-video scenes, and it outperformed existing channel-mismatch detection methods that previously served in analyses of full-length stereoscopic movies.
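The fusion step described above, a CNN-derived probability combined with hand-crafted features in a logistic-regression model, can be sketched as follows. The features, labels, and the stand-in for the CNN output are placeholders, not the authors' model or data.

```python
# Hedged sketch: fuse a CNN-derived mismatch probability with hand-crafted features
# in a logistic-regression classifier. All values are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_scenes = 900  # matches the size of the test set mentioned in the abstract

# Placeholder per-scene features: [cnn_probability, two hand-crafted cues].
X = rng.random((n_scenes, 3))
y = (X[:, 0] > 0.5).astype(int)  # toy labels standing in for ground-truth mismatch

clf = LogisticRegression().fit(X, y)
scene_features = np.array([[0.8, 0.3, 0.6]])
mismatch_probability = clf.predict_proba(scene_features)[0, 1]
```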
DOI: 10.1109/3DTV.2018.8478542
Citations: 1
DEPTH IMAGE BASED VIEW SYNTHESIS WITH MULTIPLE REFERENCE VIEWS FOR VIRTUAL REALITY
Sarah Fachada, Daniele Bonatto, Arnaud Schenkel, G. Lafruit
This paper presents a method for view synthesis from multiple views and their depth maps for free navigation in Virtual Reality with six degrees of freedom (6DoF) and 360 video (3DoF+), including the synthesis of views corresponding to stepping into or out of the scene. Such scenarios call for large-baseline view synthesis, typically going beyond the view synthesis involved in light field displays [1]. Our method accepts an unlimited number of reference views as input, instead of the usual left and right reference views. Increasing the number of reference views overcomes problems such as occlusions, surfaces tangential to the camera axis, and artifacts in low-quality depth maps. We outperform MPEG's reference software, VSRS [2], with a gain of up to 2.5 dB in PSNR when using four reference views.
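A depth-image-based renderer along these lines forward-warps each reference view into the target with a depth test and blends the results, so holes shrink as more references are added. The sketch below assumes purely horizontal disparity for simplicity, whereas the paper handles full 6DoF camera motion; it illustrates the general principle only and is neither the authors' method nor VSRS.

```python
# Minimal forward-warping sketch for depth-image-based rendering with several
# reference views; horizontal disparity only, for illustration.
import numpy as np

def warp_reference(color, depth, baseline, focal, out_color, out_depth):
    """Splat one reference view (HxWx3 color, HxW depth) into the target buffers."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    disparity = focal * baseline / np.maximum(depth, 1e-6)
    xt = np.round(xs + disparity).astype(int)
    valid = (xt >= 0) & (xt < w)
    for y, x_src, x_dst in zip(ys[valid], xs[valid], xt[valid]):
        if depth[y, x_src] < out_depth[y, x_dst]:   # keep the closest surface
            out_depth[y, x_dst] = depth[y, x_src]
            out_color[y, x_dst] = color[y, x_src]

def synthesize(references):
    """references: list of (color, depth, baseline, focal) tuples for each view."""
    h, w = references[0][1].shape
    out_color = np.zeros((h, w, 3))
    out_depth = np.full((h, w), np.inf)
    for color, depth, baseline, focal in references:
        warp_reference(color, depth, baseline, focal, out_color, out_depth)
    holes = np.isinf(out_depth)   # pixels no reference view could fill
    return out_color, holes
```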
DOI: 10.1109/3DTV.2018.8478484
Citations: 40
A NOVEL DISPARITY-ASSISTED BLOCK MATCHING-BASED APPROACH FOR SUPER-RESOLUTION OF LIGHT FIELD IMAGES
S. Farag, V. Velisavljevic
Currently available plenoptic imaging technology has limited resolution, which makes it challenging to use in applications where sharpness is essential, such as the film industry. Previous attempts to enhance the spatial resolution of plenoptic light field (LF) images were based on block and patch matching inherited from classical image super-resolution, where multiple views are treated as separate frames. In contrast to these approaches, this paper proposes a novel super-resolution technique that exploits estimated disparity information to reduce the matching area in the super-resolution process. We estimate the disparity information from the interpolated low-resolution (LR) view point images (VPs). We denote our method light field block matching super-resolution. We additionally combine the proposed super-resolution method with directionally adaptive image interpolation from [1] to preserve the sharpness of the high-resolution images. We demonstrate a steady gain in the PSNR and SSIM quality of the super-resolved images for a resolution enhancement factor of 8×8 compared to recent approaches and to our previous work [2].
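The core idea of disparity-assisted block matching, centering the per-block search in a neighboring view on the disparity-predicted position and restricting it to a small window, can be sketched as follows. The block size, window radius, and SAD cost are assumptions made for illustration only.

```python
# Hedged sketch of disparity-assisted block matching: the search in a neighboring
# view is centered on the disparity-predicted column and limited to a small window.
import numpy as np

def match_block(target, reference, y, x, disparity, block=8, radius=2):
    """Find the best-matching block in `reference` near the disparity-shifted location.
    (y, x) is assumed to leave room for a full block inside `target`."""
    patch = target[y:y+block, x:x+block]
    best_cost, best_pos = np.inf, (y, x)
    cx = int(round(x + disparity))                 # disparity-predicted column
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            yy, xx = y + dy, cx + dx
            if yy < 0 or xx < 0 or yy + block > reference.shape[0] or xx + block > reference.shape[1]:
                continue
            cand = reference[yy:yy+block, xx:xx+block]
            cost = np.abs(patch - cand).sum()      # SAD over the candidate block
            if cost < best_cost:
                best_cost, best_pos = cost, (yy, xx)
    return best_pos, best_cost
```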
DOI: 10.1109/3DTV.2018.8478627
Citations: 2