
Latest Publications from the 2021 International Conference on 3D Immersion (IC3D)

A Novel Compression Scheme Based on Hybrid Tucker-Vector Quantization Via Tensor Sketching for Dynamic Light Fields Acquired Through Coded Aperture Camera
Pub Date: 2021-12-08 DOI: 10.1109/IC3D53758.2021.9687155
Joshitha Ravishankar, Mansi Sharma, Sally Khaidem
Emerging computational light field displays are a suitable choice for the realistic presentation of 3D scenes on autostereoscopic, glasses-free platforms. However, the enormous size of light fields limits their use in streaming and 3D display applications. In this paper, we propose a novel representation, coding and streaming scheme for dynamic light fields based on a Hybrid Tucker TensorSketch Vector Quantization (HTTSVQ) algorithm. A dynamic light field can be generated from a static light field to capture a moving 3D scene. We acquire images of a dynamic light field through different coded aperture patterns and perform their low-rank approximation using our HTTSVQ scheme, followed by encoding with High Efficiency Video Coding (HEVC). The proposed single-pass coding scheme can handle tensor elements incrementally and thus enables light field data to be streamed and compressed without storing it in full. Additional HEVC encoding of the low-rank approximated images eliminates intra-frame, inter-frame and intrinsic redundancies in the light field data. Comparison with the state-of-the-art coder HEVC and its multi-view extension (MV-HEVC) shows the superior compression performance of the proposed scheme on real-world light fields.
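The HTTSVQ algorithm combines a sketching-based Tucker decomposition with vector quantization, but the paper's abstract gives no code. As a rough illustration of the Tucker low-rank step alone (not the authors' single-pass, sketching-based method; the function names and the toy tensor shape are assumptions), a minimal truncated HOSVD in NumPy might look like this:

```python
import numpy as np

def unfold(tensor, mode):
    """Mode-n unfolding of a tensor into a matrix."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def tucker_hosvd(tensor, ranks):
    """Truncated HOSVD: factor matrices from the leading left singular
    vectors of each unfolding, then the core by mode-wise projection."""
    factors = []
    for mode, r in enumerate(ranks):
        u, _, _ = np.linalg.svd(unfold(tensor, mode), full_matrices=False)
        factors.append(u[:, :r])
    core = tensor
    for mode, u in enumerate(factors):
        core = np.moveaxis(np.tensordot(u.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

def tucker_reconstruct(core, factors):
    """Multiply the core back out along every mode."""
    t = core
    for mode, u in enumerate(factors):
        t = np.moveaxis(np.tensordot(u, np.moveaxis(t, mode, 0), axes=1), 0, mode)
    return t

# Toy dynamic light field: (time, view, height, width) -- made-up sizes
lf = np.random.rand(8, 9, 32, 32)
core, factors = tucker_hosvd(lf, ranks=[4, 4, 16, 16])
recon = tucker_reconstruct(core, factors)
print("relative error:", np.linalg.norm(lf - recon) / np.linalg.norm(lf))
```

A real single-pass encoder in the spirit of the paper would replace the full SVDs with tensor sketches updated as elements stream in, and quantize the core with a learned codebook before handing the result to HEVC.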
Citations: 3
The Perceptually-Supported and the Subjectively-Preferred Viewing Distance of Projection-Based Light Field Displays
Pub Date: 2021-12-08 DOI: 10.1109/IC3D53758.2021.9687222
P. A. Kara, Mary Guindy, T. Balogh, Anikó Simon
As the research efforts and development processes behind light field visualization technologies advance, potential novel use cases emerge. These contexts of light field display utilization fundamentally depend on the distance of observation, due to the sheer technological nature of such glasses-free 3D systems. Yet, at the time of this paper, the number of works in the scientific literature that address viewing distance is rather limited, focusing solely on 3D visual experience based on angular density. Thus far, the personal preference of observers regarding viewing distance has not been considered by studies. Furthermore, the upcoming standardization efforts also necessitate research on the topic in order to coherently unify the methodologies of subjective tests. In this paper, we investigate the perceptually-supported and the subjectively-preferred viewing distance of light field visualization. We carried out a series of tests on multiple projection-based light field displays to study these distances, with the separate involvement of experts and regular test participants.
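The paper does not publish its distance formulas, but the kind of geometry involved is easy to sketch: a perceptually-supported distance is bounded by where the display's detail pitch falls below the observer's visual acuity. The sketch below computes that bound for spatial pixel pitch (an angular-ray-density version would be analogous); the screen width, resolution and 1-arcminute acuity are all assumed, hypothetical values:

```python
import math

def retinal_resolution_distance(pixel_pitch_mm, acuity_arcmin=1.0):
    """Distance (mm) at which one pixel subtends the given visual angle;
    beyond this distance individual pixels are no longer resolvable."""
    return pixel_pitch_mm / math.tan(math.radians(acuity_arcmin / 60.0))

# Hypothetical display: 1.4 m wide screen, 3840 horizontal pixels
pitch = 1400.0 / 3840                         # mm per pixel
print(f"{retinal_resolution_distance(pitch) / 1000.0:.2f} m")  # ~1.25 m
```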
Citations: 3
Adaptive Streaming and Rendering of Static Light Fields in the Web Browser
Pub Date: 2021-12-08 DOI: 10.1109/IC3D53758.2021.9687239
Hendrik Lievens, Maarten Wijnants, Brent Zoomers, J. Put, Nick Michiels, P. Quax, W. Lamotte
Static light fields are an image-based technology that allows for the photorealistic representation of inanimate objects and scenes in virtual environments. As such, static light fields have application opportunities in heterogeneous domains, including education, cultural heritage and entertainment. This paper contributes the design, implementation and performance evaluation of a web-based static light field consumption system. The proposed system allows static light field datasets to be adaptively streamed over the network and then visualized in a vanilla web browser. The performance evaluation results show that real-time consumption of static light fields at AR/VR-compatible framerates of 90 FPS or more is feasible on commercial off-the-shelf hardware. Given the ubiquitous availability of web browsers on modern consumption devices (PCs, smart TVs, head-mounted displays, etc.), our work is intended to significantly improve the accessibility and exploitation of static light field technology. The JavaScript client code is open-sourced to maximize our work's impact.
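The paper's adaptive streaming client is JavaScript and its exact adaptation heuristic is not given in the abstract. The sketch below shows one plausible rate-based rule of the kind such a client might use, written in Python for consistency with the other examples on this page; the bitrate ladder, safety margin and function name are assumptions:

```python
def pick_quality(levels_kbps, measured_kbps, safety=0.8):
    """Choose the highest bitrate level that fits within a safety
    fraction of the measured network throughput (simple rate-based ABR)."""
    budget = measured_kbps * safety
    viable = [lvl for lvl in sorted(levels_kbps) if lvl <= budget]
    return viable[-1] if viable else min(levels_kbps)

# Hypothetical light-field tile ladder (kbps) and a throughput estimate
ladder = [500, 1500, 4000, 9000]
print(pick_quality(ladder, measured_kbps=5200))  # -> 4000
```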
Citations: 1
Implementation of Multi-Focal Near-Eye Display Architecture: Optimization of Data Path
Pub Date: 2021-12-08 DOI: 10.1109/IC3D53758.2021.9687169
R. Ruskuls, K. Slics, Sandra Balode, Reinis Ozolins, E. Linina, K. Osmanis, I. Osmanis
In this work we describe the concept of a stereoscopic multi-focal head-mounted display for augmented reality applications as a means of mitigating the vergence-accommodation conflict (VAC). We investigate practical means of implementing data transfer between the rendering station and the direct control logic within the headset. We rely on a DisplayPort connection to transfer the necessary multi-focal image packets in real time, while at the receiving end the control logic is based on an FPGA architecture responsible for decoding the DisplayPort stream and reformatting the data according to the optical layout of the display. Within the design we have chosen to omit local frame buffering, which can potentially result in misrepresented data; nevertheless, this approach reduces latency by about 16 ms compared to single-frame buffering.
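The quoted ~16 ms saving is consistent with removing one whole frame of buffering at a 60 Hz refresh rate (the refresh rate is our assumption; the abstract does not state it). A one-line check:

```python
def frame_buffer_latency_ms(refresh_hz, frames_buffered=1):
    """Latency added by buffering whole frames before scan-out."""
    return frames_buffered * 1000.0 / refresh_hz

print(frame_buffer_latency_ms(60.0))  # 16.67 ms -- one 60 Hz frame period
```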
Citations: 0
From Photogrammetric Reconstruction to Immersive VR Environment
Pub Date: 2021-12-08 DOI: 10.1109/IC3D53758.2021.9687232
M. Lhuillier
There are several steps to generate a VR environment from images: choose the experimental conditions (scene, camera, trajectory, weather), take the images, reconstruct a textured 3D model using photogrammetry software, and import the 3D model into a game engine. This paper focuses on post-processing of the photogrammetry step, mostly for outdoor environments that cannot be reconstructed by UAV. Since VR needs a 3D model in a good coordinate system (with the right scale and a vertical up axis), we propose a simple method to compute one. In the experiments, we first reconstruct both urban and natural immersive environments using a helmet-mounted GoPro Max 360 camera, then import the 3D models, expressed in good coordinate systems, into Unity, and finally explore the scenes as a pedestrian would using an Oculus Quest.
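The abstract only states that the method computes a coordinate system with the right scale and a vertical axis. As a hypothetical illustration of those two ingredients (not the author's method), the sketch below rescales a reconstruction from one known real-world distance and rotates its estimated up-vector onto the engine's vertical axis (Unity's +Y); all input values are made up:

```python
import numpy as np

def rotation_aligning(u, v):
    """Rotation matrix sending unit vector u onto unit vector v
    (Rodrigues' rotation formula)."""
    u, v = u / np.linalg.norm(u), v / np.linalg.norm(v)
    k = np.cross(u, v)
    c, s = np.dot(u, v), np.linalg.norm(k)
    if s < 1e-12:
        # parallel: identity; antiparallel: 180 deg about x
        # (fine unless u happens to lie along x)
        return np.eye(3) if c > 0 else np.diag([1.0, -1.0, -1.0])
    kx = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + kx + kx @ kx * ((1 - c) / s**2)

# Hypothetical inputs: estimated up-vector of the raw reconstruction, and
# two reconstructed points whose real-world separation is known (1.0 m)
up_est = np.array([0.1, 0.97, 0.2])
p, q = np.array([0.0, 0.0, 0.0]), np.array([0.3, 0.0, 0.4])
scale = 1.0 / np.linalg.norm(p - q)                        # metres per model unit
R = rotation_aligning(up_est, np.array([0.0, 1.0, 0.0]))   # Unity's up is +Y
transform = lambda x: scale * (R @ x)                      # apply to every vertex
print(R @ (up_est / np.linalg.norm(up_est)))               # ~[0, 1, 0]
```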
Citations: 0
Performance analysis of DIBR-based view synthesis with Kinect Azure
Pub Date: 2021-12-08 DOI: 10.1109/IC3D53758.2021.9687195
Yupeng Xie, André Souto, Sarah Fachada, Daniele Bonatto, Mehrdad Teratani, G. Lafruit
DIBR (Depth Image Based Rendering) can synthesize free-navigation virtual views from sparse multiview texture images and corresponding depth maps. There are two ways to obtain depth maps: through software or through depth sensors, a trade-off between precision and speed (computational cost and processing time). This article compares the performance of depth maps estimated by MPEG-I's Depth Estimation Reference Software (DERS) with those acquired by a Kinect Azure. We use IV-PSNR to evaluate the virtual views synthesized from their depth maps for an objective comparison. The quality metric with Kinect Azure regularly stays around 32 dB, and its active depth maps yield view synthesis results with better subjective quality in low-textured areas than DERS. Hence, we observe a worthwhile trade-off in depth performance between the Kinect Azure and DERS, with the advantage of negligible computational cost for the former. We recommend the Kinect Azure for real-time DIBR applications.
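Neither DERS nor the view synthesizer used in the comparison is reproduced here; the sketch below only illustrates the core DIBR operation the paper builds on: forward-warping a texture with per-pixel disparity d = f·b/Z and a z-buffer for occlusions. The focal length, baseline and image sizes are hypothetical:

```python
import numpy as np

def dibr_forward_warp(texture, depth, f, baseline):
    """Forward-warp a texture into a virtual view translated by `baseline`
    along x: per-pixel disparity d = f * baseline / Z, with a z-buffer so
    nearer pixels win. Disoccluded pixels (holes) remain zero."""
    h, w = depth.shape
    out = np.zeros_like(texture)
    zbuf = np.full((h, w), np.inf)
    ys, xs = np.mgrid[0:h, 0:w]
    disp = f * baseline / depth              # sign convention: shift right
    xt = np.round(xs + disp).astype(int)
    valid = (xt >= 0) & (xt < w)
    for y, x, xv, z in zip(ys[valid], xs[valid], xt[valid], depth[valid]):
        if z < zbuf[y, xv]:                  # keep the nearest contributor
            zbuf[y, xv] = z
            out[y, xv] = texture[y, x]
    return out

# Hypothetical grayscale texture and metric depth map
tex = np.random.rand(120, 160)
dep = np.random.uniform(1.0, 5.0, (120, 160))       # metres
virt = dibr_forward_warp(tex, dep, f=500.0, baseline=0.05)
```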
Citations: 2