
Latest publications from the 2020 IEEE International Conference on Computational Photography (ICCP)

Distributed Sky Imaging Radiometry and Tomography
Pub Date: 2020-04-01 DOI: 10.1109/ICCP48838.2020.9105241
Amit Aides, Aviad Levis, Vadim Holodovsky, Y. Schechner, D. Althausen, Adi Vainiger
The composition of the atmosphere is significant to our ecosystem. Accordingly, there is a need to sense distributions of atmospheric scatterers such as aerosols and cloud droplets. There is growing interest in recovering these scattering fields in three dimensions (3D). However, current atmospheric observations usually use expensive and unscalable equipment. Moreover, current analysis retrieves partial information (e.g., cloud-base altitudes, water droplet size at cloud tops) based on simplified 1D models. To advance observations and retrievals, we develop a new computational imaging approach for sensing and analyzing the atmosphere volumetrically. Our approach comprises a ground-based network of cameras. We deployed it in conjunction with additional remote sensing equipment, including a Raman lidar and a sunphotometer, which provide initialization for algorithms and ground truth. The camera network is scalable, low cost, and enables 3D observations at high spatial and temporal resolution. We describe how the system is calibrated to provide absolute radiometric readouts of the light field. We then describe how to recover the volumetric field of scatterers using tomography. The tomography process is adapted, relative to prior art, to run on large-scale domains and to operate in-situ within scatterer fields. We empirically demonstrate the feasibility of tomography of clouds using ground-based data.
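To make the recovery step concrete, the sketch below shows scattering tomography in its most stripped-down form: a linearized single-scattering model in which each camera ray contributes one linear measurement of the unknown per-voxel extinction, solved with non-negative least squares. The matrix `A`, grid size, and noise level are placeholders; the paper itself fits a full 3D radiative-transfer model over much larger domains.

```python
# Toy linear-tomography sketch (NOT the paper's radiative-transfer pipeline):
# assume y = A @ beta, where beta is per-voxel extinction and each row of A
# holds hypothetical ray-voxel path lengths from the ground camera network.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

n_voxels = 50    # flattened toy 3D grid
n_rays = 200     # rays pooled from all ground-based cameras

A = rng.random((n_rays, n_voxels))                                 # placeholder geometry
beta_true = np.clip(rng.normal(0.1, 0.05, n_voxels), 0.0, None)   # per-voxel extinction
y = A @ beta_true + rng.normal(0.0, 1e-3, n_rays)                 # radiometric measurements

# Non-negativity is the only prior used in this sketch.
beta_est, _ = nnls(A, y)
print("relative error:", np.linalg.norm(beta_est - beta_true) / np.linalg.norm(beta_true))
```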
Citations: 18
Multiscale-VR: Multiscale Gigapixel 3D Panoramic Videography for Virtual Reality
Pub Date: 2020-04-01 DOI: 10.1109/ICCP48838.2020.9105244
Jianing Zhang, Tianyi Zhu, Anke Zhang, Xiaoyun Yuan, Zihan Wang, Sebastian Beetschen, Lan Xu, Xing Lin, Qionghai Dai, Lu Fang
Creating virtual reality (VR) content with effective imaging systems has attracted significant attention worldwide following the broad applications of VR in various fields, including entertainment, surveillance, and sports. However, due to the inherent trade-off between field-of-view and resolution of the imaging system, as well as the prohibitive computational cost, live capturing and generating multiscale 360° 3D video content at eye-limited resolution to provide immersive VR experiences remains a significant challenge. In this work, we propose Multiscale-VR, a multiscale unstructured camera array computational imaging system for high-quality gigapixel 3D panoramic videography that creates six-degree-of-freedom multiscale interactive VR content. The Multiscale-VR imaging system comprises scalable cylindrical-distributed global and local cameras, where global stereo cameras are stitched to cover a 360° field-of-view, and unstructured local monocular cameras are adapted to the global cameras for flexible high-resolution video streaming arrangement. We demonstrate that a high-quality gigapixel depth video can be faithfully reconstructed by our deep neural network-based algorithm pipeline, in which the global depth via stereo matching and the local depth via high-resolution RGB-guided refinement are associated. To generate the immersive 3D VR content, we present a three-layer rendering framework that includes an original layer for scene rendering, a diffusion layer for handling occlusion regions, and a dynamic layer for efficient dynamic foreground rendering. Our multiscale reconstruction architecture enables the proposed prototype system to render highly effective 3D, 360° gigapixel live VR video at 30 fps from the captured high-throughput multiscale video sequences. The proposed multiscale interactive VR content generation approach, built on a heterogeneous camera system design in contrast to existing single-scale VR imaging systems with structured homogeneous cameras, will open up new avenues of research in VR and provide an unprecedented immersive experience benefiting various novel applications.
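One concrete piece of the pipeline described above is RGB-guided refinement of a coarse depth map. The sketch below shows generic joint bilateral upsampling, which is one standard way to apply such guidance; it is not the paper's network, and the function name, parameters, and kernel choices are illustrative assumptions only. The range kernel keeps depth edges aligned with intensity edges in the high-resolution guide, which is what makes the guidance useful.

```python
# Joint bilateral upsampling of a low-res depth map with a high-res grayscale
# guide (illustrative stand-in for "high-resolution RGB-guided refinement").
import numpy as np

def joint_bilateral_upsample(depth_lr, guide_hr, scale, sigma_s=2.0, sigma_r=0.1, radius=3):
    """depth_lr, guide_hr: float arrays in [0, 1]; scale: integer upsampling factor."""
    H, W = guide_hr.shape
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            acc, wacc = 0.0, 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ql_y = np.clip(y // scale + dy, 0, depth_lr.shape[0] - 1)
                    ql_x = np.clip(x // scale + dx, 0, depth_lr.shape[1] - 1)
                    qh_y = np.clip(ql_y * scale, 0, H - 1)   # guide sample at coarse site
                    qh_x = np.clip(ql_x * scale, 0, W - 1)
                    ws = np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2))
                    wr = np.exp(-(guide_hr[y, x] - guide_hr[qh_y, qh_x]) ** 2 / (2 * sigma_r ** 2))
                    acc += ws * wr * depth_lr[ql_y, ql_x]
                    wacc += ws * wr
            out[y, x] = acc / wacc
    return out

# Toy usage: 4x upsample a 16x16 depth map with a 64x64 guide.
rng = np.random.default_rng(0)
depth_hr = joint_bilateral_upsample(rng.random((16, 16)), rng.random((64, 64)), scale=4)
```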
Citations: 14
Unveiling Optical Properties in Underwater Images
Pub Date: 2020-04-01 DOI: 10.1109/ICCP48838.2020.9105267
Yael Bekerman, S. Avidan, T. Treibitz
The appearance of underwater scenes is highly governed by the optical properties of the water (attenuation and scattering). However, most research effort in physics-based underwater image reconstruction methods is placed on devising image priors for estimating scene transmission, and less on estimating the optical properties. This limits the quality of the results. This work focuses on robust estimation of the water properties. First, as opposed to previous methods that used fixed values for attenuation, we estimate it from the color distribution in the image. Second, we estimate the veiling-light color from objects in the scene, rather than from background pixels. We conduct an extensive qualitative and quantitative evaluation of our method against recent methods on several datasets. As our estimation is more robust, our method provides superior results, including on challenging scenes.
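The quantities being estimated here are the parameters of the standard underwater image-formation model, I_c = J_c t_c + A_c (1 - t_c) with transmission t_c = exp(-beta_c z). The sketch below only shows how a restored image follows once the veiling light A and attenuation beta are known; the paper's contribution is the robust estimation of those quantities, which is not reproduced here, and the range map z is assumed given.

```python
# Invert the standard underwater image-formation model, assuming the veiling
# light A and per-channel attenuation beta have already been estimated.
import numpy as np

def restore(I, A, beta, z):
    """I: HxWx3 image in [0,1]; A: (3,) veiling-light color; beta: (3,) attenuation
    coefficients [1/m]; z: HxW range map [m]. Returns the restored scene radiance J."""
    t = np.exp(-beta[None, None, :] * z[..., None])          # per-channel transmission
    J = (I - A[None, None, :] * (1.0 - t)) / np.maximum(t, 1e-3)
    return np.clip(J, 0.0, 1.0)
```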
Citations: 10
Learning a Probabilistic Strategy for Computational Imaging Sensor Selection
Pub Date: 2020-03-23 DOI: 10.1109/ICCP48838.2020.9105133
He Sun, Adrian V. Dalca, K. Bouman
Optimized sensing is important for computational imaging in low-resource environments, when images must be recovered from severely limited measurements. In this paper, we propose a physics-constrained, fully differentiable, autoencoder that learns a probabilistic sensor-sampling strategy for optimized sensor design. The proposed method learns a system's preferred sampling distribution that characterizes the correlations between different sensor selections as a binary, fully-connected Ising model. The learned probabilistic model is achieved by using a Gibbs sampling inspired network architecture, and is trained end-to-end with a reconstruction network for efficient co-design. The proposed framework is applicable to sensor selection problems in a variety of computational imaging applications. In this paper, we demonstrate the approach in the context of a very-long-baseline-interferometry (VLBI) array design task, where sensor correlations and atmospheric noise present unique challenges. We demonstrate results broadly consistent with expectation, and draw attention to particular structures preferred in the telescope array geometry that can be leveraged to plan future observations and design array expansions.
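As a rough illustration of the probabilistic selection model, the sketch below draws samples from a fully connected binary Ising (Boltzmann) model over on/off sensor variables using plain Gibbs sweeps. The couplings `W` and biases `b` are random placeholders standing in for learned parameters; the paper wraps this sampling idea in a differentiable network trained jointly with a reconstruction model, which is not shown here.

```python
# Gibbs sampling from a fully connected binary Ising model over sensor
# selections s in {0,1}^n, with symmetric couplings W and biases b.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_sample(W, b, n_sweeps=100, rng=None):
    rng = rng or np.random.default_rng()
    n = b.shape[0]
    s = (rng.random(n) < 0.5).astype(float)
    for _ in range(n_sweeps):
        for i in range(n):
            # Conditional probability of sensor i being "on" given all others.
            p_on = sigmoid(b[i] + W[i] @ s - W[i, i] * s[i])
            s[i] = float(rng.random() < p_on)
    return s

n = 8                                              # candidate sensors (placeholder)
rng = np.random.default_rng(0)
W = rng.normal(0, 0.5, (n, n))
W = (W + W.T) / 2
np.fill_diagonal(W, 0.0)
b = rng.normal(0, 0.5, n)
print("sampled selection:", gibbs_sample(W, b, rng=rng))
```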
Citations: 12
3D Face Reconstruction using Color Photometric Stereo with Uncalibrated Near Point Lights
Pub Date: 2019-04-04 DOI: 10.1109/ICCP48838.2020.9105199
Z. Chen, Yu Ji, Mingyuan Zhou, S. B. Kang, Jingyi Yu
We present a new color photometric stereo (CPS) method that recovers high-quality, detailed 3D face geometry in a single shot. Our system uses three uncalibrated near point lights of different colors and a single camera. For robust self-calibration of the light sources, we use a 3D morphable model (3DMM) [1] and semantic segmentation of facial parts. For reconstruction, we address the inherent spectral ambiguity in color photometric stereo by incorporating albedo consensus, albedo similarity, and a proxy prior into a unified framework. In this way, we jointly exploit multiple cues to resolve under-determinedness, without the need for spatial constancy of albedo. Experiments show that our new approach produces state-of-the-art results from a single image, with high-fidelity geometry that includes details such as wrinkles.
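For context, the calibrated, distant-light baseline that this setting generalizes can be written in a few lines: with three known light directions, one per color channel, and Lambertian reflectance, the per-pixel surface normal follows from a single 3x3 solve. The sketch below assumes exactly those idealizations (no near-light falloff, no spectral cross-talk, no self-calibration), which are the assumptions the paper removes.

```python
# Calibrated three-light photometric-stereo baseline: n is proportional to
# inv(L) @ I per pixel, with the norm giving the albedo.
import numpy as np

def normals_from_rgb(I_rgb, L):
    """I_rgb: HxWx3 intensities (one light per channel); L: 3x3 matrix whose rows
    are unit light directions. Returns HxWx3 unit surface normals."""
    g = np.einsum('ij,hwj->hwi', np.linalg.inv(L), I_rgb)     # g = albedo * normal
    rho = np.linalg.norm(g, axis=-1, keepdims=True)           # per-pixel albedo
    return g / np.maximum(rho, 1e-8)
```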
Citations: 5