
IPSJ Transactions on Computer Vision and Applications: Latest Publications

Variable exposure time imaging for obtaining unblurred HDR images
Q1 Computer Science Pub Date : 2016-08-02 DOI: 10.1186/s41074-016-0005-0
Saori Uda, Fumihiko Sakaue, J. Sato
{"title":"Variable exposure time imaging for obtaining unblurred HDR images","authors":"Saori Uda, Fumihiko Sakaue, J. Sato","doi":"10.1186/s41074-016-0005-0","DOIUrl":"https://doi.org/10.1186/s41074-016-0005-0","url":null,"abstract":"","PeriodicalId":38957,"journal":{"name":"IPSJ Transactions on Computer Vision and Applications","volume":"8 1","pages":"1-7"},"PeriodicalIF":0.0,"publicationDate":"2016-08-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s41074-016-0005-0","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"65775302","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 4
Multibody motion segmentation for an arbitrary number of independent motions
Q1 Computer Science Pub Date : 2016-08-02 DOI: 10.1186/s41074-016-0002-3
Yutaro Sako, Y. Sugaya
{"title":"Multibody motion segmentation for an arbitrary number of independent motions","authors":"Yutaro Sako, Y. Sugaya","doi":"10.1186/s41074-016-0002-3","DOIUrl":"https://doi.org/10.1186/s41074-016-0002-3","url":null,"abstract":"","PeriodicalId":38957,"journal":{"name":"IPSJ Transactions on Computer Vision and Applications","volume":"8 1","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2016-08-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s41074-016-0002-3","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"65775221","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 4
Combining deep features for object detection at various scales: finding small birds in landscape images
Q1 Computer Science Pub Date : 2016-08-02 DOI: 10.1186/s41074-016-0006-z
Akito Takeki, T. Trinh, Ryota Yoshihashi, Rei Kawakami, M. Iida, T. Naemura
{"title":"Combining deep features for object detection at various scales: finding small birds in landscape images","authors":"Akito Takeki, T. Trinh, Ryota Yoshihashi, Rei Kawakami, M. Iida, T. Naemura","doi":"10.1186/s41074-016-0006-z","DOIUrl":"https://doi.org/10.1186/s41074-016-0006-z","url":null,"abstract":"","PeriodicalId":38957,"journal":{"name":"IPSJ Transactions on Computer Vision and Applications","volume":"8 1","pages":"1-7"},"PeriodicalIF":0.0,"publicationDate":"2016-08-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s41074-016-0006-z","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"65775313","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 35
Multiple fish tracking with an NACA airfoil model for collective behavior analysis
Q1 Computer Science Pub Date : 2016-08-02 DOI: 10.1186/s41074-016-0004-1
Kei Terayama, H. Habe, M. Sakagami
{"title":"Multiple fish tracking with an NACA airfoil model for collective behavior analysis","authors":"Kei Terayama, H. Habe, M. Sakagami","doi":"10.1186/s41074-016-0004-1","DOIUrl":"https://doi.org/10.1186/s41074-016-0004-1","url":null,"abstract":"","PeriodicalId":38957,"journal":{"name":"IPSJ Transactions on Computer Vision and Applications","volume":"8 1","pages":"1-7"},"PeriodicalIF":0.0,"publicationDate":"2016-08-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s41074-016-0004-1","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"65775289","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 9
Extrinsic Camera Calibration with Minimal Configuration Using Cornea Model and Equidistance Constraint
Q1 Computer Science Pub Date : 2016-01-01 DOI: 10.2197/ipsjtcva.8.20
Kosuke Takahashi, Dan Mikami, Mariko Isogawa, Akira Kojima
In this paper, we propose a novel algorithm to extrinsically calibrate a camera to a 3D reference object that is not directly visible from the camera. We use the spherical human cornea as a mirror and calibrate the extrinsic parameters from its reflection of the reference points. The key contribution of this paper is a cornea-reflection-based calibration algorithm with a minimal configuration: three reference points and one mirror pose. The proposed algorithm introduces two constraints. The first is that the cornea is virtually a sphere, which enables us to estimate the center of the cornea sphere from its projection. The second is the equidistance constraint, which enables us to estimate the 3D position of each reference point by assuming that the camera center and the reference point lie at the same distance from the center of the cornea sphere. We demonstrate the advantages of the proposed method with qualitative and quantitative evaluations using synthesized and real data.
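As a minimal numerical sketch of the equidistance constraint described in the abstract (not the authors' full calibration algorithm): assume the camera centre C, an estimate of the cornea-sphere centre G, and a unit direction d from C towards a reference point are given. These inputs and the function below are illustrative assumptions only; under them, the constraint ||C + t*d - G|| = ||C - G|| fixes the unknown depth t in closed form.

```python
import numpy as np

def depth_from_equidistance(C, G, d):
    """Solve ||C + t*d - G|| = ||C - G|| for the non-trivial root t.

    C : (3,) camera centre, G : (3,) cornea-sphere centre,
    d : (3,) unit direction from C towards the reference point.
    Expanding the constraint gives t**2 + 2*t*dot(d, C - G) = 0,
    whose non-zero root is t = -2*dot(d, C - G).
    """
    d = d / np.linalg.norm(d)
    t = -2.0 * np.dot(d, C - G)
    return t, C + t * d

# Toy check: place a point at the same distance from G as the camera and recover it.
G = np.array([0.0, 0.0, 5.0])                     # cornea-sphere centre
C = np.array([0.0, 2.0, 1.0])                     # camera centre
r = np.linalg.norm(C - G)
P_true = G + r * np.array([0.6, 0.0, 0.8])        # reference point, also at distance r from G
d = (P_true - C) / np.linalg.norm(P_true - C)
t, P_est = depth_from_equidistance(C, G, d)
print(np.allclose(P_est, P_true))                 # True
```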
{"title":"Extrinsic Camera Calibration with Minimal Configuration Using Cornea Model and Equidistance Constraint","authors":"Kosuke Takahashi, Dan Mikami, Mariko Isogawa, Akira Kojima","doi":"10.2197/ipsjtcva.8.20","DOIUrl":"https://doi.org/10.2197/ipsjtcva.8.20","url":null,"abstract":"In this paper, we propose a novel algorithm to extrinsically calibrate a camera to a 3D reference object that is not directly visible from the camera. We use the spherical human cornea as a mirror and calibrate the extrinsic parameters from its reflection of the reference points. The key contribution of this paper is to present a cornea-reflectionbased calibration algorithm with minimal configuration; there are three reference points and one mirror pose. The proposed algorithm introduces two constraints. First constraint is that the cornea is virtually a sphere, which enables us to estimate the center of the cornea sphere from its projection. Second is the equidistance constraint, which enables us to estimate the 3D position of the reference point by assuming that the center of the camera and reference point are located the same distance from the center of the cornea sphere. We demonstrate the advantages of the proposed method with qualitative and quantitative evaluations using synthesized and real data.","PeriodicalId":38957,"journal":{"name":"IPSJ Transactions on Computer Vision and Applications","volume":"33 1","pages":"20-28"},"PeriodicalIF":0.0,"publicationDate":"2016-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83718544","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 3
Sequential Monte-Carlo Based Road Region Segmentation Algorithm with Uniform Spatial Sampling
Q1 Computer Science Pub Date : 2016-01-01 DOI: 10.2197/ipsjtcva.8.1
Z. Procházka
Vision-based road recognition and tracking are crucial tasks in the field of autonomous driving. Road recognition methods based on shape analysis of the road region have the potential to overcome the limitations of traditional boundary-based approaches, but robust road region segmentation remains a challenging problem. In our work, we treat road region segmentation as a classification task in which road pixels are classified by a statistical decision rule based on the probability density function (pdf) of road features. This paper presents a new algorithm for estimating this pdf, based on the sequential Monte-Carlo (SMC) method. The proposed algorithm is evaluated on data sets of three different types of images, and the results show the effectiveness of the proposed method.
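The sketch below is a toy, NumPy-only illustration of this general pipeline, i.e., a particle-based estimate of the road-feature pdf followed by pixel classification. It is not the paper's algorithm; the function names, parameter values, and the nearest-neighbour likelihood are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def smc_road_pdf(seed_features, n_particles=500, n_steps=10, bandwidth=0.05):
    """Toy sequential Monte-Carlo style estimate of the pdf of road features.

    seed_features : (M, D) feature vectors sampled from a seed road region, in [0, 1].
    Returns (N, D) particles whose empirical distribution approximates the pdf.
    """
    D = seed_features.shape[1]
    particles = rng.uniform(0.0, 1.0, size=(n_particles, D))          # uniform initialisation
    for _ in range(n_steps):
        # Weight each particle by a Gaussian kernel to its nearest seed sample.
        d2 = ((particles[:, None, :] - seed_features[None, :, :]) ** 2).sum(-1)
        w = np.exp(-d2.min(axis=1) / (2.0 * bandwidth ** 2)) + 1e-12  # avoid all-zero weights
        w /= w.sum()
        # Resample proportionally to the weights, then jitter (diffusion step).
        idx = rng.choice(n_particles, size=n_particles, p=w)
        particles = particles[idx] + rng.normal(0.0, bandwidth, size=(n_particles, D))
        particles = np.clip(particles, 0.0, 1.0)
    return particles

def classify_road_pixels(image, particles, bandwidth=0.05, threshold=None):
    """Label a pixel as road when its kernel-density value under the particles is high.

    image : (H, W, D) feature image in [0, 1]; intended for small toy images only,
    since the pairwise distance matrix has shape (H*W, n_particles).
    """
    H, W, D = image.shape
    feats = image.reshape(-1, D)
    d2 = ((feats[:, None, :] - particles[None, :, :]) ** 2).sum(-1)
    pdf = np.exp(-d2 / (2.0 * bandwidth ** 2)).mean(axis=1)
    if threshold is None:
        threshold = 0.5 * pdf.max()        # arbitrary cut-off, chosen for the sketch
    return (pdf >= threshold).reshape(H, W)
```

A real system would use richer features than raw colour and a principled decision threshold; the point here is only the shape of the estimate-then-classify pipeline.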
{"title":"Sequential Monte-Carlo Based Road Region Segmentation Algorithm with Uniform Spatial Sampling","authors":"Z. Procházka","doi":"10.2197/ipsjtcva.8.1","DOIUrl":"https://doi.org/10.2197/ipsjtcva.8.1","url":null,"abstract":"Vision based road recognition and tracking are crucial tasks in a field of autonomous driving. Road recognition methods based on shape analysis of road region have the potential to overcome the limitations of traditional boundary based approaches, but a robust method for road region segmentation is the challenging issue. In our work, we treat the problem of road region segmentation as a classification task, where road pixels are classified by statistical decision rule based on the probability density function (pdf) of road features. This paper presents a new algorithm for the estimation of the pdf, based on sequential Monte-Carlo (SMC) method. The proposed algorithm is evaluated on data sets of three different types of images, and the results of evaluation show the effectiveness of the proposed method.","PeriodicalId":38957,"journal":{"name":"IPSJ Transactions on Computer Vision and Applications","volume":"116 1","pages":"1-10"},"PeriodicalIF":0.0,"publicationDate":"2016-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79653993","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Part-wise Geodesic Histogram Shape Descriptor for Unstructured Mesh Series Segmentation
Q1 Computer Science Pub Date : 2016-01-01 DOI: 10.2197/ipsjtcva.8.29
T. Mukasa, S. Nobuhara, Tony Tung, T. Matsuyama
This paper presents a novel shape descriptor for topology-based segmentation of 3D video sequences. 3D video is a series of 3D meshes without temporal correspondences; such correspondences would benefit applications including compression, motion analysis, and kinematic editing. In 3D video, both the 3D mesh connectivity and the global surface topology can change frame by frame. This characteristic prevents accurate temporal correspondences from being established across the entire mesh series. To overcome this difficulty, we propose a two-step strategy that decomposes the sequence into a series of topologically coherent segments using our new shape descriptor and then estimates temporal correspondences on a per-segment basis. From these correspondences, we extract rigid parts from the preprocessed 3D video segments to establish partial kinematic structures and integrate them into a single unified kinematic model that describes the entire motion in the 3D video sequence. We demonstrate the robustness and accuracy of the shape descriptor on real data that contain large non-rigid motion and reconstruction errors.
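As a rough sketch of a geodesic-histogram style descriptor (not the paper's part-wise formulation; the edge-graph approximation of geodesics, the bin count, and the function name are assumptions made for this example), one can run Dijkstra over the edge graph of a triangle mesh from a source vertex and histogram the resulting distances:

```python
import heapq
import numpy as np

def geodesic_histogram(vertices, faces, source, n_bins=16):
    """Approximate geodesic-distance histogram from one source vertex.

    vertices : (V, 3) float array, faces : (F, 3) int array of vertex indices.
    Geodesics are approximated by shortest paths along mesh edges (Dijkstra);
    the normalised histogram of distances serves as a simple shape descriptor.
    """
    V = len(vertices)
    adj = [[] for _ in range(V)]
    for a, b, c in faces:                       # duplicate edges are harmless for Dijkstra
        for u, v in ((a, b), (b, c), (c, a)):
            w = float(np.linalg.norm(vertices[u] - vertices[v]))
            adj[u].append((v, w))
            adj[v].append((u, w))

    dist = np.full(V, np.inf)
    dist[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))

    finite = dist[np.isfinite(dist)]
    hist, _ = np.histogram(finite, bins=n_bins, range=(0.0, finite.max() + 1e-9))
    return hist / hist.sum()
```

A part-wise variant in the spirit of the paper would restrict the histogram to the vertices of one segment; that refinement is omitted here.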
{"title":"Part-wise Geodesic Histogram Shape Descriptor for Unstructured Mesh Series Segmentation","authors":"T. Mukasa, S. Nobuhara, Tony Tung, T. Matsuyama","doi":"10.2197/ipsjtcva.8.29","DOIUrl":"https://doi.org/10.2197/ipsjtcva.8.29","url":null,"abstract":"This paper presents a novel shape descriptor for topology-based segmentation of 3D video sequences. 3D video is a series of 3D meshes without temporal correspondences which benefit for applications including compression, motion analysis, and kinematic editing. In 3D video, both 3D mesh connectivities and the global surface topology can change frame by frame. This characteristic prevents from making accurate temporal correspondences through the entire 3D mesh series. To overcome this difficulty, we propose a two-step strategy which decomposes the entire sequence into a series of topologically coherent segments using our new shape descriptor, and then estimates temporal correspondences on a per-segment basis. As the result of acquiring temporal correspondences, we could extract rigid parts from the preprocessed 3D video segments to establish partial kinematic structures, and could integrate them into a single unified kinematic model which describes the entire kinematic motion in the 3D video sequence. We demonstrate the robustness and accuracy of the shape descriptor on real data which consist of large non-rigid motion and reconstruction errors.","PeriodicalId":38957,"journal":{"name":"IPSJ Transactions on Computer Vision and Applications","volume":"29 1","pages":"29-39"},"PeriodicalIF":0.0,"publicationDate":"2016-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77389773","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Mirror-based Camera Pose Estimation Using an Orthogonality Constraint
Q1 Computer Science Pub Date : 2016-01-01 DOI: 10.2197/ipsjtcva.8.11
Kosuke Takahashi, S. Nobuhara, T. Matsuyama
This paper employs mirrors to estimate the relative posture and position of a camera, i.e., its extrinsic parameters, with respect to a 3D reference object that is not directly visible from the camera. The key contribution of this paper is a novel formulation of extrinsic camera calibration based on an orthogonality constraint that must be satisfied by all families of mirror reflections of a single reference object. This yields a larger number of equations, which makes the calibration more robust. We demonstrate the advantages of the proposed method in comparison with a state-of-the-art method through qualitative and quantitative evaluations using synthesized and real data.
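The plane-reflection geometry underlying such orthogonality constraints can be checked numerically. The sketch below is an illustration only, not the paper's estimation procedure, and the mirror parameters are made up for the example: it reflects a point across a mirror plane and verifies that the displacement between the point and its mirror image is parallel to the mirror normal, hence orthogonal to every direction lying in the mirror plane.

```python
import numpy as np

def reflect(X, n, d):
    """Reflect point X across the mirror plane {p : dot(n, p) + d = 0}, with |n| = 1."""
    n = n / np.linalg.norm(n)
    return X - 2.0 * (np.dot(n, X) + d) * n

rng = np.random.default_rng(1)
n = rng.normal(size=3); n /= np.linalg.norm(n)    # mirror normal (made up)
d = 0.7                                           # mirror offset (made up)
X = rng.normal(size=3)                            # a reference point
Xm = reflect(X, n, d)                             # its mirror image

# The displacement X - Xm is parallel to the mirror normal ...
print(np.allclose(np.cross(X - Xm, n), 0.0))      # True
# ... and therefore orthogonal to any direction t lying in the mirror plane.
t = np.cross(n, rng.normal(size=3)); t /= np.linalg.norm(t)
print(np.isclose(np.dot(X - Xm, t), 0.0))         # True
```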
{"title":"Mirror-based Camera Pose Estimation Using an Orthogonality Constraint","authors":"Kosuke Takahashi, S. Nobuhara, T. Matsuyama","doi":"10.2197/ipsjtcva.8.11","DOIUrl":"https://doi.org/10.2197/ipsjtcva.8.11","url":null,"abstract":"This paper is aimed at employing mirrors to estimate relative posture and position of camera, i.e., extrinsic parameters, against a 3D reference object that is not directly visible from the camera. The key contribution of this paper is to propose a novel formulation of extrinsic camera calibration based on orthogonality constraint which should be satisfied by all families of mirror-reflections of a single reference object. This allows us to obtain a larger number of equations which contribute to make the calibration more robust. We demonstrate the advantages of the proposed method in comparison with a state-of-the-art by qualitative and quantitative evaluations using synthesized and real data.","PeriodicalId":38957,"journal":{"name":"IPSJ Transactions on Computer Vision and Applications","volume":"62 1","pages":"11-19"},"PeriodicalIF":0.0,"publicationDate":"2016-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72804918","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 15
Temporally coherent disparity maps using CRFs with fast 4D filtering
Q1 Computer Science Pub Date : 2015-11-01 DOI: 10.1186/s41074-016-0011-2
S. Bigdeli, Gregor Budweiser, Matthias Zwicker
{"title":"Temporally coherent disparity maps using CRFs with fast 4D filtering","authors":"S. Bigdeli, Gregor Budweiser, Matthias Zwicker","doi":"10.1186/s41074-016-0011-2","DOIUrl":"https://doi.org/10.1186/s41074-016-0011-2","url":null,"abstract":"","PeriodicalId":38957,"journal":{"name":"IPSJ Transactions on Computer Vision and Applications","volume":"8 1","pages":"1-14"},"PeriodicalIF":0.0,"publicationDate":"2015-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s41074-016-0011-2","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"65774902","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 3
Co-occurrence context of the data-driven quantized local ternary patterns for visual recognition
Q1 Computer Science Pub Date : 2015-11-01 DOI: 10.1186/s41074-017-0017-4
X. Han, Yenwei Chen, Gang Xu
{"title":"Co-occurrence context of the data-driven quantized local ternary patterns for visual recognition","authors":"X. Han, Yenwei Chen, Gang Xu","doi":"10.1186/s41074-017-0017-4","DOIUrl":"https://doi.org/10.1186/s41074-017-0017-4","url":null,"abstract":"","PeriodicalId":38957,"journal":{"name":"IPSJ Transactions on Computer Vision and Applications","volume":"9 1","pages":"1-10"},"PeriodicalIF":0.0,"publicationDate":"2015-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s41074-017-0017-4","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"65774914","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0