
Latest publications from: 32nd Applied Imagery Pattern Recognition Workshop, 2003. Proceedings.

Vehicle detection approaches using the NVESD Sensor Fusion Testbed
Pub Date: 2003-10-01 DOI: 10.1109/AIPR.2003.1284249
P. Perconti, J. Hilger, M. Loew
The US Army RDECOM CERDEC Night Vision & Electronic Sensors Directorate (NVESD) has a dynamic applied research program in sensor fusion for a wide variety of defense and defense-related applications. This paper highlights efforts under the NVESD Sensor Fusion Testbed (SFTB) in the area of detection of moving vehicles with a network of image and acoustic sensors. A sensor data collection was designed and conducted using a variety of vehicles. Data from this collection included signature data of the vehicles as well as moving scenarios. Sensor fusion for detection and classification is performed at both the sensor level and the feature level, providing a basis for making tradeoffs between the performance desired and the resources required. Several classifier types are examined (parametric, nonparametric, learning). The combination of their decisions is used to make the final decision.
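The abstract says the final call comes from combining the individual classifier decisions but does not state the combiner. As a hypothetical sketch only (the function name and the vote rule are invented here, not taken from the paper), a plain majority vote over the three classifier families could look like:

```python
# Decision-level fusion sketch: combine per-classifier binary detections
# (1 = vehicle present) by simple majority vote. Illustrative only -- the
# paper does not specify its combination rule.

def fuse_decisions(decisions):
    """Majority vote over binary detect/no-detect decisions."""
    votes = sum(decisions)
    return 1 if votes > len(decisions) / 2 else 0

# e.g. one parametric, one nonparametric, one learning-based output:
print(fuse_decisions([1, 0, 1]))  # two of three agree -> 1
```

Weighted voting or feature-level fusion would follow the same shape, with per-classifier confidence scores replacing the hard 0/1 decisions.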
Citations: 4
Neural network based skin color model for face detection
Pub Date: 2003-10-01 DOI: 10.1109/AIPR.2003.1284262
Ming-Jung Seow, Deepthi Valaparla, V. Asari
This paper presents a novel neural network based technique for face detection that eliminates limitations pertaining to the skin color variations among people. We propose to model the skin color in the three-dimensional RGB space, which is a color cube consisting of all the possible color combinations. Skin samples in images with varying lighting conditions, from the Old Dominion University skin database, are used for obtaining a skin color distribution. The primary color components of each plane of the color cube are fed to a three-layered network, trained using the backpropagation algorithm with the skin samples, to extract the skin regions from the planes and interpolate them so as to provide an optimum decision boundary and hence the positive skin samples for the skin classifier. The use of the color cube eliminates the difficulty of finding the non-skin part of the training samples, since the interpolated data is considered skin and the rest of the color cube is considered non-skin. Subsequent face detection is aided by the color, geometry and motion information analyses of each frame in a video sequence. The performance of the new face detection technique has been tested with real-time data of 320×240 frames from video sequences captured by a surveillance camera. It is observed that the network can differentiate skin and non-skin effectively while minimizing false detections to a large extent when compared with the existing techniques. In addition, it is seen that the network is capable of performing face detection in complex lighting and background environments.
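The core classifier here is a three-layered network trained by backpropagation on RGB skin samples. A toy sketch of that idea follows — the pixel values, labels, layer sizes, and learning rate below are all invented stand-ins (the real training data is the Old Dominion University skin database), so this shows the mechanics rather than the paper's actual model:

```python
import numpy as np

# Toy three-layer (one hidden layer) network mapping normalized RGB pixels
# to skin / non-skin, trained with plain backpropagation. All data invented.

rng = np.random.default_rng(0)
X = np.array([[0.90, 0.60, 0.50], [0.80, 0.55, 0.45],   # skin-like pixels
              [0.10, 0.80, 0.20], [0.20, 0.20, 0.90]])  # non-skin pixels
y = np.array([[1.0], [1.0], [0.0], [0.0]])

W1 = rng.normal(0, 0.5, (3, 8)); b1 = np.zeros(8)       # input -> hidden
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)       # hidden -> output
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):                       # gradient descent, lr = 1.0
    h = sig(X @ W1 + b1)                    # hidden activations
    p = sig(h @ W2 + b2)                    # predicted skin probability
    g2 = (p - y) / len(X)                   # output error (sigmoid + BCE)
    g1 = (g2 @ W2.T) * h * (1 - h)          # error backpropagated to hidden
    W2 -= h.T @ g2; b2 -= g2.sum(axis=0)
    W1 -= X.T @ g1; b1 -= g1.sum(axis=0)

pred = (sig(sig(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int).ravel()
print(pred)  # expected to recover the training labels
```

In the paper's scheme the trained network then labels every RGB cell of the color cube once, so per-pixel classification at run time reduces to a lookup.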
Citations: 83
Real time face detection from color video stream based on PCA method
Pub Date: 2003-10-01 DOI: 10.1109/AIPR.2003.1284263
Rajkiran Gottumukkal, V. Asari
We present a face detection system capable of detecting faces in real time from a streaming color video. Currently this system is able to detect faces as long as both eyes are visible in the image plane. Extracting skin color regions from a color image is the first step in this system. Skin color detection is used to segment regions of the image that correspond to face regions based on pixel color. Under normal illumination conditions, skin color occupies a small region of the color space. By using this information, we can classify each pixel of the image as skin or non-skin. By scanning the skin regions, regions that do not have the shape of a face are removed. Principal Component Analysis (PCA) is used to classify whether a particular skin region is a face or a non-face. The PCA algorithm is trained for frontal view faces only. The system is tested with images captured by a surveillance camera in real time.
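The PCA face/non-face test is typically implemented eigenface-style: project a candidate region into a face subspace learned from training crops and threshold the reconstruction error. A minimal sketch of that idea, using synthetic stand-in data rather than anything from the paper:

```python
import numpy as np

# Eigenface-style sketch: regions whose reconstruction error in the face
# subspace is small are accepted as faces. The 64-dim "face" vectors below
# are synthetic stand-ins for real flattened training crops.

rng = np.random.default_rng(1)
base = rng.normal(size=64)                                # a prototype face
faces = np.stack([base + 0.05 * rng.normal(size=64) for _ in range(20)])

mean = faces.mean(axis=0)
U, S, Vt = np.linalg.svd(faces - mean, full_matrices=False)
components = Vt[:5]                        # top-5 principal components

def is_face(x, threshold=1.0):
    """Accept if the region is well explained by the face subspace."""
    centered = x - mean
    recon = components.T @ (components @ centered)        # project back
    return np.linalg.norm(centered - recon) < threshold

print(is_face(base))                       # near the face cluster -> True
print(is_face(np.ones(64) * 10.0))         # far from any face -> False
```

Because only frontal faces are used for training (as the abstract notes), profile views fall outside the subspace and are rejected by the same threshold.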
Citations: 22
Multisensor & spectral image fusion & mining: from neural systems to applications
Pub Date: 2003-10-01 DOI: 10.1109/AIPR.2003.1284242
D. Fay, R. Ivey, N. Bomberger, A. Waxman
We have continued development of a system for multisensor image fusion and interactive mining based on neural models of color vision processing, learning and pattern recognition. We pioneered this work while at MIT Lincoln Laboratory, initially for color fused night vision (low-light visible and uncooled thermal imagery) and later extended it to multispectral IR and 3D ladar. We also developed a proof-of-concept system for EO, IR, SAR fusion and mining. Over the last year we have generalized this approach and developed a user-friendly system integrated into a COTS exploitation environment known as ERDAS Imagine. In this paper, we summarize the approach and the neural networks used, and demonstrate fusion and interactive mining (i.e., target learning and search) of low-light visible/SWIR/MWIR/LWIR night imagery, and IKONOS multispectral and high-resolution panchromatic imagery. In addition, we have demonstrated how target learning and search can be enabled over extended operating conditions by allowing training over multiple scenes. This has been illustrated for the detection of small boats in coastal waters using fused visible/MWIR/LWIR imagery.
Citations: 4
Registration of range data from unmanned aerial and ground vehicles
Pub Date: 2003-10-01 DOI: 10.1109/AIPR.2003.1284247
Anthony Downs, R. Madhavan, T. Hong
In the research reported in this paper, we propose to overcome the unavailability of the Global Positioning System (GPS) using combined information obtained from a scanning LADAR rangefinder on an Unmanned Ground Vehicle (UGV) and a LADAR mounted on an Unmanned Aerial Vehicle (UAV) that flies over the terrain being traversed. The approach to estimating and updating the position of the UGV involves registering range data from the two LADARs using a combination of a feature-based registration method and a modified version of the well-known Iterative Closest Point (ICP) algorithm. Registration of range data thus guarantees an estimate of the vehicle's position even when only one of the vehicles has GPS information. Additionally, such registration over time (i.e., from sample to sample) enables position information to be maintained even when both vehicles can no longer maintain GPS contact. The approach has been validated by conducting systematic experiments on complex real-world data.
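The ICP step can be shown in miniature. The sketch below is a deliberately simplified toy — translation-only and 2D, with invented points — whereas the paper's modified ICP performs full rigid registration of 3D LADAR scans seeded by feature-based matching:

```python
import numpy as np

# Minimal ICP sketch: repeatedly match each source point to its nearest
# neighbour in the reference scan and solve for the offset that aligns
# them. Translation-only, 2D, toy data -- not the paper's algorithm.

def icp_translation(source, target, iters=20):
    src = source.copy()
    offset = np.zeros(2)
    for _ in range(iters):
        # nearest neighbour in target for every source point
        d = np.linalg.norm(src[:, None, :] - target[None, :, :], axis=2)
        matched = target[d.argmin(axis=1)]
        step = (matched - src).mean(axis=0)   # least-squares translation
        src += step
        offset += step
    return offset

target = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
source = target + np.array([0.3, -0.2])       # simulated GPS-denied drift
print(icp_translation(source, target))        # recovers about [-0.3, 0.2]
```

The full rigid case additionally estimates a rotation each iteration (classically via the SVD of the matched-point cross-covariance), but the match-then-solve loop is the same.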
Citations: 13
Personal authentication using feature points on finger and palmar creases
Pub Date: 2003-10-01 DOI: 10.1109/AIPR.2003.1284285
J. Doi, M. Yamanaka
A new and practical method of reliable, real-time authentication is proposed. Finger geometry and feature extraction of the palmar flexion creases are integrated into a small number of discrete points for faster and more robust processing. A video image of either palm is acquired, with the palm placed freely, facing a near-infrared video camera in front of a low-reflective board. The fingers are brought together without any constraints. The discrete feature points comprise the intersection points of the three digital (finger) flexion creases with the four finger skeletal lines, the intersection points of the major palmar flexion creases with the extended finger skeletal lines, and the orientations of the creases at those points. These metrics define the feature vectors for matching. Matching results are perfect for 50 subjects so far. This point-wise processing, which extracts sufficient features from a non-contact video image, requires no time-consuming palm print image analysis, and takes less than one second, will contribute to real-time and reliable authentication.
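Once each hand is reduced to a fixed-length vector of crease intersection coordinates and orientations, matching a probe against the enrolled gallery is straightforward. A hypothetical sketch — the identities, vector values, and distance threshold below are all invented; the paper does not publish its matcher:

```python
import numpy as np

# Sketch of the matching stage: authenticate a probe feature vector
# against the nearest enrolled vector. All names and values invented.

enrolled = {
    "subject_a": np.array([12.0, 30.5, 45.0, 18.2, 33.1, 50.0]),
    "subject_b": np.array([10.1, 28.0, 60.0, 16.5, 31.0, 70.0]),
}

def authenticate(probe, gallery, threshold=5.0):
    """Return the closest enrolled identity, or None if nothing is close."""
    best_id, best_d = None, float("inf")
    for name, vec in gallery.items():
        d = np.linalg.norm(probe - vec)       # Euclidean distance
        if d < best_d:
            best_id, best_d = name, d
    return best_id if best_d < threshold else None

probe = np.array([12.3, 30.0, 46.0, 18.0, 33.5, 49.0])  # noisy subject_a
print(authenticate(probe, enrolled))
```

Comparing a handful of scalars per subject is what keeps the reported processing time under one second: no dense palm print image correlation is involved at match time.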
Citations: 18
Proceedings. 32nd Applied Imagery Pattern Recognition Workshop
Pub Date: 1900-01-01 DOI: 10.1109/AIPR.2003.1284238
The following topics are dealt with: military applications; remote sensing; medical applications; data fusion using neural networks; visual learning in humans and machines; homeland security.
Citations: 2