
Latest publications from the 2023 6th International Conference on Pattern Recognition and Image Analysis (IPRIA)

New Texture Descriptor Based on Improved Orthogonal Difference Local Binary Pattern
Pub Date : 2023-02-14 DOI: 10.1109/IPRIA59240.2023.10147180
S. Fadaei, Pouya Hosseini, K. RahimiZadeh
Local descriptors play an important role in Content-Based Image Retrieval (CBIR) and face recognition. Almost all local patterns are based on the relationship between neighboring pixels in a local area. The best-known local pattern is the Local Binary Pattern (LBP), in which patterns are defined by the intensity differences between a central pixel and its neighbors in a $3\times 3$ local window. The Orthogonal Difference Local Binary Pattern (ODLBP) is a recently introduced extension of LBP. In this paper, ODLBP is improved. In the proposed method, each $3\times 3$ local window is divided into two groups, the local patterns of each group are extracted, and finally the feature vector is formed by concatenating the group patterns. The proposed method is evaluated on three datasets: Yale, ORL, and GT. Implementation results show the strength of the proposed method compared to ODLBP: it is faster than ODLBP, while its precision and recall are slightly higher.
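As background for the descriptor above, the classic LBP code for a single $3\times 3$ window can be sketched as follows. This is a minimal illustration of standard LBP only, not the paper's improved ODLBP grouping; the neighbor ordering is one common convention.

```python
import numpy as np

def lbp_code(window):
    """Basic 3x3 LBP: threshold the 8 neighbors against the center pixel
    and pack the resulting bits into one byte (illustrative sketch)."""
    center = window[1, 1]
    # Neighbors taken clockwise starting from the top-left corner.
    neighbors = [window[0, 0], window[0, 1], window[0, 2],
                 window[1, 2], window[2, 2], window[2, 1],
                 window[2, 0], window[1, 0]]
    code = 0
    for bit, value in enumerate(neighbors):
        if value >= center:          # neighbor at least as bright as center -> bit 1
            code |= 1 << bit
    return code

window = np.array([[5, 9, 1],
                   [4, 6, 7],
                   [2, 8, 3]])
print(lbp_code(window))  # → 42
```

Sliding this over every pixel and histogramming the codes gives the standard LBP texture descriptor that ODLBP extends.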
Citations: 0
Edge Detection Method Based on the Differences in Intensities of Rotating Kernel Borders
Pub Date : 2023-02-14 DOI: 10.1109/IPRIA59240.2023.10147182
Reza Yazdi, Hassan Khotanlou, Elham Alighardash, Mohammad Zolfaghari
Edge detection is a traditional and fundamental task that is regarded as a forerunner of the most widely researched problems in computer vision. In this paper, we present a new, robust edge detection method with real-time implementation potential. A 3*3 kernel is employed for edge extraction. The proposed edge response function obtains intensity differences at various kernel locations by examining the different ways a 3*3 kernel can enter a border region. Each window is divided into two “L”-shaped parts, which are rotated before the differences between them are summed. The proposed method produces a dense edge response map that can be fed into other methods, such as deep learning architectures. The proposed edge detector was compared against two tried-and-true edge detectors, yielding comparable results.
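One plausible reading of the rotating L-shaped split can be sketched as below. The exact partition of the window and the difference measure are assumptions for illustration, not the paper's published kernel.

```python
import numpy as np

# One illustrative split of a 3x3 window into two "L"-shaped halves
# (these index sets are an assumption; the paper's partition may differ).
L_PART_A = [(0, 0), (0, 1), (0, 2), (1, 0)]   # top row + left-middle
L_PART_B = [(2, 0), (2, 1), (2, 2), (1, 2)]   # bottom row + right-middle

def edge_response(window):
    """Sum |mean(A) - mean(B)| over four 90-degree rotations of the window."""
    response = 0.0
    w = window.astype(float)
    for _ in range(4):
        a = np.mean([w[i, j] for i, j in L_PART_A])
        b = np.mean([w[i, j] for i, j in L_PART_B])
        response += abs(a - b)
        w = np.rot90(w)  # rotate the kernel border by 90 degrees
    return response

flat = np.full((3, 3), 7)                      # no edge -> zero response
step = np.array([[0, 0, 10]] * 3)              # vertical step edge
print(edge_response(flat), edge_response(step))
```

A flat patch yields 0, while a step edge produces a large response, matching the intuition that the two L-halves straddle the border differently in at least one rotation.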
Citations: 0
Self-Supervised Dusty Image Enhancement Using Generative Adversarial Networks
Pub Date : 2023-02-14 DOI: 10.1109/IPRIA59240.2023.10147177
Mahsa Mohamadi, Ako Bartani, F. Tab
Outdoor images are usually contaminated by atmospheric phenomena, which cause effects such as low contrast and poor quality and visibility. As dust phenomena increase day by day, improving the quality of dusty images as a pre-processing step is an important challenge. To address this challenge, we propose a self-supervised method based on a generative adversarial network. The proposed framework consists of two generators, a master and a supporter, which are trained jointly. The master and supporter generators are trained on synthetic and real dust images, respectively, whose labels are generated within the proposed framework. Due to the lack of real-world dusty images and the weakness of synthetic dusty images in depth, we use an effective learning mechanism in which the supporter helps the master generate satisfactory dust-free images by learning to restore image depth and transferring its knowledge to the master. The experimental results demonstrate that the proposed method performs favorably against previous dusty image enhancement methods on benchmark real-world dusty images.
Citations: 0
Classification of Rice Leaf Diseases Using CNN-Based Pre-Trained Models and Transfer Learning
Pub Date : 2023-02-14 DOI: 10.1109/IPRIA59240.2023.10147178
Marjan Mavaddat, M. Naderan, Seyyed Enayatallah Alavi
In the past, diagnosing pests was a very important and challenging task for farmers, and visual detection methods, performed with the help of phytosanitary specialists, were time-consuming, costly, and prone to human error. Today, in modern agriculture, diagnostic software based on artificial intelligence can be used by farmers themselves with little time and cost. Moreover, because diseases and pests of plants, especially rice leaves, occur at different intensities and resemble each other, automatic detection methods are more accurate and less error-prone. In this paper, two transfer learning methods for diagnosing rice leaf disease are investigated. The first method uses the CNN-based output of a pre-trained model with an appropriate classifier added. The second method freezes the bottom layers, fine-tunes the weights of the last layers of the pre-trained network, and adds an appropriate classifier to the model. For this purpose, seven CNN models have been designed and evaluated. Simulation results show that four of these networks, namely VGG16 with the last two layers fine-tuned, InceptionV3 with the last 12 layers fine-tuned, and ResNet152V2 with the last 5 and 6 layers fine-tuned, reach 100% accuracy and an F1-score of 1. In addition, the smaller number of layers in the VGG16 network with 2-layer fine-tuning consumes less memory and gives a faster response time. Our approach also achieves higher accuracy and shorter training time than similar works.
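The second strategy above (freeze the bottom layers, fine-tune only the last ones plus a new classifier head) can be sketched framework-agnostically. The layer names below are illustrative, not the actual VGG16/InceptionV3 layer names.

```python
def split_trainable(layer_names, num_finetune):
    """Return (frozen, trainable) layer-name lists for partial fine-tuning:
    all layers are frozen except the last `num_finetune`."""
    if num_finetune == 0:
        return list(layer_names), []
    return layer_names[:-num_finetune], layer_names[-num_finetune:]

# Hypothetical backbone: five conv blocks followed by two dense layers.
backbone = [f"conv_block{i}" for i in range(1, 6)] + ["fc1", "fc2"]
frozen, trainable = split_trainable(backbone, num_finetune=2)
print(trainable)  # only the last two layers are updated during training
```

In Keras this split corresponds to setting `layer.trainable = False` on each frozen layer before compiling; the fewer layers left trainable, the smaller the memory and compute cost of fine-tuning, which matches the paper's observation about 2-layer fine-tuning of VGG16.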
Citations: 0
Facial Expression Recognition using Spatial Feature Extraction and Ensemble Deep Networks
Pub Date : 2023-02-14 DOI: 10.1109/IPRIA59240.2023.10147196
E. Afshar, Hassan Khotanlou, Elham Alighardash
Researchers have shown that 55% of concepts are conveyed through facial emotion and only 7% through words and sentences, so facial expressions play an important role in conveying concepts in human communication. In recent years, owing to advances in artificial neural networks, many studies on facial expression recognition have been conducted. This paper presents a method based on ensemble classification using convolutional neural networks to recognize facial emotions. The concatenation of spatial features with global features is used as the feature map for the classification stage in the committee network. Two committee networks are fed separately with LBP images and raw images. After training the two committee networks, the emotion is classified by taking the maximum probability between the two networks as the final output. The proposed method was applied and tested on the FER2013 dataset. It is more accurate than many leading methods and, competing with a successful model that has a more complex architecture and higher computational cost, it achieves acceptable results with a simple architecture.
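The max-probability fusion rule described above can be sketched as follows, a minimal illustration assuming each network emits a softmax probability vector per sample (the probability values are made up).

```python
import numpy as np

def max_prob_fusion(probs_lbp, probs_raw):
    """Per sample, pick the class whose probability is highest across
    the two committee networks (LBP-input and raw-image-input)."""
    stacked = np.stack([probs_lbp, probs_raw])   # shape (2, n_samples, n_classes)
    best = stacked.max(axis=0)                   # elementwise max over the two nets
    return best.argmax(axis=1)                   # class of the single highest prob

p_lbp = np.array([[0.7, 0.2, 0.1]])   # LBP network softmax output (illustrative)
p_raw = np.array([[0.1, 0.8, 0.1]])   # raw-image network softmax output
print(max_prob_fusion(p_lbp, p_raw))  # the raw net's 0.8 wins, so class 1
```

Here the raw-image network's confidence (0.8) exceeds the LBP network's (0.7), so its predicted class becomes the ensemble output, exactly the "maximum probability between the two networks" rule.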
Citations: 0
Adversarial Attack by Limited Point Cloud Surface Modifications
Pub Date : 2021-10-07 DOI: 10.1109/IPRIA59240.2023.10147168
Atrin Arya, Hanieh Naderi, S. Kasaei
Recent research has revealed that the security of deep neural networks that directly process 3D point clouds to classify objects can be threatened by adversarial samples. Although existing adversarial attack methods achieve high success rates, they do not restrict point modifications enough to preserve the point cloud's appearance. To overcome this shortcoming, two constraints are proposed: hard boundary constraints on the number of modified points and on the point perturbation norms. Due to the restrictive nature of the problem, the search space contains many local maxima. The proposed method addresses this issue by using a high step-size at the beginning of the algorithm to search the main surface of the point cloud quickly and effectively. Then, in order to converge to the desired output, the step-size is gradually decreased. To evaluate its performance, the proposed method is run on the ModelNet40 and ScanObjectNN datasets using state-of-the-art point cloud classification models, including PointNet, PointNet++, and DGCNN. The obtained results show that it can perform successful attacks and achieve state-of-the-art results with only a limited number of point modifications while preserving the appearance of the point cloud. Moreover, due to the effective search algorithm, it can perform successful attacks in just a few steps. Additionally, the proposed step-size scheduling algorithm yields an improvement of up to 14.5% when adopted by other methods as well. The proposed method also performs effectively against popular defense methods.
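The start-large-then-shrink step-size schedule can be sketched with a simple exponential decay. The decay rule and rate here are assumptions for illustration; the abstract only states that the step-size starts high and is gradually decreased.

```python
def step_size(initial, decay, iteration):
    """Exponentially decaying perturbation step size: a large initial value
    sweeps the point-cloud surface quickly, then shrinking steps let the
    attack converge (the exact schedule is an assumption, not the paper's)."""
    return initial * (decay ** iteration)

sizes = [step_size(1.0, 0.5, t) for t in range(4)]
print(sizes)  # [1.0, 0.5, 0.25, 0.125]
```

The early large steps help the search escape the many local maxima of the constrained problem, while the later small steps refine the perturbation within the hard point-count and norm budgets.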
Citations: 4