
Latest publications from the 2023 6th International Conference on Pattern Recognition and Image Analysis (IPRIA)

Hippocampus segmentation in MR brain images using learned fuzzy mask and U-Net
Pub Date : 2023-02-14 DOI: 10.1109/IPRIA59240.2023.10147188
Alireza Sadeghi, Hassan Khutanlou
The hippocampus is an important part of the human brain that is damaged in diseases such as Alzheimer's, schizophrenia, and epilepsy. This paper presents a new method for hippocampus segmentation that is applicable to the early diagnosis of these diseases. The method introduces a two-stage model to detect the hippocampus region in brain MR images. In the first stage, the location of the hippocampus is roughly detected using a U-Net neural network, and a fuzzy mask is then created around the detected area using a fuzzy function. In the second stage, this mask is applied to the brain images and a second U-Net is used to segment the masked images, which finally predicts the location of the hippocampus. The main idea and advantage of this method is the use of a pre-trained fuzzy mask, which increases the quality of the segmentation. The proposed method was trained and tested on the HARP dataset, which contains 135 T1-weighted MRI volumes; the model reached a Dice score of 0.95 in the best case.
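The two-stage pipeline builds a soft mask around the first-stage detection before the second U-Net runs. A minimal NumPy sketch of that masking step follows; the Gaussian falloff, the box geometry, and the `sigma` value are illustrative assumptions, since the paper's exact fuzzy function is not reproduced here.

```python
import numpy as np

def fuzzy_mask(shape, box, sigma=8.0):
    """Soft mask: 1 inside the detected box, decaying smoothly outside.

    `box` is (row0, row1, col0, col1) from a first-stage detector;
    the Gaussian falloff with width `sigma` is an illustrative choice.
    """
    r0, r1, c0, c1 = box
    rows = np.arange(shape[0])[:, None]
    cols = np.arange(shape[1])[None, :]
    # Distance of each pixel to the box (0 for pixels inside the box).
    dr = np.maximum(np.maximum(r0 - rows, rows - (r1 - 1)), 0)
    dc = np.maximum(np.maximum(c0 - cols, cols - (c1 - 1)), 0)
    dist2 = dr.astype(float) ** 2 + dc.astype(float) ** 2
    return np.exp(-dist2 / (2.0 * sigma ** 2))

# Apply the mask to a slice before feeding the second-stage network.
image = np.random.rand(64, 64)
mask = fuzzy_mask(image.shape, (20, 40, 24, 44))
masked = image * mask
```

The mask keeps the detected region intact and attenuates the rest of the image smoothly, so the second network sees context near the hippocampus but little of the distant anatomy.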
Citations: 0
Deep perceptual similarity and Quality Assessment
Pub Date : 2023-02-14 DOI: 10.1109/IPRIA59240.2023.10147170
Alireza Khatami, Ahmad Mahmoudi-Aznaveh
Measuring the perceptual similarity between two images is a long-standing problem. Such an assessment should mimic human judgments, and given the complexity of the human visual system, modeling human perception is challenging. On the other hand, recent low-level vision approaches, mostly based on supervised deep learning, require an appropriate loss for the backward pass. Per-pixel losses between the network output and the ground-truth images, such as MSE and MAE, were among the first choices. More complicated similarity measures, in which the error is computed in a hand-designed feature space, are also commonly employed. Furthermore, Deep Perceptual Similarity (DPS) metrics, where similarity is measured in a deep feature space, have shown promising results; the feature space can be taken from a pre-trained model or one optimized for the task at hand. Many recent studies have investigated DPS thoroughly. In this research, we provide an in-depth analysis of the pros and cons of DPS for full-reference quality assessment. In addition, to compare different similarity measures, we propose a metric that aggregates various desired factors. Based on our experiments, we conclude that perceptual similarity is not directly related to classification accuracy, and we find that the outliers mostly contain high-frequency elements. The code and the complete outcomes described in the results can be found at: https://github.com/Alireza-Khatami/PerceptualQuality
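The contrast between a per-pixel loss and a DPS-style feature-space distance can be sketched in NumPy. The random 3x3 kernels below merely stand in for a pre-trained network's filters; they are an assumption for illustration, not the metric proposed in the paper.

```python
import numpy as np

def mse(a, b):
    # Per-pixel loss: one of the "first choice" losses mentioned above.
    return float(np.mean((a - b) ** 2))

def deep_perceptual_distance(a, b, kernels):
    """DPS-style distance: compare images in a (toy) deep feature space.
    `kernels` stands in for a pre-trained network's filters."""
    def features(img):
        h, w = img.shape
        maps = []
        for k in kernels:
            f = np.zeros((h - 2, w - 2))
            for i in range(h - 2):
                for j in range(w - 2):
                    f[i, j] = np.sum(img[i:i + 3, j:j + 3] * k)
            maps.append(np.maximum(f, 0.0))  # ReLU nonlinearity
        return np.stack(maps)
    return float(np.mean((features(a) - features(b)) ** 2))

rng = np.random.default_rng(0)
kernels = rng.normal(size=(4, 3, 3))
x = rng.random((16, 16))
y = np.clip(x + 0.1 * rng.standard_normal(x.shape), 0.0, 1.0)
d_same = deep_perceptual_distance(x, x, kernels)
d_diff = deep_perceptual_distance(x, y, kernels)
```

Real DPS metrics such as LPIPS use learned, multi-layer features and calibrated channel weights; the structure above only shows where the "feature space" enters the computation.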
Citations: 0
Self-Supervised Dusty Image Enhancement Using Generative Adversarial Networks
Pub Date : 2023-02-14 DOI: 10.1109/IPRIA59240.2023.10147177
Mahsa Mohamadi, Ako Bartani, F. Tab
Outdoor images are often degraded by atmospheric phenomena, which cause low contrast and poor quality and visibility. As dust phenomena become increasingly frequent, improving the quality of dusty images as a pre-processing step is an important challenge. To address this challenge, we propose a self-supervised method based on a generative adversarial network. The proposed framework consists of two generators, a master and a supporter, which are trained jointly. The master and supporter generators are trained on synthetic and real dust images, respectively, whose labels are generated within the proposed framework. Owing to the scarcity of real-world dusty images and the weakness of synthetic dusty images in representing depth, we use an effective learning mechanism in which the supporter helps the master generate satisfactory dust-free images by learning to restore image depth and transferring its knowledge to the master. The experimental results demonstrate that the proposed method performs favorably against previous dusty image enhancement methods on benchmark real-world dusty images.
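Synthetic dusty training images of the kind a dust-removal network consumes are commonly rendered with the atmospheric scattering model I = J*t + A*(1 - t); a hedged NumPy sketch follows. The yellowish airlight color, the scattering coefficient `beta`, and the use of this particular model are assumptions, as the paper's synthesis procedure is not detailed here.

```python
import numpy as np

def add_synthetic_dust(clean, depth, airlight=(0.8, 0.6, 0.4), beta=1.0):
    """Render a synthetic dusty image via the atmospheric scattering
    model I = J * t + A * (1 - t), with transmission t = exp(-beta * d).

    `airlight` (a yellowish tint typical of dust) and `beta` are
    illustrative choices; `depth` is assumed normalized to [0, 1].
    """
    t = np.exp(-beta * depth)[..., None]      # per-pixel transmission
    A = np.asarray(airlight)[None, None, :]   # global airlight color
    return clean * t + A * (1.0 - t)

rng = np.random.default_rng(1)
clean = rng.random((32, 32, 3))               # stand-in clean image
depth = np.linspace(0.0, 1.0, 32)[None, :].repeat(32, axis=0)
dusty = add_synthetic_dust(clean, depth)
```

Because transmission decays with depth, distant pixels drift toward the airlight color, which is exactly the depth dependence the supporter generator is said to help recover.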
Citations: 0
Classification of Rice Leaf Diseases Using CNN-Based Pre-Trained Models and Transfer Learning
Pub Date : 2023-02-14 DOI: 10.1109/IPRIA59240.2023.10147178
Marjan Mavaddat, M. Naderan, Seyyed Enayatallah Alavi
In the past, diagnosing pests was an important and challenging task for farmers, and visual inspection with the help of phytosanitary specialists was time-consuming, costly, and prone to human error. Today, in modern agriculture, artificial-intelligence diagnostic software can be used by farmers themselves at little cost in time and money. Moreover, because plant diseases and pests, especially on rice leaves, vary in intensity and resemble one another, automatic detection methods are more accurate and less error-prone. In this paper, two transfer learning methods for diagnosing rice leaf diseases are investigated. The first method uses the output of a pre-trained CNN-based model and adds an appropriate classifier. The second method freezes the bottom layers, fine-tunes the weights of the last layers of the pre-trained network, and adds an appropriate classifier to the model. For this purpose, seven CNN models have been designed and evaluated. Simulation results show that four of these networks reach 100% accuracy and an F1-score of 1: VGG16 with the last two layers fine-tuned, InceptionV3 with the last 12 layers fine-tuned, and ResNet152V2 with the last 5 or 6 layers fine-tuned. In addition, the VGG16 network with 2-layer fine-tuning has fewer layers, consumes less memory, and has a faster response time. Our approach also achieves higher accuracy and shorter training time than similar papers.
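The freeze-and-fine-tune scheme can be illustrated with a small NumPy stand-in: the fixed random projection below plays the role of the frozen pre-trained backbone, and the toy two-class data replaces real rice-leaf images; both are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen, pre-trained backbone: a fixed projection whose
# weights receive no gradient updates during training.
W_frozen = rng.normal(size=(64, 16)) / 8.0
def backbone(x):
    return np.maximum(x @ W_frozen, 0.0)

# Toy two-class data (real inputs would be rice-leaf images).
y = (rng.random(200) < 0.5).astype(float)
X = rng.normal(size=(200, 64)) + (2.0 * y - 1.0)[:, None] * 0.5

F = backbone(X)                       # features from the frozen layers
w, b = np.zeros(F.shape[1]), 0.0      # the new, trainable classifier head

def head_loss():
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))
    return float(np.mean(-y * np.log(p + 1e-9)
                         - (1.0 - y) * np.log(1.0 - p + 1e-9)))

loss_before = head_loss()
for _ in range(400):                  # gradient descent on the head ONLY
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))
    w -= 0.5 * F.T @ (p - y) / len(y)
    b -= 0.5 * float(np.mean(p - y))
loss_after = head_loss()
acc = float(np.mean((1.0 / (1.0 + np.exp(-(F @ w + b))) > 0.5) == (y == 1.0)))
```

In a real framework the same idea is a one-liner per layer (e.g. setting a layer's trainable flag to False in Keras before compiling); only the unfrozen layers and the new classifier receive gradients.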
Citations: 0
Facial Expression Recognition using Spatial Feature Extraction and Ensemble Deep Networks
Pub Date : 2023-02-14 DOI: 10.1109/IPRIA59240.2023.10147196
E. Afshar, Hassan Khotanlou, Elham Alighardash
Researchers have shown that 55% of concepts are conveyed through facial emotion and only 7% through words and sentences, so facial expression plays an important role in conveying concepts in human communication. In recent years, thanks to improvements in artificial neural networks, many studies have addressed facial expression recognition. This paper presents a method based on ensemble classification with convolutional neural networks to recognize facial emotions. The concatenation of spatial features with global features is used as the feature map for the classification stage in the committee network. Two committee networks are fed separately with LBP images and raw images. After training the two committee networks, the maximum probability across the two networks is taken as the final output to classify the emotion. The proposed method was applied and tested on the FER2013 dataset. It is more accurate than many leading methods and, in competition with a successful model that has a more complex architecture and higher computational cost, achieves acceptable results with a simple architecture.
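The committee fusion described above (a per-class maximum over the two networks' outputs, followed by an argmax) reduces to a few lines; the 7-class probability vectors below are invented for illustration.

```python
import numpy as np

def fuse_max_probability(p_lbp, p_raw):
    """Committee fusion: take, per class, the maximum probability
    reported by the two networks, then pick the winning class."""
    fused = np.maximum(p_lbp, p_raw)
    return int(np.argmax(fused)), fused

# Illustrative softmax outputs over 7 emotion classes (as in FER2013).
p_lbp = np.array([0.05, 0.10, 0.05, 0.55, 0.10, 0.05, 0.10])  # LBP branch
p_raw = np.array([0.05, 0.70, 0.05, 0.05, 0.05, 0.05, 0.05])  # raw branch
label, fused = fuse_max_probability(p_lbp, p_raw)
```

Taking the elementwise maximum lets whichever branch is more confident about a class dominate the final decision, here the raw-image branch.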
Citations: 0
Adversarial Attack by Limited Point Cloud Surface Modifications
Pub Date : 2021-10-07 DOI: 10.1109/IPRIA59240.2023.10147168
Atrin Arya, Hanieh Naderi, S. Kasaei
Recent research has revealed that the security of deep neural networks that directly process 3D point clouds to classify objects can be threatened by adversarial samples. Although existing adversarial attack methods achieve high success rates, they do not restrict the point modifications enough to preserve the appearance of the point cloud. To overcome this shortcoming, two constraints are proposed: hard boundary constraints on the number of modified points and on the point perturbation norms. Due to the restrictive nature of the problem, the search space contains many local maxima. The proposed method addresses this issue by using a high step size at the beginning of the algorithm to search the main surface of the point cloud quickly and effectively; then, in order to converge to the desired output, the step size is gradually decreased. To evaluate its performance, the proposed method is run on the ModelNet40 and ScanObjectNN datasets with state-of-the-art point cloud classification models, including PointNet, PointNet++, and DGCNN. The results show that it performs successful attacks and achieves state-of-the-art results with only a limited number of point modifications while preserving the appearance of the point cloud. Moreover, thanks to the effective search algorithm, it can perform successful attacks in just a few steps. Additionally, the proposed step-size scheduling algorithm yields an improvement of up to 14.5% when adopted by other methods. The proposed method also performs effectively against popular defense methods.
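A hedged sketch of the constrained update loop: each step perturbs the cloud, then projects back so that at most k points are modified and no point moves farther than eps, while the step size decays from a large initial value. The random `direction` stands in for the attack gradient, and the values of k, eps, and the decay rate are illustrative choices, not the paper's settings.

```python
import numpy as np

def project(delta, k, eps):
    """Enforce the two hard constraints: at most `k` modified points,
    and each point's perturbation norm at most `eps`."""
    norms = np.linalg.norm(delta, axis=1)
    keep = np.argsort(norms)[-k:]            # keep the k largest moves
    mask = np.zeros(len(delta), dtype=bool)
    mask[keep] = True
    delta = delta * mask[:, None]
    norms = np.linalg.norm(delta, axis=1, keepdims=True)
    scale = np.minimum(1.0, eps / np.maximum(norms, 1e-12))
    return delta * scale                     # clip per-point norms

rng = np.random.default_rng(0)
cloud = rng.normal(size=(1024, 3))           # a point cloud
delta = np.zeros_like(cloud)
step = 0.5                                   # large initial step size...
for _ in range(20):
    direction = rng.normal(size=cloud.shape) # stand-in for a gradient
    delta = project(delta + step * direction, k=32, eps=0.05)
    step *= 0.8                              # ...gradually decreased
adversarial = cloud + delta
```

The projection after every step is what keeps the perturbation sparse and small, so the attacked cloud stays visually close to the original surface.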
Citations: 4