
IEEE Applied Imagery Pattern Recognition Workshop : [proceedings]. IEEE Applied Imagery Pattern Recognition Workshop — Latest Publications

Axon and Myelin Sheath Segmentation in Electron Microscopy Images using Meta Learning.
Nguyen P Nguyen, Stephanie Lopez, Catherine L Smith, Teresa E Lever, Nicole L Nichols, Filiz Bunyak

Various neurological diseases affect the morphology of myelinated axons. Quantitative analysis of these structures, and of the changes occurring due to neurodegeneration or neuroregeneration, is of great importance for characterizing disease state and treatment response. This paper proposes a robust, meta-learning-based pipeline for segmentation of axons and surrounding myelin sheaths in electron microscopy images. This is the first step towards computation of electron-microscopy-related biomarkers of hypoglossal nerve degeneration/regeneration. The segmentation task is challenging due to large variations in the morphology and texture of myelinated axons at different levels of degeneration, and to the very limited availability of annotated data. To overcome these difficulties, the proposed pipeline uses a meta-learning-based training strategy and a U-Net-like encoder-decoder deep neural network. Experiments on unseen test data collected at different magnification levels (i.e., trained on 500X and 1200X images, and tested on 250X and 2500X images) showed segmentation performance improved by 5% to 7% compared to a regularly trained, comparable deep learning network.
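The abstract does not name the specific meta-learning algorithm used. As a hedged illustration of the general training strategy, the sketch below runs a Reptile-style meta-update loop on toy regression "tasks" standing in for the different magnification levels; all function names and hyperparameters here are hypothetical, not taken from the paper.

```python
import numpy as np

def sgd_steps(w, X, y, lr=0.05, steps=10):
    """Inner loop: a few SGD steps on one task's data (least-squares toy model)."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def reptile(tasks, dim, meta_lr=0.5, meta_iters=200, seed=0):
    """Reptile meta-training: adapt a copy of the weights to a sampled task,
    then move the shared weights toward the adapted ones."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=dim)
    for _ in range(meta_iters):
        X, y = tasks[rng.integers(len(tasks))]  # sample a task
        w_adapted = sgd_steps(w.copy(), X, y)   # inner-loop adaptation
        w = w + meta_lr * (w_adapted - w)       # Reptile meta-update
    return w

# Two toy "tasks" sharing the same underlying weights plus observation noise,
# analogous to images of the same structures at different magnifications.
rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0])
tasks = []
for _ in range(2):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + 0.1 * rng.normal(size=50)
    tasks.append((X, y))

w_meta = reptile(tasks, dim=2)
```

In the paper the inner model is a U-Net-like encoder-decoder rather than this toy linear regressor, but the outer loop structure is the same: adapt per task, then interpolate the shared weights.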

DOI: 10.1109/aipr57179.2022.10092238 · IEEE Applied Imagery Pattern Recognition Workshop, vol. 2022 · Published 2022-10-01 · Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10197949/pdf/nihms-1895752.pdf
Citations: 1
Deep Learning-Based Cell Detection and Extraction in Thin Blood Smears for Malaria Diagnosis.
Deniz Kavzak Ufuktepe, Feng Yang, Yasmin M Kassim, Hang Yu, Richard J Maude, Kannappan Palaniappan, Stefan Jaeger

Malaria is a major health threat caused by Plasmodium parasites that infect the red blood cells. Two predominant types of Plasmodium parasites are Plasmodium vivax (P. vivax) and Plasmodium falciparum (P. falciparum). Diagnosis of malaria typically involves visual microscopy examination of blood smears for malaria parasites. This is a tedious, error-prone visual inspection task requiring microscopy expertise, which is often lacking in resource-poor settings. To address these problems, attempts have been made in recent years to automate malaria diagnosis using machine learning approaches. Several challenges need to be met for a machine learning approach to succeed in malaria diagnosis. Microscopy images acquired at different sites often vary in color, contrast, and consistency owing to different smear preparation and staining methods. Moreover, touching and overlapping cells complicate the red blood cell detection process, which can lead to inaccurate blood cell counts and thus incorrect parasitemia calculations. In this work, we propose a red blood cell detection and extraction framework to enable processing and analysis of single cells for follow-up processes such as counting infected cells or identifying parasite species in thin blood smears. The framework consists of two modules: a cell detection module and a cell extraction module. The cell detection module trains a modified Channel-wise Feature Pyramid Network for Medicine (CFPNet-M) deep learning network that takes the green channel of the image and the color-deconvolution processed image as inputs, and learns a truncated distance transform image of the cell annotations. CFPNet-M is chosen for its low resource requirements, while the distance transform allows more accurate cell counts for dense cells. Once the cells are detected by the network, the cell extraction module extracts single cells from the original image and counts the number of cells. Our preliminary results based on 193 patients (including 148 P. falciparum-infected patients and 45 uninfected patients) show that our framework achieves a cell count accuracy of 92.2%.
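The truncated distance transform target and the count-by-labeling idea can be sketched as follows. This is a minimal illustration on synthetic disk-shaped "cells"; the CFPNet-M network, its two-channel input, and its training are not reproduced here, and the cap and threshold values are hypothetical.

```python
import numpy as np
from scipy import ndimage

def truncated_distance_transform(mask, cap=5.0):
    """Distance-to-background inside each cell, clipped at `cap`.
    Capping keeps the regression target bounded, so dense touching cells
    still produce one distinct interior peak per cell."""
    dist = ndimage.distance_transform_edt(mask)
    return np.minimum(dist, cap)

def count_cells(dist_map, threshold=1.0):
    """Count cells by labeling connected regions of the (predicted) distance
    map above a threshold; touching cells separate where the distance values
    dip near their shared boundary."""
    labeled, num = ndimage.label(dist_map > threshold)
    return num

# Toy example: two disk-shaped "cells" on a 40x40 grid.
yy, xx = np.mgrid[:40, :40]
mask = ((yy - 12) ** 2 + (xx - 12) ** 2 < 36) | ((yy - 28) ** 2 + (xx - 28) ** 2 < 36)
dmap = truncated_distance_transform(mask)
print(count_cells(dmap))  # prints 2
```

Regressing a distance map instead of a binary mask is what lets the pipeline recover one count per cell even when binary masks of neighboring cells would merge.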

DOI: 10.1109/AIPR52630.2021.9762109 · IEEE Applied Imagery Pattern Recognition Workshop, vol. 2021, p. 9762109 · Published 2021-04-26 · Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7613898/pdf/
Citations: 0
Patch-Based Semantic Segmentation for Detecting Arterioles and Venules in Epifluorescence Imagery.
Yasmin M Kassim, Olga V Glinskii, Vladislav V Glinsky, Virginia H Huxley, Kannappan Palaniappan

Segmentation and quantification of microvasculature structures are the main steps toward studying microvasculature remodeling. The proposed patch-based semantic architecture enables accurate segmentation of challenging epifluorescence microscopy images. Our fast, pixel-based semantic network is trained on random patches from different epifluorescence images to learn how to discriminate vessel from non-vessel pixels. The proposed semantic vessel network (SVNet) relies on understanding the morphological structure of thin vessels in the patches, rather than taking the whole image as input, to speed up the training process and maintain the clarity of thin structures. Experimental results on our ovariectomized (OVX; ovary-removed) mice dura mater epifluorescence microscopy images show promising results on both the arteriole and venule parts. We compared our results with different segmentation methods, such as local and global thresholding, matched-filter-based approaches, and related state-of-the-art deep learning networks. Our overall accuracy (> 98%) outperforms all of these methods, including our previous work (VNet) [1].
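The random-patch training setup described above can be sketched as a simple sampler. The patch size and counts below are hypothetical, and SVNet itself is not reproduced; this only shows how (image, mask) patch pairs would be drawn so that thin vessel structures dominate each training example.

```python
import numpy as np

def sample_patches(image, mask, patch_size=64, n_patches=16, seed=0):
    """Sample random aligned (patch, label-patch) pairs from one image.
    Training on small patches rather than whole images keeps thin
    structures prominent within each example."""
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    patches, labels = [], []
    for _ in range(n_patches):
        y = rng.integers(0, h - patch_size + 1)
        x = rng.integers(0, w - patch_size + 1)
        patches.append(image[y:y + patch_size, x:x + patch_size])
        labels.append(mask[y:y + patch_size, x:x + patch_size])
    return np.stack(patches), np.stack(labels)

# Synthetic stand-ins for an epifluorescence image and its vessel mask.
img = np.random.rand(256, 256)
msk = (img > 0.9).astype(np.uint8)
X, Y = sample_patches(img, msk)
print(X.shape, Y.shape)  # (16, 64, 64) (16, 64, 64)
```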

DOI: 10.1109/aipr.2018.8707387 · IEEE Applied Imagery Pattern Recognition Workshop, vol. 2018 · Published 2018-10-01
Citations: 1
The National Library of Medicine Pill Image Recognition Challenge: An Initial Report.
Ziv Yaniv, Jessica Faruque, Sally Howe, Kathel Dunn, David Sharlip, Andrew Bond, Pablo Perillan, Olivier Bodenreider, Michael J Ackerman, Terry S Yoo

In January 2016 the U.S. National Library of Medicine announced a challenge competition calling for the development and discovery of high-quality algorithms and software that rank how well consumer images of prescription pills match reference images of pills in its authoritative RxIMAGE collection. The challenge was motivated by the need for both healthcare personnel and the general public to easily identify unknown prescription pills. Potential benefits of this capability include confirmation of a pill in settings where the documentation and medication have been separated, such as in a disaster or emergency, and confirmation of a pill when the prescribed medication changes from brand to generic, or when the shape and color of the pill change for any other reason. The data for the competition consisted of two types of images: high-quality macro photographs (the reference images), and consumer-quality photographs of the quality we expect users of a proposed application to acquire. A training dataset consisting of 2000 reference images and 5000 corresponding consumer-quality images acquired from 1000 pills was provided to challenge participants. A second dataset acquired from 1000 pills with similar distributions of shape and color was reserved as a segregated testing set. Challenge submissions were required to produce a ranking of the reference images, given a consumer-quality image as input. The winning teams were determined using the mean average precision quality metric, with the three winners obtaining mean average precision scores of 0.27, 0.09, and 0.08. In the retrieval results, the correct image was among the top five ranked images 43%, 12%, and 11% of the time, respectively, out of 5000 query/consumer images. This is a promising initial step towards development of an NLM software system and application programming interface facilitating pill identification. The training dataset will continue to be freely available online at: http://pir.nlm.nih.gov/challenge/submission.html.
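The mean average precision metric used to score submissions can be computed as below. This is the standard retrieval formulation (average the precision at each rank where a relevant reference image appears, then average over queries); the challenge's exact evaluation script is not reproduced, and the example IDs are hypothetical.

```python
def average_precision(ranked_ids, relevant_ids):
    """AP for one query: mean of precision@k over the ranks k at which a
    relevant reference image appears, normalized by the number of
    relevant images."""
    relevant = set(relevant_ids)
    hits, precisions = 0, []
    for k, rid in enumerate(ranked_ids, start=1):
        if rid in relevant:
            hits += 1
            precisions.append(hits / k)
    return sum(precisions) / len(relevant) if relevant else 0.0

def mean_average_precision(all_rankings, all_relevant):
    """MAP: mean of per-query AP over all consumer-image queries."""
    return sum(average_precision(r, rel)
               for r, rel in zip(all_rankings, all_relevant)) / len(all_rankings)

# One query whose two relevant references (e.g. front/back photos of the
# same pill) land at ranks 1 and 3: AP = (1/1 + 2/3) / 2 ≈ 0.833.
ap = average_precision(["a", "x", "b", "y"], ["a", "b"])
print(round(ap, 3))
```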

DOI: 10.1109/AIPR.2016.8010584 · IEEE Applied Imagery Pattern Recognition Workshop, vol. 2016 · Published 2016-10-01
Citations: 21
Confocal Vessel Structure Segmentation with Optimized Feature Bank and Random Forests.
Yasmin M Kassim, V B Surya Prasath, Olga V Glinskii, Vladislav V Glinsky, Virginia H Huxley, Kannappan Palaniappan

In this paper, we consider confocal microscopy based vessel segmentation with optimized features and random forest classification. By utilizing multi-scale, vessel-specific features tuned to capture curvilinear structures — such as the Frobenius norm of the Hessian eigenvalues, the Laplacian of Gaussian (LoG), oriented second derivatives, a line detector, and intensity masked with the LoG scale map — we obtain better segmentation results in challenging imaging conditions. We obtain binary segmentations using a random forest classifier trained on physiologist-marked ground truth. Experimental results on mice dura mater confocal microscopy vessel segmentations indicate that we obtain better results compared to global segmentation approaches.
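Two of the named features can be sketched with SciPy's Gaussian-derivative filters. Note that for a symmetric 2x2 Hessian, the Frobenius norm equals the root sum of squares of its eigenvalues, so no explicit eigendecomposition is needed. The scale set below is hypothetical, and the paper's full optimized bank (oriented second derivatives, line detector, LoG-scale-masked intensity) is larger than this sketch.

```python
import numpy as np
from scipy import ndimage

def hessian_frobenius(img, sigma):
    """Frobenius norm of the 2D Hessian at scale sigma; for a symmetric
    matrix this equals sqrt(lambda1^2 + lambda2^2), a response that is
    strong on curvilinear (vessel-like) structures."""
    Hxx = ndimage.gaussian_filter(img, sigma, order=(0, 2))
    Hyy = ndimage.gaussian_filter(img, sigma, order=(2, 0))
    Hxy = ndimage.gaussian_filter(img, sigma, order=(1, 1))
    return np.sqrt(Hxx**2 + Hyy**2 + 2 * Hxy**2)

def feature_bank(img, sigmas=(1, 2, 4)):
    """Stack per-pixel features: raw intensity plus multi-scale LoG and
    Hessian Frobenius norm. Each pixel's feature vector would then feed
    a random forest classifier (vessel vs. background)."""
    feats = [img]
    for s in sigmas:
        feats.append(ndimage.gaussian_laplace(img, s))  # LoG at scale s
        feats.append(hessian_frobenius(img, s))
    return np.stack(feats, axis=-1)

# Synthetic horizontal "vessel" on a 64x64 image.
img = np.zeros((64, 64))
img[30:34, :] = 1.0
F = feature_bank(img)
print(F.shape)  # (64, 64, 7)
```

From here, `sklearn.ensemble.RandomForestClassifier` fit on `F.reshape(-1, 7)` against per-pixel labels would complete the pipeline the abstract describes.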

DOI: 10.1109/AIPR.2016.8010580 · IEEE Applied Imagery Pattern Recognition Workshop, vol. 2016 · Published 2016-10-01
Citations: 7