
2017 10th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI): Latest Publications

Estimation of multiple fiber orientations using nonconvex regularized spherical deconvolution
C. Chu, Zi-Xiang Kuai, Yuemin M. Zhu
In diffusion magnetic resonance imaging, fiber tractography generally requires the estimation of intravoxel multiple fiber orientations (MFOs) with high accuracy and reliability. In general, spherical deconvolution (SD) based methods have many advantages for MFO estimation. However, these methods are poorly robust to noise. To cope with this problem, regularization techniques were introduced into SD-based methods to reduce noise artifacts. However, the regularizers were often defined as convex functions to keep the model easier to solve, which limits their regularization effect. In this work, we introduce a nonconvex regularizer into the Richardson-Lucy based SD framework for estimating MFOs. The results on synthetic phantom and physical phantom images demonstrate that the proposed method is superior to existing SD-based methods in terms of mean angular error, edge preservation and computation time.
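For orientation, the abstract builds on a Richardson-Lucy (RL) spherical-deconvolution update applied to a linear forward model s ≈ H f. The NumPy sketch below shows only the plain, unregularized RL iteration on a hypothetical response matrix H; the paper's nonconvex regularizer and the construction of H from real diffusion gradients are not reproduced here.

```python
import numpy as np

def richardson_lucy_sd(signal, H, n_iter=200, eps=1e-10):
    """Plain Richardson-Lucy deconvolution of a dMRI signal.

    signal : (n_gradients,) measured diffusion signal (non-negative).
    H      : (n_gradients, n_orientations) response matrix; column j is the
             single-fiber response for candidate orientation j.
    Returns a non-negative fiber orientation distribution (FOD) estimate.
    """
    fod = np.ones(H.shape[1]) / H.shape[1]      # flat initial FOD
    col_sums = H.sum(axis=0) + eps              # normalization term H^T 1
    for _ in range(n_iter):
        prediction = H @ fod + eps              # forward model s_hat = H f
        fod *= (H.T @ (signal / prediction)) / col_sums   # multiplicative update
    return fod

# Toy example: a random non-negative matrix stands in for a real single-fiber
# response sampled over gradient directions and candidate orientations.
rng = np.random.default_rng(0)
H = np.abs(rng.normal(size=(64, 300)))
true_fod = np.zeros(300)
true_fod[[40, 200]] = 0.5                        # two simulated fiber populations
s = H @ true_fod
fod = richardson_lucy_sd(s, H)
print(np.argsort(fod)[-2:])   # indices of the two largest peaks (ideally 40 and 200)
```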
{"title":"Estimation of multiple fiber orientations using nonconvex regularized spherical deconvolution","authors":"C. Chu, Zi-Xiang Kuai, Yuemin M. Zhu","doi":"10.1109/CISP-BMEI.2017.8302190","DOIUrl":"https://doi.org/10.1109/CISP-BMEI.2017.8302190","url":null,"abstract":"In diffusion magnetic resonance imaging, the fiber tractography generally desires the estimation of intravoxel multiple fiber orientations (MFOs) with high accuracy and reliability. In general, spherical deconvolution (SD) based methods have many advantages for MFOs estimation. However, these methods are lowly immune to noise. To cope with this problem, regularization techniques were introduced in SD-based methods to reduce noise artifacts. But, the regularizers were often defined as a convex function to make the model resolving simpler, which limits their effect of regularization. In this work, we introduce a nonconvex regularizer in the Richardson-Lucy based SD framework for estimating MFOs. The results on synthetic phantom and physical phantom images demonstrate that the proposed method is superior to existing SD-based methods in terms of mean angular errors, edge preservation and computation time.","PeriodicalId":6474,"journal":{"name":"2017 10th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI)","volume":"26 1","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89296377","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Research on image retrieval based on the convolutional neural network
Chaoyi Chen, Xiaoqi Li, Bin Zhang
The development of the Internet has led to the accumulation of a large number of images in various databases. People are eager to find useful information in these databases, which stimulates the development of image retrieval technologies. In this paper, we mainly study image retrieval based on the convolutional neural network. The study is divided into four parts that explore the characteristics of convolutional neural networks used in image retrieval. The first part introduces the structure of the convolutional neural network and the method of extracting features from images. The second part compares the effects of different similarity measures on retrieval accuracy. The third part studies how to speed up retrieval: we use PCA to reduce the feature dimensions, draw a line chart of dimension versus accuracy, and analyze why the accuracy first rises and then falls as the dimension changes. The fourth part studies how to increase retrieval accuracy: we compare the retrieval accuracy before and after fine-tuning and analyze the reasons for the difference. In the end, we sum up the whole text and summarize the key points to consider when designing an image retrieval system based on the convolutional neural network.
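The retrieval pipeline described above (CNN features, a similarity measure, optional PCA reduction) can be illustrated with a small scikit-learn sketch. The random vectors below stand in for CNN activations and the dimensions are illustrative only; nothing here reproduces the paper's network or dataset.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.metrics.pairwise import cosine_similarity, euclidean_distances

rng = np.random.default_rng(0)
gallery = rng.normal(size=(1000, 4096))             # stand-in for fc-layer CNN features
query = gallery[42] + 0.1 * rng.normal(size=4096)    # a noisy copy of image 42

# Two similarity measures on the full features
rank_euclid = np.argsort(euclidean_distances(query[None, :], gallery)[0])
rank_cosine = np.argsort(-cosine_similarity(query[None, :], gallery)[0])
print("top-1 (Euclidean):", rank_euclid[0], " top-1 (cosine):", rank_cosine[0])

# PCA to speed up retrieval: project gallery and query to fewer dimensions
pca = PCA(n_components=128).fit(gallery)
gallery_low = pca.transform(gallery)
query_low = pca.transform(query[None, :])
rank_low = np.argsort(euclidean_distances(query_low, gallery_low)[0])
print("top-1 after PCA-128:", rank_low[0])
```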
{"title":"Research on image retrieval based on the convolutional neural network","authors":"Chaoyi Chen, Xiaoqi Li, Bin Zhang","doi":"10.1109/CISP-BMEI.2017.8301988","DOIUrl":"https://doi.org/10.1109/CISP-BMEI.2017.8301988","url":null,"abstract":"The development of the Internet has led to the accumulation of a large number of images in various databases. People are eager to find useful information in these databases which stimulate the development of image retrieval technologies. In this paper, we mainly study image retrieval based on the convolutional neural network. The study is divided into four parts to explore characteristics of convolution neural networks used in image retrieval. The first part introduces the structure of the convolutional neural network and the method of extracting features from images. The second part compares the effects of different similarity measures on retrieval accuracy. The third part studies the way to speed up retrieval. We use PCA to reduce feature dimensions and draw a line chart of dimension and accuracy. Then we analyze the reason why the change of accuracy rate is divided into two stages: ascending first and descending later. The fourth part studies the way to increase retrieval accuracy. We compare the retrieval accuracy before and after fine-tuning and analyze the reasons for that. In the end, we sum up the whole text and summarize key points that we should consider when designing an image retrieval system based on the convolutional neural network.","PeriodicalId":6474,"journal":{"name":"2017 10th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI)","volume":"19 1","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89556541","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Remote sensing image categorization with domain adaptation-based convolution neural network
Yiyou Guo, H. Huo, T. Fang
With the increasing application of high-resolution remote sensing images, image categorization is becoming a more and more important technique. Recently, the Convolution Neural Network (CNN) has been widely used in various computer vision tasks, for instance, generic image recognition, object detection and image segmentation. A key factor that influences the performance of a CNN is the quantity of training images. However, it is hard to obtain large amounts of high-quality, high-resolution images, and domain adaptation can be adopted to address this issue. As a result, in this work, we apply a domain adaptation-based CNN to the high-resolution image classification task. Experiments are carried out on a recent large remote sensing image benchmark dataset. Extensive results prove the effectiveness of the proposed model.
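One common form of domain adaptation for a CNN is to reuse weights trained on a source domain (e.g. ImageNet) and fine-tune only the later layers plus a new classification head on the target domain. The PyTorch sketch below shows that pattern; the backbone choice, freezing policy and number of classes are assumptions for illustration, not the authors' exact configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

num_classes = 45                      # illustrative number of scene categories

# Start from an ImageNet-pretrained backbone (source domain)
model = models.resnet18(pretrained=True)

# Freeze early layers; only adapt the last residual block and the new head
for name, param in model.named_parameters():
    param.requires_grad = name.startswith("layer4")

# Replace the classifier for the target (remote sensing) label space
model.fc = nn.Linear(model.fc.in_features, num_classes)

optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch (replace with a real DataLoader)
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, num_classes, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(float(loss))
```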
{"title":"Remote sensing image categorization with domain adaptation-based convolution neural network","authors":"Yiyou Guo, H. Huo, T. Fang","doi":"10.1109/CISP-BMEI.2017.8302032","DOIUrl":"https://doi.org/10.1109/CISP-BMEI.2017.8302032","url":null,"abstract":"With the increasing application of high-resolution remote sensing image, image categorization becomes a more and more important technique. Recently, Convolution Neural Network (CNN) has been widely used in various computer vision tasks, for instance, generic image recognition, object detection and image segmentation. A key factor which influences the performance of CNN is the large quantity of the training images. However, it is hard to obtain large amounts of high-resolution quality images while domain adaptation can be adopted in solving this issue. As a result, in this work, we exploit domain adaptation-based CNN into high-resolution image classification task. Experiments are carried out on a latest large remote sensing image benchmark dataset. Extensive results prove the effectiveness of the proposed model.","PeriodicalId":6474,"journal":{"name":"2017 10th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI)","volume":"1 1","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86754127","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
CNNs based multi-modality classification for AD diagnosis
D. Cheng, Manhua Liu
Accurate and early diagnosis of Alzheimer's disease (AD) plays a significant role in patient care and the development of future treatments. Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET) neuroimages are effective modalities that can help physicians to diagnose AD. In the past few years, machine-learning algorithms have been widely studied for the analysis of multi-modality neuroimages in quantitative evaluation and computer-aided diagnosis (CAD) of AD. Most existing methods extract hand-crafted features after image preprocessing such as registration, segmentation and feature extraction, and then train a classifier to distinguish AD from other groups. This paper proposes to construct multi-level convolutional neural networks (CNNs) to gradually learn and combine multi-modality features for AD classification using MRI and PET images. First, deep 3D-CNNs are constructed to transform the whole-brain information into compact high-level features for each modality. Then, a 2D CNN is cascaded to ensemble the high-level features for image classification. The proposed method can automatically learn generic features from MRI and PET imaging data for AD classification; no rigid image registration or segmentation is performed on the brain images. Our proposed method is evaluated on the baseline MRI and PET images of 193 subjects from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database, including 93 Alzheimer's disease (AD) subjects and 100 normal control (NC) subjects. Experimental results and comparison show that the proposed method achieves an accuracy of 89.64% for classification of AD vs. NC, demonstrating promising classification performance.
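To make the multi-modality idea concrete, here is a much-simplified PyTorch sketch: one small 3D CNN branch per modality (MRI, PET), with the branch outputs concatenated and classified. The layer sizes are arbitrary, and the fusion head is a plain fully connected layer rather than the cascaded 2D CNN described in the abstract.

```python
import torch
import torch.nn as nn

class Branch3D(nn.Module):
    """A tiny 3D CNN that maps one modality volume to a feature vector."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.AdaptiveAvgPool3d(1),             # global pooling -> (B, 16, 1, 1, 1)
        )
        self.fc = nn.Linear(16, feat_dim)

    def forward(self, x):
        return self.fc(self.net(x).flatten(1))

class MultiModalityNet(nn.Module):
    """MRI branch + PET branch, concatenated and classified (AD vs. NC)."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.mri_branch = Branch3D()
        self.pet_branch = Branch3D()
        self.classifier = nn.Linear(64 * 2, n_classes)

    def forward(self, mri, pet):
        feats = torch.cat([self.mri_branch(mri), self.pet_branch(pet)], dim=1)
        return self.classifier(feats)

# Dummy volumes: batch of 2, single channel, 32^3 voxels (real scans are larger)
model = MultiModalityNet()
logits = model(torch.randn(2, 1, 32, 32, 32), torch.randn(2, 1, 32, 32, 32))
print(logits.shape)   # torch.Size([2, 2])
```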
{"title":"CNNs based multi-modality classification for AD diagnosis","authors":"D. Cheng, Manhua Liu","doi":"10.1109/CISP-BMEI.2017.8302281","DOIUrl":"https://doi.org/10.1109/CISP-BMEI.2017.8302281","url":null,"abstract":"Accurate and early diagnosis of Alzheimer's disease (AD) plays a significant part for the patient care and development of future treatment. Magnetic Resonance Image (MRI) and Positron Emission Tomography (PET) neuroimages are effective modalities that can help physicians to diagnose AD. In past few years, machine-learning algorithm have been widely studied on the analyses for multi-modality neuroimages in quantitation evaluation and computer-aided-diagnosis (CAD) of AD. Most existing methods extract the hand-craft features after image preprocessing such as registration, segmentation and feature extraction, and then train a classifier to distinguish AD from other groups. This paper proposes to construct multi-level convolutional neural networks (CNNs) to gradually learn and combine the multi-modality features for AD classification using MRI and PET images. First, the deep 3D-CNNs are constructed to transform the whole brain information into compact high-level features for each modality. Then, a 2D CNNs is cascaded to ensemble the high-level features for image classification. The proposed method can automatically learn the generic features from MRI and PET imaging data for AD classification. No rigid image registration and segmentation are performed on the brain images. Our proposed method is evaluated on the baseline MRI and PET images from Alzheimer's Disease Neuroimaging Initiative (ADNI) database on 193 subjects including 93 Alzheimer's disease (AD) subjects and 100 normal controls (NC) subjects. Experimental results and comparison show that the proposed method achieves an accuracy of 89.64% for classification of AD vs. NC, demonstrating the promising classification performance.","PeriodicalId":6474,"journal":{"name":"2017 10th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI)","volume":"26 1","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86773303","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 57
Performance evaluation of frequent pattern mining algorithms using web log data for web usage mining
Yonas Gashaw, Fang Liu
In today's information era, the Internet is a powerful platform and data repository that plays a great role in storing, sharing, and retrieving information for knowledge discovery. However, because the data is vast, dynamic, and growing significantly, web users face big problems in finding the relevant information they require. Consequently, information precision and retrieval are among the hottest recent research areas in today's world. Despite the volume of information residing on the web, valuable informative knowledge can be discovered with the application of advanced data mining techniques. Association rule mining, as a technique in data mining, is one way to discover frequent patterns from various data sources. In this paper, three of the foremost association rule mining algorithms used for frequent pattern discovery, namely Eclat, Apriori, and FP-Growth, are examined on three sets of transactional databases devised from a server access log file. The comparison is made in both execution time and memory usage. Unlike most previous research works, the findings in this paper reveal that each of the algorithms has its own appropriateness and specificities and can best fit depending on the data size and support parameter thresholds.
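As a quick reference for one of the three algorithms compared above, here is a minimal pure-Python Apriori sketch (support counting by repeated database scans); Eclat and FP-Growth differ mainly in the data structures used for counting. The toy "session" transactions are illustrative, not web-log data.

```python
def apriori(transactions, min_support):
    """Return {frequent itemset: support} for all itemsets whose support
    (fraction of transactions containing them) is >= min_support."""
    transactions = [frozenset(t) for t in transactions]
    n = len(transactions)

    def support(itemset):
        return sum(itemset <= t for t in transactions) / n

    items = {i for t in transactions for i in t}
    frequent = {s for s in (frozenset([i]) for i in items) if support(s) >= min_support}
    result = {s: support(s) for s in frequent}
    k = 2
    while frequent:
        # Join step: unions of frequent (k-1)-itemsets that have exactly k items
        candidates = {a | b for a in frequent for b in frequent if len(a | b) == k}
        frequent = {c for c in candidates if support(c) >= min_support}
        result.update({c: support(c) for c in frequent})
        k += 1
    return result

# Toy "web session" transactions; in the paper the databases come from server logs
sessions = [{"home", "search", "cart"}, {"home", "search"},
            {"home", "cart", "pay"}, {"search", "cart"}]
for itemset, sup in sorted(apriori(sessions, 0.5).items(), key=lambda kv: -kv[1]):
    print(sorted(itemset), sup)
```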
{"title":"Performance evaluation of frequent pattern mining algorithms using web log data for web usage mining","authors":"Yonas Gashaw, Fang Liu","doi":"10.1109/CISP-BMEI.2017.8302317","DOIUrl":"https://doi.org/10.1109/CISP-BMEI.2017.8302317","url":null,"abstract":"In today's information era, the Internet is a powerful platform as the data repository that plays a great role in storing, sharing, and retrieve information for knowledge discovery. However, as there are countless, dynamic, and significant growth of data, web users face big problems in terms of the relevant information required. Consequently, poor information precision and retrieval are part of the hottest recent research areas in today's world. Despite the voluminous of information resided on the web, valuable informative knowledge could possibly be discovered with the application of advanced data mining techniques. Association rule mining, as a technique in data mining, is one way to discover frequent patterns from various data sources. In this paper, three of the foremost association rule mining algorithms used for frequent pattern discovering namely, Eclat, Apriori, and FP-Growth examined on three sets of transactional databases devised from server access log file. The comparison is made both in execution time and memory usage aspects. Unlike most previous research works, findings, in this paper, reveal that each of the algorithms has their own appropriateness and specificities that can best fit depending on the data size and support parameter thresholds.","PeriodicalId":6474,"journal":{"name":"2017 10th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI)","volume":"43 1","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85884300","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
Statolith-based species identification methods for ommastrephidae species
Z. Fang, Xinjun Chen
Statoliths are a pair of calcareous structures which can provide biological and ecological information about cephalopods. Understanding their shape will help us understand the taxonomy of cephalopods, even to the species level. Ommastrephes bartramii, Dosidicus gigas and Illex argentinus were chosen for a comparison of statolith shape as a means of species identification because of their ecological importance to marine ecosystems. The results show that D. gigas has a relatively large statolith and I. argentinus has the smallest. The four main parts of the statolith (dorsal dome, lateral dome, rostrum and wing) in the different species were diverse and distinguishable. The traditional method effectively separated the statoliths of the three species with high classification rates (92.0%–100%) using six morphology variables. The outline method produced a relatively low classification rate (73.4%–94.2%) using six harmonic numbers and stepwise discriminant analysis (SDA). The results of this study demonstrate that the traditional method achieves better performance when the species are not closely related phylogenetically, and the outline method is more suitable for statolith identification at the genus level. It is necessary to compare other cephalopod statoliths by different methods and find a suitable one.
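The "traditional method" above classifies species from six statolith morphology variables with a discriminant analysis. The scikit-learn sketch below shows that kind of pipeline; the synthetic measurements and the class separation are invented purely to make the snippet runnable, not the paper's data.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
species = ["O. bartramii", "D. gigas", "I. argentinus"]

# Six hypothetical morphometric variables per statolith (e.g. total length,
# dorsal dome length, lateral dome length, rostrum length, wing length, width)
X = np.vstack([rng.normal(loc=mu, scale=1.0, size=(50, 6))
               for mu in ([10.0, 4.0, 3.0, 2.5, 3.5, 5.0],    # O. bartramii
                          [13.0, 5.0, 3.8, 3.0, 4.5, 6.5],    # D. gigas (largest)
                          [8.0, 3.2, 2.4, 2.0, 2.8, 4.0])])   # I. argentinus (smallest)
y = np.repeat(species, 50)

clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated classification rate:", scores.mean().round(3))
```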
{"title":"Statolith-based species identification methods for ommastrephidae species","authors":"Z. Fang, Xinjun Chen","doi":"10.1109/CISP-BMEI.2017.8302015","DOIUrl":"https://doi.org/10.1109/CISP-BMEI.2017.8302015","url":null,"abstract":"Statoliths are a pair of calcareous structures which can provide biological and ecological information for cephalopods. Understanding their shape will help us know the taxonomy of cephalopods, even to the species level. Ommastrephes bartramii, Dosidicus gigas and Illex argentinus are chosen to compare the shape of their statolith as a means of species identification because of their ecological importance to the marine ecosystems. The results show that D. gigas has a relatively large sized statolith and I. argentinus has the smallest. The four main parts of the statolith (dorsal dome, lateral dome, rostrum and wing) in the different species were diverse and distinguishable. The traditional method effectively separated the three species of statolith with high classification rates (92.0%–100%) by six morphology variables. The outline method produced a relatively low classification rate (73.4%–94.2%) using six harmonic numbers and stepwise discriminant analysis (SDA). The result in this study demonstrates that traditional method would achieve a better performance when the species are not so closely related phylogenetically, and outline method is more suitable for the statolith identification at the genus level. It is necessary to compare other cephalopod statoliths by different methods and find a suitable one.","PeriodicalId":6474,"journal":{"name":"2017 10th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI)","volume":"34 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86379579","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Fusion algorithm of infrared and visible images based on joint bilateral filter
Hua Cai, Guangqiu Chen, Zhi Liu, Z. Geng
In view of the weak correlation between infrared (IFR) and visible (VI) images, a fusion methodology using a joint bilateral filter (JBF) in the domain of the non-subsampled contourlet transform (NSCT) is put forward. First, the images to be fused are divided into sub-bands by the NSCT. Then, the local window energy and the coefficient absolute value are regarded as the activity measures of the approximate and detail sub-bands, respectively. Decision maps are obtained by selecting the maximum activity measure. The source images are regarded as the guide images and the decision maps are used as input images in the JBF. After the filtering operation by the JBF, the output images are treated as weight maps. The sub-band coefficients are fused by a weighted average algorithm. Finally, the fused sub-bands are composed into a fused image by the inverse NSCT. Experiments on IFR and VI images are carried out. For the assessment of the fusion results, subjective and objective assessment methods are adopted. The results show that the proposed methodology achieves better performance than some classical fusion methods from the published literature.
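The core fusion rule above (an activity-based decision map, smoothed by a joint bilateral filter guided by the source image, then a weighted average of sub-band coefficients) can be illustrated without the NSCT stage. The NumPy sketch below applies that rule to a single toy "sub-band" pair with a naive joint bilateral filter; the filter parameters and the use of random arrays in place of NSCT coefficients are assumptions made only for illustration.

```python
import numpy as np

def joint_bilateral_filter(guide, src, radius=3, sigma_s=2.0, sigma_r=0.5):
    """Naive joint bilateral filter: smooth `src` with spatial weights from pixel
    distance and range weights computed from the `guide` image (float arrays)."""
    h, w = src.shape
    guide_p = np.pad(guide, radius, mode="reflect")
    src_p = np.pad(src, radius, mode="reflect")
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    out = np.empty_like(src)
    for i in range(h):
        for j in range(w):
            g_win = guide_p[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            s_win = src_p[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            range_w = np.exp(-((g_win - guide[i, j]) ** 2) / (2 * sigma_r**2))
            weights = spatial * range_w
            out[i, j] = (weights * s_win).sum() / weights.sum()
    return out

# Toy fusion of one "sub-band" pair: binary decision map (keep the coefficient with
# larger absolute value), smoothed into a soft weight map by the JBF, then averaged.
rng = np.random.default_rng(0)
ir_band, vis_band = rng.normal(size=(2, 64, 64))
decision = (np.abs(ir_band) > np.abs(vis_band)).astype(float)
weight = joint_bilateral_filter(guide=ir_band, src=decision)
fused_band = weight * ir_band + (1.0 - weight) * vis_band
print(fused_band.shape)
```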
{"title":"Fusion algorithm of infrared and visible images based on joint bilateral filter","authors":"Hua Cai, Guangqiu Chen, Zhi Liu, Z. Geng","doi":"10.1109/CISP-BMEI.2017.8302023","DOIUrl":"https://doi.org/10.1109/CISP-BMEI.2017.8302023","url":null,"abstract":"In view of the problem of the less correlative infrared(IFR) and visible(VI) images, a fusion methodology using joint bilateral filter (JBF) in the domain of non-subsampled contourlet transform (NSCT) is put forword. First, the images to be fused are divided into some sub-bands by NSCT. Then, the local window energy and the coefficient absolute value is regarded as the activity measure of approximate and detail sub-bands respectively. Decision maps are obtained by selecting max activity measure. Source images are regarded as the guided images and decision maps are used as input images in JBF. After filtering operation by JBF, the output images are treated as weight maps. The sub-band coefficients are fused by weighted average algorithm. Finally, the fused sub-bands are composed into a fused image by the inverse NSCT. The experiments on the IFR and VI image are carried out. For the assessment of fusion results, subjective and objective assessment methods are adopted. The results show that the proposed methodology can get better performance than some classical fusion method existed in the published literatures.","PeriodicalId":6474,"journal":{"name":"2017 10th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI)","volume":"4 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78917270","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
A projection matrix-based geometric calibration algorithm in CBCT system
Juan Gao, Yunsong Qi, Changhui Jiang, N. Zhang, Zhanli Hu
Accurate geometric parameter information for cone-beam CT (CBCT) systems is crucial for high-quality image reconstruction. To validate the performance of the algorithms for geometric parameter extraction and projection matrix computation, this paper presents calibration test methods based on a computer simulation. We simulate the projection operation on a calibration phantom using Visual C++ and obtain the centers of the projection images through an approach based on least squares and a genetic algorithm using Matlab programs. To verify the performance of the presented geometric calibration algorithm for projection matrix computation and geometric parameter extraction, a CBCT system consisting of a flat-panel detector is simulated and the uncalibrated reconstructed image is compared with the reconstructed images of the calibration method in this paper. The geometric parameters extracted from the calculated projection matrix are very close to the input values. Compared with the uncorrected reconstructed image, the corrected reconstructed image significantly reduces many artifacts. Experimental results reveal that the presented method is robust and accurate, and can suppress undesirable artifacts in reconstructed images caused by misaligned scanner geometry.
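For the "geometric parameter extraction from the projection matrix" step, a standard way to recover intrinsics and extrinsics from a 3x4 projection matrix is an RQ decomposition of its left 3x3 block. The SciPy sketch below demonstrates that decomposition on a synthetic matrix; it is a generic pinhole/cone-beam decomposition, not necessarily the exact parameterization used in the paper.

```python
import numpy as np
from scipy.linalg import rq

def decompose_projection(P):
    """Split a 3x4 projection matrix P ~ K [R | -R C] into intrinsics K,
    rotation R and the source/camera centre C (the overall scale ambiguity is
    removed by normalizing K so that K[2, 2] = 1)."""
    M = P[:, :3]
    K, R = rq(M)                            # M = K R, with K upper triangular
    signs = np.sign(np.diag(K))             # make the diagonal of K positive
    K = K * signs[None, :]                  # flip columns of K ...
    R = R * signs[:, None]                  # ... and matching rows of R (signs^2 = 1)
    C = -np.linalg.solve(M, P[:, 3])        # source/camera centre, scale-invariant
    return K / K[2, 2], R, C

# Synthetic geometry: 1000 px focal length, detector centre at (256, 256), 30 deg gantry angle
K_true = np.array([[1000.0, 0.0, 256.0], [0.0, 1000.0, 256.0], [0.0, 0.0, 1.0]])
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
C_true = np.array([100.0, -50.0, 750.0])
P = K_true @ np.hstack([R_true, (-R_true @ C_true)[:, None]])

K, R, C = decompose_projection(3.7 * P)     # an arbitrary overall scale does not matter
print(np.allclose(K, K_true), np.allclose(R, R_true), np.allclose(C, C_true))
```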
{"title":"A projection matrix-based geometric calibration algorithm in CBCT system","authors":"Juan Gao, Yunsong Qi, Changhui Jiang, N. Zhang, Zhanli Hu","doi":"10.1109/CISP-BMEI.2017.8302251","DOIUrl":"https://doi.org/10.1109/CISP-BMEI.2017.8302251","url":null,"abstract":"Accurate geometric parameter information for cone-beam CT (CBCT) systems is crucial for high-quality image reconstruction. To validate the performance of the algorithms for geometric parameter extraction and projection matrix computation, this paper presents test calibration methods based on a computer simulation. We simulate the projection operation on a calibration phantom using Visual C++ and obtain the center of projection images through an approach based on least squares and genetic algorithm using Matlab programs. To verify the performance of the presented geometric calibration algorithm for projection matrix computation and geometric parameter extraction, CBCT consisting of a flat-panel detector is simulated and the un-calibration reconstructed image is compared with the reconstructed images of the calibration method in this paper. The extracted geometric parameters from the calculated projection matrix are very close to the input values. Compared with the uncorrected reconstructed image, the corrected reconstructed image significantly reduces many artifacts. Experimental results reveal that the presented method is robust and accurate, and can suppress undesirable artifacts of reconstructed images which caused by misaligned scanner geometry.","PeriodicalId":6474,"journal":{"name":"2017 10th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI)","volume":"314 1","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78938248","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Chinese named entity recognition using modified conditional random field on postal address
Wenqiao Sun
Named entity recognition (NER) has been studied for a long time, and research on embeddings, neural network models and other systems such as language models has developed quickly. However, as these systems rely heavily on domain-specific knowledge and little data about Chinese postal addresses can be acquired, the Chinese named entity recognition (CNER) task on postal addresses has developed slowly. In this paper, we use a modified Conditional Random Field (CRF) model to solve a CNER task on a postal address corpus. Since there is little data about Chinese postal addresses, parts of which are incomplete sentences, we add known, useful, semantically clearer words and sentences to our model as additional features. We conduct three experiments to evaluate our system, which obtains good performance, and the results show that our modified algorithm performs better than other traditional algorithms when processing postal addresses.
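A linear-chain CRF of the kind discussed above is commonly trained on per-token feature dictionaries. The sketch below uses the sklearn-crfsuite package (an assumption; the authors' own implementation and feature set are not described here) on two tiny hand-made addresses with BIO labels for province/city/district, and only illustrates the data format and API, not the modified model.

```python
import sklearn_crfsuite   # assumed available: pip install sklearn-crfsuite

def token_features(tokens, i):
    """Simple per-character features: the character itself, its neighbours,
    and whether it is a common address suffix (e.g. 省/市/区/路/号)."""
    ch = tokens[i]
    return {
        "char": ch,
        "prev": tokens[i - 1] if i > 0 else "<BOS>",
        "next": tokens[i + 1] if i < len(tokens) - 1 else "<EOS>",
        "is_suffix": ch in "省市区县路街号",
    }

# Two toy postal addresses, labelled in BIO style (PROV / CITY / DIST)
sents = [list("辽宁省沈阳市和平区"), list("广东省深圳市南山区")]
labels = [["B-PROV", "I-PROV", "I-PROV", "B-CITY", "I-CITY", "I-CITY",
           "B-DIST", "I-DIST", "I-DIST"]] * 2

X = [[token_features(s, i) for i in range(len(s))] for s in sents]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=50)
crf.fit(X, labels)
print(crf.predict(X[:1]))
```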
{"title":"Chinese named entity recognition using modified conditional random field on postal address","authors":"Wenqiao Sun","doi":"10.1109/CISP-BMEI.2017.8302311","DOIUrl":"https://doi.org/10.1109/CISP-BMEI.2017.8302311","url":null,"abstract":"Named entity recognition(NER) has been studied for a long time as more and more researches about the embedding, neural network model and some others systems like Language Model have developed quickly. However, as these systems rely heavily on domain-specific knowledge and it can hardly acquires much data about Chinese postal addresses, Chinese Named entity recognition(CNER) task on postal address has developed slowly. In this paper, we use a modified Conditional Random Field(CRF) model to solve a CNER task on a postal address corpus. Since there has little data about Chinese postal addresses and parts of which are incomplete sentences, we utilize the known, useful, clearer semantics words and sentences to our model as the additional features. We make three experiments to evaluate our system which obtains good performance and it shows that our modified algorithm performs better than other traditional algorithms when processing postal addresses.","PeriodicalId":6474,"journal":{"name":"2017 10th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI)","volume":"22 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79149410","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
Improved prediction of short exons via multiscale products
Guishan Zhang, Xiaolei Zhang, Guocheng Pan, Yangjiang Yu, Yaowen Chen
The exon is an important functional region of the eukaryotic DNA sequence. Prediction of exons can help to understand the structure and function of proteins. However, finding an efficient technique to automatically detect the numbers and locations of short coding sequences remains an unsolved problem. In this work, a short exon prediction method based on multiscale products in the B-spline wavelet domain is proposed. The proposed wavelet denoising and multiscale products-based technique (WDMP) for short exon prediction has the following three features. (1) A wavelet packet denoising method is applied to smooth the DNA numerical sequences. (2) A new B-spline wavelet function is designed to extract exon features in the multiscale domain, so the effect of window length is avoided; in addition, this wavelet has a higher degree of freedom for curve design. (3) We multiply adjacent coefficients to exploit the high inter-scale correlation of the exon data, and these correlation features are used to separate exon signals from background noise. Compared with four well-known model-independent methods, case studies demonstrate that the proposed WDMP method significantly improves the prediction accuracy of short exons.
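To illustrate the multiscale-product idea (multiplying wavelet detail coefficients of adjacent scales so that features persisting across scales are reinforced while noise is suppressed), here is a PyWavelets sketch on a toy numeric DNA-like signal. It uses the stationary wavelet transform with a standard B-spline biorthogonal wavelet ('bior3.3'); the wavelet actually designed in the paper, the denoising step and the base-to-number mapping are not reproduced, so treat this as an assumption-laden illustration only.

```python
import numpy as np
import pywt

# Toy numeric coding-measure signal: an elevated "exon-like" segment in noise.
# Purely synthetic; real pipelines first map bases to numbers (e.g. EIIP values).
rng = np.random.default_rng(0)
n = 512                                   # a multiple of 2**level, as swt requires
signal = 0.05 * rng.normal(size=n)
signal[200:280] += 1.0                    # hypothetical exon region

# Stationary wavelet transform with a B-spline biorthogonal wavelet
level = 3
coeffs = pywt.swt(signal, "bior3.3", level=level)
details = [cD for _cA, cD in coeffs]      # one detail band per scale, same length as signal

# Multiscale product: the exon boundaries (true singularities) stay correlated
# across scales and are reinforced, while noise responses decorrelate and shrink.
product = np.abs(details[0] * details[1] * details[2])
peak = int(product.argmax())
print(f"strongest multiscale-product response near position {peak} "
      f"(the exon boundaries are at 200 and 280)")
```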
{"title":"Improved prediction of short exons via multiscale products","authors":"Guishan Zhang, Xiaolei Zhang, Guocheng Pan, Yangjiang Yu, Yaowen Chen","doi":"10.1109/CISP-BMEI.2017.8302225","DOIUrl":"https://doi.org/10.1109/CISP-BMEI.2017.8302225","url":null,"abstract":"Exon is an important functional region of eukaryotic DNA sequence. Prediction of exons can help to understand the structure and function of protein. However, the issue of finding an efficient technique to detect the numbers and locations of short coding sequences automatically is an unsolved problem. In this work, a short exon prediction method based on multiscale products in B-spline wavelet domain is proposed. The proposed wavelet denoising and multiscale products-based technique (WDMP) for short exons prediction have the following three features. (1) A wavelet package denoising method is applied to smooth the DNA numerical sequences. (2) A new B-spline wavelet function is designed to extract the exon features in multiscale domain, so the effect of window length is avoided. In addition, this wavelet has a higher degree of freedom for curve design. (3) We multiply the adjacent coefficients to exploit the high inter-scale correlation of the exon data, while these correlation features are used to separate the exon signals from background noise. Compared with four well-known model-independent methods, case studies demonstrate that the proposed WDMP method helps to improve the prediction accuracy of short exons significantly.","PeriodicalId":6474,"journal":{"name":"2017 10th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI)","volume":"39 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79223982","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1