
Latest Publications: 2014 International Conference on Digital Image Computing: Techniques and Applications (DICTA)

A Convolutional Neural Network for Automatic Analysis of Aerial Imagery
F. Maire, Luis Mejías Alvarez, A. Hodgson
This paper introduces a new method to automate the detection of marine species in aerial imagery using a Machine Learning approach. Our proposed system has a convolutional neural network at its core. We compare this trainable classifier to a handcrafted classifier based on color features, entropy and shape analysis. Experiments demonstrate that the convolutional neural network outperforms the handcrafted solution. We also introduce a negative training example selection method for situations where the original training set consists of a collection of labeled images in which the objects of interest (positive examples) have been marked by a bounding box. We show that picking random rectangles from the background is not necessarily the best way to generate negative examples that are useful for learning.
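The negative-example discussion can be made concrete. The sketch below shows the naive baseline the abstract argues against: drawing random background rectangles that merely avoid the positive bounding boxes. All names and the fixed box size are illustrative, not from the paper.

```python
import random

def sample_negative_boxes(img_w, img_h, positive_boxes, box_size, n, seed=0):
    """Draw n fixed-size background rectangles (x, y, w, h) that do not
    overlap any positive bounding box. Naive random baseline only."""
    rng = random.Random(seed)

    def overlaps(a, b):
        ax, ay, aw, ah = a
        bx, by, bw, bh = b
        return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

    negatives = []
    while len(negatives) < n:
        x = rng.randrange(0, img_w - box_size + 1)
        y = rng.randrange(0, img_h - box_size + 1)
        cand = (x, y, box_size, box_size)
        if not any(overlaps(cand, p) for p in positive_boxes):
            negatives.append(cand)
    return negatives
```

The paper's point is precisely that such uniform sampling yields many trivially easy negatives; a learned or heuristic selection can do better.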
DOI: 10.1109/DICTA.2014.7008084 (published 2014-11-01)
Citations: 23
A Multiple Features Distance Preserving (MFDP) Model for Saliency Detection
Dongyan Guo, Jian Zhang, Min Xu, Xiangjian He, Minxian Li, Chunxia Zhao
Saliency plays a vital role in many image analysis tasks, such as content-aware image retargeting, image retrieval and object detection. It is generally accepted that saliency detection can benefit from the integration of multiple visual features. However, most of the existing literature fuses multiple features at the saliency map level without considering cross-feature information, i.e. it generates a saliency map from several maps each computed from an individual feature. In this paper, we propose a Multiple Features Distance Preserving (MFDP) model to seamlessly integrate multiple visual features through an alternating optimization process. Our method outperforms state-of-the-art methods on saliency detection. The saliency detected by our method is further combined with the seam carving algorithm and significantly improves performance on image retargeting.
DOI: 10.1109/DICTA.2014.7008087 (published 2014-11-01)
Citations: 0
Supervised Latent Dirichlet Allocation Models for Efficient Activity Representation
Sabanadesan Umakanthan, S. Denman, C. Fookes, S. Sridharan
Local spatio-temporal features with a Bag-of-Visual-Words model are a popular approach to human action recognition. Bag-of-features methods face several challenges, such as extracting appropriate appearance and motion features from videos, converting the extracted features into a form suitable for classification, and designing a suitable classification framework. In this paper we address the problem of efficiently representing the extracted features for classification to improve overall performance. We introduce two generative supervised topic models, maximum entropy discrimination LDA (MedLDA) and class-specific simplex LDA (css-LDA), to encode the raw features for discriminative SVM-based classification. Unsupervised LDA models disconnect topic discovery from the classification task and hence yield poor results compared to the baseline Bag-of-Words framework. Supervised LDA techniques, on the other hand, learn the topic structure by considering the class labels and improve recognition accuracy significantly. MedLDA maximizes the likelihood and within-class margins using max-margin techniques and yields a sparse, highly discriminative topic structure, while in css-LDA separate class-specific topics are learned instead of a common set of topics across the entire dataset. In our representation, topics are learned first and each video is then represented as a topic proportion vector, comparable to a histogram of topics. Finally, SVM classification is performed on the learned topic proportion vectors. We demonstrate the efficiency of these two representation techniques through experiments on two popular datasets. Experimental results demonstrate significantly improved performance compared to the baseline Bag-of-features framework, which uses k-means to construct a histogram of words from the feature vectors.
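The baseline that the authors compare against builds a histogram of visual words by k-means quantisation of local descriptors. A minimal sketch of that quantisation step, assuming precomputed centroids (function and variable names are illustrative):

```python
import numpy as np

def bow_histogram(descriptors, centroids):
    """Quantise local spatio-temporal descriptors (n, d) against k-means
    centroids (k, d) and return a normalised bag-of-words histogram (k,)."""
    # squared Euclidean distance from every descriptor to every centroid
    d2 = ((descriptors[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    words = d2.argmin(axis=1)  # nearest visual word per descriptor
    hist = np.bincount(words, minlength=len(centroids)).astype(float)
    return hist / hist.sum()   # proportion vector, analogous to topic proportions
```

The supervised topic models in the paper replace this hard quantisation with learned topic proportions, but the resulting per-video vector feeds an SVM in the same way.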
DOI: 10.1109/DICTA.2014.7008130 (published 2014-11-01)
Citations: 1
A Novel Multi-Modal Image Registration Method Based on Corners
Guohua Lv, S. Teng, Guojun Lu
This paper presents a novel method for registering multi-modal images based on corners. The proposed method is motivated by the fact that large content differences are likely to occur in multi-modal images. Unlike traditional multi-modal image registration methods that use intensities or gradients for feature representation, we propose to use the curvatures of corners. Moreover, a novel local descriptor called Distribution of Edge Pixels Along Contour (DEPAC) is proposed to represent the neighborhood of corners. Curvature and DEPAC similarities are combined in our method to improve registration accuracy. Using a set of benchmark multi-modal images and multi-modal microscopic images, we demonstrate that our proposed method outperforms an existing state-of-the-art image registration method.
DOI: 10.1109/DICTA.2014.7008090 (published 2014-11-01)
Citations: 3
The W-Penalty and Its Application to Alpha Matting with Sparse Labels
Stephen Tierney, Junbin Gao, Yi Guo
Alpha matting is an ill-posed problem; as such, the user must supply dense partial labels for an acceptable solution to be reached. Unfortunately, this labelling can be time consuming. In this paper we introduce the w-penalty function, which, when incorporated into existing matting techniques, allows users to supply extremely sparse input. The formulated objective function encourages driving matte values to 0 and 1. Experiments demonstrate that the proposed model outperforms the state-of-the-art KNN matting algorithm. MATLAB code for our proposed method is freely available in the MatteKit package.
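The abstract does not give the w-penalty's closed form, but any penalty that vanishes at 0 and 1 and peaks at 0.5 would "encourage driving matte values to 0 and 1". The double-well function below is a purely hypothetical illustration of that shape, not the paper's actual penalty:

```python
import numpy as np

def w_penalty(alpha):
    """Hypothetical double-well penalty: zero at alpha = 0 and alpha = 1,
    maximal at alpha = 0.5, pushing matte values toward {0, 1}."""
    alpha = np.asarray(alpha, dtype=float)
    return (alpha ** 2) * (1.0 - alpha) ** 2
```

Adding such a term to a matting objective biases the optimiser away from fractional matte values, which is what allows the labels to be far sparser.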
DOI: 10.1109/DICTA.2014.7008132 (published 2014-11-01)
Citations: 0
Multiple Features Based Low-Contrast Infrared Ship Image Segmentation Using Fuzzy Inference System
Tao Wang, X. Bai, Yu Zhang
Infrared (IR) ship image segmentation is a challenging task due to defects of IR images such as low contrast, sea clutter and noise. To solve this problem, we propose a multiple-features-based IR ship image segmentation method using a fuzzy inference system (FIS). Because of the complexity of low-contrast IR images, the ship target cannot be segmented with only one kind of feature. We therefore extract multiple features from the IR image to sufficiently represent the ship target. As the FIS can handle the uncertainty of IR images well and express expert knowledge with fuzzy rules, multiple features are input to the FIS, and the ship target can then be extracted directly from its output. The proposed method is implemented as follows. Firstly, intensity is chosen as the first input of the FIS, because it is the fundamental feature of ship targets in IR images. Secondly, a spatial feature is constructed through saliency detection, region growing and morphological processing, and is used to represent the spatial constraint of the ship target region. Thirdly, the multiple features are fuzzified with adaptive methods and prior knowledge. Fourthly, the fuzzified features are combined through the FIS according to fuzzy rules based on expert knowledge. Finally, the intact ship target segmentation is extracted directly from the output of the FIS. Experimental results show that our method effectively extracts complete and precise ship targets from low-contrast IR ship images. Moreover, our method performs better than other existing segmentation methods.
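As a rough illustration of the fuzzy-rule combination step (not the paper's actual membership functions or rule base), one rule over normalised intensity and spatial features, with triangular memberships and `min` as fuzzy AND, might look like:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function rising on [a, b], falling on [b, c]."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def ship_membership(intensity, spatial):
    """Illustrative rule: IF intensity is high AND the pixel satisfies the
    ship's spatial constraint THEN ship. 'min' implements fuzzy AND.
    Both inputs are assumed normalised to [0, 1]."""
    high_int = tri(intensity, 0.4, 1.0, 1.6)  # membership of 'high intensity'
    in_region = tri(spatial, 0.4, 1.0, 1.6)   # membership of 'inside ship region'
    return np.minimum(high_int, in_region)
```

A real FIS would have several such rules, aggregate their outputs, and defuzzify; thresholding the resulting membership map yields the segmentation.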
DOI: 10.1109/DICTA.2014.7008117 (published 2014-11-01)
Citations: 4
A High-Precision Registration Method Based on Auxiliary Sphere Targets
Junhui Huang, Zhao Wang, Weihua Bao, Jianmin Gao
High-precision data registration is key to ensuring high-precision three-dimensional profile measurement. In order to register featureless surfaces or surfaces with non-overlapping data, this paper proposes a new registration method based on auxiliary sphere targets. Combined with the ICP algorithm, the new method uses the sphere targets to fit continuous spheres and provide a spherical constraint. The continuous spheres provide registration features and more accurate corresponding point pairs for high-precision registration. The spherical constraint forces the non-overlapping measurement point clouds to be aligned to the sphere targets. Simulation and experiments verify the effectiveness of the proposed method.
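The sphere-fitting step can be sketched with a standard algebraic least-squares fit; this is a common formulation, not necessarily the paper's exact one. Expanding |p - c|² = r² gives the linear system 2c·p + (r² - |c|²) = p·p in the unknowns c and k = r² - |c|²:

```python
import numpy as np

def fit_sphere(points):
    """Algebraic least-squares sphere fit to an (n, 3) point cloud.
    Returns (center, radius)."""
    p = np.asarray(points, dtype=float)
    A = np.hstack([2.0 * p, np.ones((len(p), 1))])  # unknowns: cx, cy, cz, k
    b = (p ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center, k = sol[:3], sol[3]
    radius = np.sqrt(k + center @ center)           # r^2 = k + |c|^2
    return center, radius
```

Fitted sphere centres then serve as the stable corresponding points that constrain the subsequent ICP alignment.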
DOI: 10.1109/DICTA.2014.7008085 (published 2014-11-01)
Citations: 4
Partial Fingerprint Matching through Region-Based Similarity
Omid Zanganeh, B. Srinivasan, Nandita Bhattacharjee
Despite advances in fingerprint matching, partial, incomplete or fragmentary fingerprint recognition remains a challenging task. While the miniaturization of fingerprint scanners limits capture to only part of the fingerprint, there is also special interest in processing latent fingerprints, which are likely to be partial and of low quality. Partial fingerprints do not include all the structures available in a full fingerprint, hence a suitable matching technique that is independent of specific fingerprint features is required. Common fingerprint recognition methods are based on fingerprint minutiae, which do not perform well when applied to low-quality images and might not even be suitable for partial fingerprint recognition. To overcome this drawback, we propose a region-based fingerprint recognition method in which fingerprints are compared in a pixel-wise manner by computing their correlation coefficient. Therefore, all the attributes of the fingerprint contribute to the matching decision. Compared to minutiae-based fingerprint recognition methods, such a technique is promising for accurately recognising a partial fingerprint as well as a full fingerprint. The proposed method is based on simple but effective metrics defined to compute local similarities, which are then combined into a global score so that the result is less affected by skew in the distribution of the local similarities. Extensive experiments on the Fingerprint Verification Competition (FVC) data set prove the superiority of the proposed method compared to other techniques in the literature.
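The pixel-wise correlation coefficient at the core of such a region-based comparison is the Pearson correlation between two equally sized regions; a minimal sketch:

```python
import numpy as np

def region_similarity(region_a, region_b):
    """Pearson correlation coefficient between two equally sized image
    regions, compared pixel-wise. Returns a value in [-1, 1]."""
    a = np.array(region_a, dtype=float).ravel()  # copy, then flatten
    b = np.array(region_b, dtype=float).ravel()
    a -= a.mean()
    b -= b.mean()
    return float((a @ b) / np.sqrt((a @ a) * (b @ b)))
```

Because every pixel contributes, this measure uses all attributes of the region rather than only extracted minutiae, which is the abstract's motivation for a region-based score.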
DOI: 10.1109/DICTA.2014.7008121 (published 2014-11-01)
Citations: 36
Robust Visual Tracking via Rank-Constrained Sparse Learning
B. Bozorgtabar, Roland Göcke
In this paper, we present an improved low-rank sparse learning method for particle filter based visual tracking, which we call rank-constrained sparse learning. Since each particle can be sparsely represented by a linear combination of bases from an adaptive dictionary, we exploit the underlying structure between particles by jointly constraining the rank of the particle sparse representations over the adaptive dictionary. Besides utilising a common structure among particles, the proposed tracker also selects the most discriminative features for particle representation using an additional feature selection module in the proposed objective function. Furthermore, we present an efficient way to solve this learning problem by connecting the low-rank structure extracted from the particles to a simpler learning problem in a devised discriminative subspace. This reduces the overall computational complexity for high-dimensional particle candidates. Finally, to achieve a more robust tracker, we augment the sparse representation of particles with adaptive weights, which indicate the similarity between candidates and the dictionary templates. The proposed approach is extensively evaluated on the VOT 2013 visual tracking evaluation platform, including 16 challenging sequences. Experimental results compared to state-of-the-art methods show the robustness and effectiveness of the proposed tracker.
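The sparse representation of a single particle over a template dictionary is commonly obtained with an l1-regularised solver such as ISTA; the sketch below shows that generic per-particle step only, not the paper's rank-constrained joint formulation over all particles:

```python
import numpy as np

def sparse_code(particle, dictionary, lam=0.1, steps=200):
    """ISTA for min_w 0.5*||y - D w||^2 + lam*||w||_1: a standard way to
    obtain the sparse representation of a particle over dictionary bases."""
    D = np.asarray(dictionary, dtype=float)  # columns are dictionary bases
    y = np.asarray(particle, dtype=float)
    w = np.zeros(D.shape[1])
    step = 1.0 / np.linalg.norm(D, 2) ** 2   # 1 / Lipschitz constant of the gradient
    for _ in range(steps):
        w = w - step * (D.T @ (D @ w - y))   # gradient step on the data term
        w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)  # soft threshold
    return w
```

The paper's contribution is to couple these per-particle problems through a rank constraint so that all candidates share structure, rather than coding each one independently as above.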
DOI: 10.1109/DICTA.2014.7008129 (published 2014-11-01)
Cited by: 0
A Semi-Quantitative Analysis Model with Parabolic Modelling for DCE-MRI Sequences of Prostate
G. Samarasinghe, A. Sowmya, D. Moses
Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI), also called perfusion Magnetic Resonance Imaging, is an advanced Magnetic Resonance Imaging (MRI) modality used in the non-invasive diagnosis of prostate cancer. In this paper, we propose a novel semi-quantitative model that represents the perfusion behaviour of 3-dimensional prostate voxels in DCE-MRI sequences based on parametric evaluation of parabolic polynomials. The perfusion data of each prostate voxel are modelled as a best-fit parabolic function using second-order non-linear regression. A single parameter is then derived from the geometric parameters of the parabola to represent the amount and rapidity of the voxel's signal intensity enhancement in response to the contrast agent. Finally, prostate voxels are classified using k-means clustering based on the parameter derived by the proposed model. An expert radiologist performed a qualitative evaluation of the classification results, represented as graphical summarisations of perfusion MR data, for 70 axial DCE-MRI slices from 10 patients. The results show that the proposed semi-quantitative model and the parameter derived from it have the potential to be used in manual observation or in Computer-Aided Diagnosis (CAD) systems for prostate cancer recognition.
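The per-voxel pipeline this abstract describes — fit a parabola to the intensity-time curve, collapse it to a single enhancement parameter, then group voxels with k-means — can be sketched as below. The specific combination of vertex height and time-to-peak used here is an assumed placeholder; the abstract does not spell out the paper's exact derived parameter.

```python
import numpy as np

def parabola_param(times, signal):
    """Fit s(t) ~ a*t^2 + b*t + c to one voxel's intensity curve and
    return a single enhancement parameter combining the amount
    (vertex height) and rapidity (inverse time-to-peak) of uptake.
    The exact combination is an illustrative assumption."""
    a, b, c = np.polyfit(times, signal, 2)
    if a >= 0:                          # no concave peak: non-enhancing voxel
        return 0.0
    t_peak = -b / (2 * a)               # vertex abscissa: time of maximum enhancement
    s_peak = c - b**2 / (4 * a)         # vertex ordinate: peak signal intensity
    return s_peak / max(t_peak, 1e-6)   # amount weighted by rapidity

def kmeans_1d(x, k=3, n_iter=50):
    """Tiny 1-D k-means over the scalar parameter, to group voxels."""
    centres = np.linspace(x.min(), x.max(), k)
    for _ in range(n_iter):
        labels = np.argmin(np.abs(x[:, None] - centres[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = x[labels == j].mean()
    return labels, centres
```

For example, a voxel whose curve is exactly s(t) = -t² + 10t peaks at intensity 25 at t = 5, giving a parameter of 5.0; running `kmeans_1d` over all voxels' parameters then yields the cluster labels used for the graphical summarisation.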
{"title":"A Semi-Quantitative Analysis Model with Parabolic Modelling for DCE-MRI Sequences of Prostate","authors":"G. Samarasinghe, A. Sowmya, D. Moses","doi":"10.1109/DICTA.2014.7008092"}
Cited by: 4
Journal: 2014 International Conference on Digital Image Computing: Techniques and Applications (DICTA)