
Latest Publications: 2009 Digital Image Computing: Techniques and Applications

Multi-projective Parameter Estimation for Sets of Homogeneous Matrices
Pub Date : 2009-12-01 DOI: 10.1109/DICTA.2009.27
W. Chojnacki, R. Hill, A. Hengel, M. Brooks
A number of problems in computer vision require the estimation of a set of matrices, each of which is defined only up to an individual scale factor and represents the parameters of a separate model, under the assumption that the models are intrinsically interconnected. One example of such a set is a family of fundamental matrices sharing an infinite homography. Here an approach is presented for estimating a general set of interdependent matrices defined to within separate scales. The input data is assumed to consist of individually estimated matrices for particular models, which when considered collectively may fail to satisfy the constraints representing the inter-model relationships. Two cost functions are proposed for upgrading, via optimisation, data of this sort to a collection of matrices satisfying the inter-model constraints. One of these functions incorporates error covariances. Each function is invariant to any change of scale for the input estimates. The proposed approach is applied to the particular problem of estimating a set of fundamental matrices of the form of the example set above. Experimental results are given which demonstrate the effectiveness of the approach.
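As a rough illustration of the scale-invariance property described in the abstract, the Python sketch below implements one possible scale-invariant discrepancy between two homogeneous matrices. The cosine-based form and the toy example are assumptions for illustration, not the paper's cost functions, which additionally encode the inter-model constraints and, in one variant, error covariances.

```python
# A minimal sketch (not the paper's cost functions): a discrepancy between two
# homogeneous matrices that is unchanged when either matrix is rescaled.
import numpy as np

def scale_invariant_cost(A, B):
    """1 - squared cosine of the angle between A and B viewed as vectors.

    Zero iff A and B agree up to a non-zero scale factor; invariant to
    rescaling either argument, the property required of the input estimates.
    """
    a, b = A.ravel(), B.ravel()
    return 1.0 - np.dot(a, b) ** 2 / (np.dot(a, a) * np.dot(b, b))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    F_est = rng.normal(size=(3, 3))                        # an individually estimated matrix
    F_fit = 2.5 * F_est + 0.01 * rng.normal(size=(3, 3))   # hypothetical constrained fit
    print(scale_invariant_cost(F_est, F_fit))
    # Rescaling the input estimate does not change the cost:
    print(np.isclose(scale_invariant_cost(3.0 * F_est, F_fit),
                     scale_invariant_cost(F_est, F_fit)))
```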
Citations: 2
Self Occlusions and Graph Based Edge Measurement Schemes for Visual Tracking Applications
Pub Date : 2009-12-01 DOI: 10.1109/DICTA.2009.74
Andrew W. B. Smith, B. Lovell
The success of visual tracking systems is highly dependent upon the effectiveness of the measurement function used to evaluate the likelihood of a hypothesized object state. Generative tracking algorithms attempt to find the global and other local maxima of these measurement functions. As such, designing measurement functions which have a small number of local maxima is highly desirable. Edge based measurements are an integral component of most measurement functions. Graph based methods are commonly used for image segmentation, and more recently have been applied to visual tracking problems. When self occlusions are present, it is necessary to find the shortest path across a graph when the weights of some graph vertices are unknown. In this paper, treatments are given for handling object self occlusions in graph based edge measurement methods. Experiments are performed to test the effect that each of these treatments has on the accuracy and number of modes in the observational likelihood.
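As a rough illustration of the shortest-path problem mentioned in the abstract (and not of the treatments proposed in the paper), the Python sketch below runs Dijkstra's algorithm over a small vertex-weighted graph in which occluded vertices carry no reliable edge evidence and are simply given a neutral default weight. The graph, the weights and the default value are all hypothetical.

```python
# A minimal sketch: shortest path over vertex weights when some vertices are
# occluded and their weights are unknown. Here unknown weights are replaced by
# a neutral default before running Dijkstra.
import heapq

def shortest_path(adj, weight, occluded, start, goal, default=0.5):
    """Dijkstra accumulating vertex weights; occluded vertices get `default`."""
    w = lambda v: default if v in occluded else weight[v]
    dist = {start: w(start)}
    prev = {}
    heap = [(dist[start], start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v in adj[u]:
            nd = d + w(v)
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [goal], goal             # walk back from the goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]

if __name__ == "__main__":
    adj = {0: [1, 2], 1: [3], 2: [3], 3: []}
    weight = {0: 0.1, 1: 0.2, 2: 0.9, 3: 0.1}   # low weight = strong edge evidence
    print(shortest_path(adj, weight, occluded={1}, start=0, goal=3))
```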
Citations: 0
Refining Local 3D Feature Matching through Geometric Consistency for Robust Biometric Recognition
Pub Date : 2009-12-01 DOI: 10.1109/DICTA.2009.87
S. Islam, Rowan Davies
Local features are gaining popularity due to their robustness to occlusion and other variations such as minor deformation. However, using local features for the recognition of biometric traits, which are generally highly similar, can produce large numbers of false matches. To increase recognition performance, we propose to eliminate some incorrect matches using a simple form of geometric consistency and some associated similarity measures. The performance of the approach is evaluated on different datasets and compared with some previous approaches. We obtain an improvement from 81.60% to 92.77% in rank-1 ear identification on the University of Notre Dame Biometric Database, the largest publicly available profile database from the University of Notre Dame, with 415 subjects.
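The Python sketch below shows one simple form of geometric consistency, namely preservation of pairwise point distances under a rigid transform, being used to discard a false match. The tolerance, the support threshold and the toy point sets are illustrative assumptions rather than the paper's exact rule or its similarity measures.

```python
# A minimal sketch of pairwise-distance consistency filtering for tentative
# 3D keypoint matches.
import numpy as np

def filter_matches(P, Q, matches, tol=0.05, min_support=2):
    """P, Q: (N,3) keypoint arrays; matches: list of (index in P, index in Q)."""
    keep = []
    for a, (ia, ja) in enumerate(matches):
        support = 0
        for b, (ib, jb) in enumerate(matches):
            if a == b:
                continue
            dp = np.linalg.norm(P[ia] - P[ib])   # distance in the first scan
            dq = np.linalg.norm(Q[ja] - Q[jb])   # distance in the second scan
            if abs(dp - dq) < tol:               # a rigid motion preserves distances
                support += 1
        if support >= min_support:
            keep.append((ia, ja))
    return keep

if __name__ == "__main__":
    P = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0],
                  [0, 0, 1], [1, 1, 0], [1, 1, 1]], float)
    Q = P + np.array([0.3, -0.2, 0.1])               # same points, translated
    matches = [(i, i) for i in range(6)] + [(0, 5)]  # last match is incorrect
    print(filter_matches(P, Q, matches))             # the false match is dropped
```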
Citations: 5
A Low Complexity Algorithm for Global Motion Parameter Estimation Targeting Hardware Implementation
Pub Date : 2009-12-01 DOI: 10.1109/DICTA.2009.11
Md. Nazmul Haque, Moyuresh Biswas, M. Pickering, M. Frater
Nowadays, image alignment is one of the most widely used techniques in computer vision. Image alignment has many applications in fields as diverse as video surveillance, computer vision, medical imaging, and video coding. The estimation of an object's motion is a key step in image alignment. In this paper, we present a low-complexity algorithm for the estimation of motion parameters. Most motion parameter estimation algorithms operate with a precision of 8 bits per pixel; here we propose an algorithm using only 1 bit per pixel, resulting in lower complexity. The proposed method includes a technique for calculating the gradient of the sum-of-squared-difference (SSD) using XOR operations instead of multiplication. Experimental results show that the proposed method compares favorably with registration using the full precision of the input images.
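The Python sketch below illustrates the core low-complexity idea: for 1-bit images the squared pixel difference reduces to an XOR, so the SSD becomes a mismatch count. It recovers a simple integer translation by exhaustive search; the binarisation rule, the circular shifts and all parameter values are assumptions for illustration, and the paper itself estimates global motion parameters with a gradient-based scheme rather than exhaustive search.

```python
# A minimal sketch of 1-bit SSD matching: binarise, then count XOR mismatches.
import numpy as np
from scipy.ndimage import gaussian_filter

def binarise(img, sigma=3.0):
    """1 bit per pixel: above/below a Gaussian-smoothed local mean."""
    return (img > gaussian_filter(img, sigma, mode="wrap")).astype(np.uint8)

def xor_ssd(a, b):
    """For binary images, (a - b)^2 equals a XOR b, so SSD is an XOR count."""
    return int(np.count_nonzero(np.bitwise_xor(a, b)))

def estimate_translation(ref, tgt, radius=4):
    ref_b, tgt_b = binarise(ref), binarise(tgt)
    best = None
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            cost = xor_ssd(ref_b, np.roll(np.roll(tgt_b, dy, 0), dx, 1))
            if best is None or cost < best[0]:
                best = (cost, (dy, dx))
    return best[1]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    ref = gaussian_filter(rng.random((128, 128)), 2.0, mode="wrap")  # synthetic image
    tgt = np.roll(np.roll(ref, 3, 0), -2, 1)                         # shifted copy
    print(estimate_translation(ref, tgt))   # (-3, 2): the shift that re-aligns tgt with ref
```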
Citations: 0
Combined Time Domain and Spectral Domain Data Compression for Fast Multispectral Imagery Updating
Pub Date : 2009-12-01 DOI: 10.1109/DICTA.2009.54
Md. Al Mamun, X. Jia, M. Ryan
The transmission of remotely sensed images across communication links is becoming a very expensive process because of recent advances in satellite technology that enable terabytes of data to be downloaded every day. Image compression is an option for reducing the number of bits in transmission, and various compression techniques have been developed, including predictive coding, transform coding and vector quantization. However, most techniques perform data compression within a single data set. In this paper, we assume that the user has already received previous data and only needs an update. A combined time-domain and spectral-domain data compression scheme is proposed. Change detection between the two dates is performed first, followed by separate modelling of the changed and unchanged data relationships for one band so that they can be transmitted more efficiently. The remaining bands are transmitted by band-to-band prediction, since they are highly correlated. The developed scheme is illustrated with a subset of Landsat ETM data recorded over Canberra, Australia, in 2000 and 2001.
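The Python sketch below illustrates only the band-to-band prediction step: once one band has been received, a highly correlated band can be predicted from it by a least-squares fit, so that only the low-variance residual needs to be coded. The linear model and the synthetic bands are assumptions for illustration; the full scheme also performs change detection between dates and models changed and unchanged pixels separately.

```python
# A minimal sketch of predicting one spectral band from another and coding
# only the residual.
import numpy as np

def predict_band(known, target):
    """Fit target ~ a*known + b by least squares and return (a, b, residual)."""
    A = np.stack([known.ravel(), np.ones(known.size)], axis=1)
    (a, b), *_ = np.linalg.lstsq(A, target.ravel(), rcond=None)
    residual = target - (a * known + b)
    return a, b, residual

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    band1 = rng.random((64, 64)) * 255                         # already-received band
    band2 = 0.8 * band1 + 20 + rng.normal(0, 3, band1.shape)   # correlated band
    a, b, res = predict_band(band1, band2)
    print(round(a, 2), round(b, 2))
    print(band2.var() > res.var())   # True: the residual is much cheaper to code
```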
Citations: 5
Content-Based Video Retrieval (CBVR) System for CCTV Surveillance Videos
Pub Date : 2009-12-01 DOI: 10.1109/DICTA.2009.36
Yan Yang, B. Lovell, F. Dadgostar
The inherent nature of image and video data and its multi-dimensional data space make its processing and interpretation a very complex task, normally requiring considerable processing power. Moreover, understanding the meaning of video content and storing it in a quickly searchable and readable form requires image processing methods which, if run on the video stream for every query, would not be cost-effective and in some cases would be impossible due to time restrictions. Hence, to speed up the search process, it is desirable to store video together with its extracted metadata. The storage model itself is one of the challenges in this context, since, based on current CCTV technology, it is estimated to require a petabyte-scale data management system. This estimate, however, is expected to grow rapidly as current advances in video recording devices lead to higher resolution sensors and larger frame sizes. On the other hand, the increasing demand for object tracking on video streams has motivated research on Content-Based Image Retrieval (CBIR) and Content-Based Video Retrieval (CBVR). In this paper, we present the design and implementation of a framework and a data model for CCTV surveillance videos on an RDBMS which provides the functions of a surveillance monitoring system, with a tagging structure for event detection. On account of some recent results, we believe this is a promising direction for surveillance video search in comparison to existing solutions.
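The Python sketch below illustrates the general idea of keeping extracted metadata next to video references in a relational database so that content queries never touch the raw stream. The tables, columns and labels are hypothetical and are not the data model proposed in the paper.

```python
# A minimal sketch, using SQLite, of querying surveillance events from
# pre-extracted tags rather than from the video itself.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE video (
        id   INTEGER PRIMARY KEY,
        path TEXT NOT NULL                 -- location of the raw footage
    );
    CREATE TABLE event_tag (
        id          INTEGER PRIMARY KEY,
        video_id    INTEGER REFERENCES video(id),
        start_frame INTEGER,
        end_frame   INTEGER,
        label       TEXT                   -- e.g. 'person', 'vehicle'
    );
    CREATE INDEX idx_tag_label ON event_tag(label);
""")
conn.execute("INSERT INTO video(id, path) VALUES (1, 'cam03/2009-12-01.avi')")
conn.executemany(
    "INSERT INTO event_tag(video_id, start_frame, end_frame, label) VALUES (?,?,?,?)",
    [(1, 120, 480, "person"), (1, 900, 1500, "vehicle")])

# A content-based query answered purely from the stored metadata:
for row in conn.execute(
        "SELECT v.path, t.start_frame, t.end_frame FROM event_tag t "
        "JOIN video v ON v.id = t.video_id WHERE t.label = ?", ("person",)):
    print(row)
```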
Citations: 21
Paper Fingerprinting Using alpha-Masked Image Matching
Pub Date : 2009-12-01 DOI: 10.1109/DICTA.2009.82
Tuan Q. Pham, S. Perry, P. Fletcher
In this paper, we examine the problem of authenticating paper media using the unique fiber structure of each piece of paper (the so-called "paper fingerprint"). In particular, we look at methods to authenticate paper media when text has been printed over the authentication zone. We show how alpha-masked correlation [Fitch05] can be applied to this problem and develop a modification to alpha-masked correlation that is more closely matched to the requirements of this problem and produces an improvement in performance. We also investigate two methods of pixel inpainting to remove printed text or marks from the authentication zone and allow ordinary correlation to be performed. We show that these methods can perform as well as alpha-masked correlation. Finally two methods of improving the robustness to forgery are investigated.
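The Python sketch below shows the basic effect of weighting a correlation score with an alpha mask, evaluated at zero shift only. It is a simplified stand-in rather than the formulation of [Fitch05] used in the paper, and the synthetic fibre texture and mask are assumptions.

```python
# A minimal sketch: a weighted (alpha-masked) correlation in which pixels
# covered by printed text contribute nothing to the similarity score.
import numpy as np

def masked_correlation(a, b, alpha):
    """Weighted Pearson correlation of two patches under weights `alpha`."""
    w = alpha / alpha.sum()
    ma, mb = (w * a).sum(), (w * b).sum()
    cov = (w * (a - ma) * (b - mb)).sum()
    va = (w * (a - ma) ** 2).sum()
    vb = (w * (b - mb) ** 2).sum()
    return cov / np.sqrt(va * vb)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    fibre = rng.normal(size=(64, 64))          # enrolled "paper fingerprint" texture
    scan = fibre + 0.1 * rng.normal(size=fibre.shape)
    scan[20:40, :] = 0.0                       # band obliterated by printed text
    alpha = np.ones_like(fibre)
    alpha[20:40, :] = 0.0                      # mask out the printed band
    print(masked_correlation(scan, fibre, np.ones_like(fibre)))  # degraded score
    print(masked_correlation(scan, fibre, alpha))                # score restored
```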
Citations: 9
Applying Sum and Max Product Algorithms of Belief Propagation to 3D Shape Matching and Registration
Pub Date : 2009-12-01 DOI: 10.1109/DICTA.2009.70
Pengdong Xiao, N. Barnes, P. Lieby, T. Caetano
3D shape matching based on meshed surfaces can be formulated as an energy function minimisation problem under a Markov random field (MRF) framework. However, to solve such a global optimisation problem is NP-hard. So research mainly focuses on approximation algorithms. One of the best known is belief propagation (BP), which has shown success in early vision and many other practical applications. In this paper, we investigate the application of both sum and max product algorithms of belief propagation to 3D shape matching. We also apply the 3D shape matching results to a 3D registration problem.
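The Python sketch below runs the two message-passing variants on a toy three-node chain MRF, a far smaller graph than the meshed surfaces used in the paper, and checks the results against brute-force enumeration. The random potentials are purely illustrative.

```python
# A minimal sketch of sum-product (marginals) and max-product (MAP) belief
# propagation on a 3-node chain, where both are exact.
import itertools
import numpy as np

K = 3                                    # labels per node
rng = np.random.default_rng(4)
unary = rng.random((3, K))               # psi_i(x_i) for nodes 0, 1, 2
pair = rng.random((2, K, K))             # psi(x_i, x_{i+1}) for edges (0,1), (1,2)

def joint(x):
    p = unary[0, x[0]] * unary[1, x[1]] * unary[2, x[2]]
    return p * pair[0, x[0], x[1]] * pair[1, x[1], x[2]]

# Sum-product: marginal of the middle node from its two incoming messages.
m_0to1 = unary[0] @ pair[0]              # sum over x0
m_2to1 = pair[1] @ unary[2]              # sum over x2
belief1 = unary[1] * m_0to1 * m_2to1
belief1 /= belief1.sum()

brute = np.zeros(K)
for x in itertools.product(range(K), repeat=3):
    brute[x[1]] += joint(x)
brute /= brute.sum()
print(np.allclose(belief1, brute))       # True: BP is exact on a tree

# Max-product: MAP labelling via max-messages and back-tracking.
mp_0to1 = (unary[0][:, None] * pair[0]).max(axis=0)
mp_2to1 = (pair[1] * unary[2][None, :]).max(axis=1)
x1 = int(np.argmax(unary[1] * mp_0to1 * mp_2to1))
x0 = int(np.argmax(unary[0] * pair[0][:, x1]))
x2 = int(np.argmax(unary[2] * pair[1][x1, :]))
best = max(itertools.product(range(K), repeat=3), key=joint)
print((x0, x1, x2) == best)              # True: agrees with exhaustive search
```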
Citations: 2
Greedy Approximation of Kernel PCA by Minimizing the Mapping Error
Pub Date : 2009-12-01 DOI: 10.1109/DICTA.2009.57
Peng Cheng, W. Li, P. Ogunbona
In this paper we propose a new kernel PCA (KPCA) speed-up algorithm that aims to find a reduced KPCA to approximate the kernel mapping. The algorithm works by greedily choosing a subset of the training samples that minimizes the mean square error of the kernel mapping between the original KPCA and the reduced KPCA. Experimental results have shown that the proposed algorithm is more efficient in computation and effective with lower mapping errors than previous algorithms.
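The Python sketch below gives one plausible reading of the greedy criterion: at each step, add the training sample that most reduces the squared error of projecting all kernel-mapped samples onto the span of the chosen subset. The RBF kernel, the projection-error formula and the greedy loop are illustrative assumptions rather than the paper's exact algorithm.

```python
# A minimal sketch of greedy reduced-set selection for a kernel mapping.
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mapping_error(K, subset):
    """Total squared residual of projecting all phi(x_i) onto span{phi(x_s)}."""
    S = list(subset)
    Kss = K[np.ix_(S, S)] + 1e-10 * np.eye(len(S))   # jitter for stability
    Kns = K[:, S]
    return np.trace(K) - np.trace(Kns @ np.linalg.solve(Kss, Kns.T))

def greedy_reduced_set(K, m):
    chosen = []
    for _ in range(m):
        rest = [i for i in range(K.shape[0]) if i not in chosen]
        chosen.append(min(rest, key=lambda i: mapping_error(K, chosen + [i])))
    return chosen

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    X = rng.normal(size=(60, 2))
    K = rbf_kernel(X, X)
    subset = greedy_reduced_set(K, m=10)
    print(subset)
    print(round(mapping_error(K, subset), 4),
          round(mapping_error(K, list(range(10))), 4))   # greedy vs. arbitrary subset
```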
Citations: 4
Combined Contourlet and Non-subsampled Contourlet Transforms Based Approach for Personal Identification Using Palmprint
Pub Date : 2009-12-01 DOI: 10.1109/DICTA.2009.73
Hassan Masood, Mohammad Asim, Mustafa Mumtaz, A. Mansoor
Palmprint-based personal verification is an accepted biometric modality due to its reliability, ease of acquisition and user acceptance. This paper presents a novel palmprint-based identification approach which draws on the textural information available in the palmprint by utilizing a combination of Contourlet and Non-Subsampled Contourlet Transforms. The center of the palm is computed using the Distance Transform, whereas the parameters of the best-fitting ellipse help determine the alignment of the palmprint. An ROI of 256×256 pixels is cropped around the center and subsequently divided into fine slices using iterated directional filterbanks. Next, directional energy components for each block of the decomposed subband outputs are computed using the Contourlet and Non-Subsampled Contourlet Transforms. The proposed algorithm captures global details in a palmprint as compact fixed-length palm codes for Contourlet and NSCT respectively, which are further concatenated at feature level for identification using a normalized Euclidean distance classifier. The proposed algorithm is tested on a total of 500 palm images from the GPDS Hand database, acquired from the University of Las Palmas de Gran Canaria. The experimental results were compiled for the individual transforms as well as for their optimized combination at feature level. The CT-based approach demonstrated a Decidability Index of 2.6212 and an Equal Error Rate (EER) of 0.7082%, while the NSCT-based approach yielded a Decidability Index of 2.7278 and an EER of 0.5082%. The feature-level fusion achieved a Decidability Index of 2.7956 and an EER of 0.3112%.
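The Python sketch below illustrates two steps that need no Contourlet library: locating the palm centre with a distance transform, and comparing fixed-length energy codes with a normalised Euclidean distance. Plain block energies on a block-constant synthetic palm image stand in for the directional subband energies of the paper, so everything beyond those two steps is an assumption for illustration.

```python
# A minimal sketch: distance-transform centring plus energy-code matching.
import numpy as np
from scipy.ndimage import distance_transform_edt

def palm_centre(mask):
    """Centre = pixel deepest inside the binary hand mask."""
    dist = distance_transform_edt(mask)
    return np.unravel_index(np.argmax(dist), mask.shape)

def block_energy_code(roi, block=32):
    """Fixed-length code: energy of each block of the ROI."""
    h, w = roi.shape
    blocks = roi[:h - h % block, :w - w % block].reshape(
        h // block, block, w // block, block)
    return (blocks ** 2).sum(axis=(1, 3)).ravel()

def normalised_euclidean(a, b):
    """Euclidean distance between unit-normalised codes."""
    return np.linalg.norm(a / np.linalg.norm(a) - b / np.linalg.norm(b))

if __name__ == "__main__":
    mask = np.zeros((200, 200), bool)
    mask[40:160, 60:180] = True                    # crude hand silhouette
    print(palm_centre(mask))                       # roughly the centre of the blob

    rng = np.random.default_rng(6)
    enrolled = np.kron(rng.random((4, 4)), np.ones((32, 32)))    # synthetic "palm"
    probe_same = enrolled + 0.05 * rng.random((128, 128))
    probe_other = np.kron(rng.random((4, 4)), np.ones((32, 32)))
    code = block_energy_code(enrolled)
    print(normalised_euclidean(code, block_energy_code(probe_same)) <
          normalised_euclidean(code, block_energy_code(probe_other)))  # True
```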
Citations: 16