
Latest publications: 2013 8th Iranian Conference on Machine Vision and Image Processing (MVIP)

Fast semantic segmentation of aerial images based on color and texture
Pub Date : 2013-09-01 DOI: 10.1109/IRANIANMVIP.2013.6780004
M. Ghiasi, R. Amirfattahi
In this paper, a semantic segmentation method for aerial images is presented. Semantic segmentation allows segmentation and classification to be performed simultaneously in a single efficient step. The algorithm relies on color and texture descriptors. In the training phase, we first manually extract homogeneous areas and label each area semantically. Color and texture descriptors are then computed for each area in the training image. The pool of descriptors and their semantic labels is used to build two separate classifiers for color and texture. We tested our algorithm with a KNN classifier. To segment a new image, we over-segment it into a number of superpixels, compute texture and color descriptors for each superpixel, and classify it with the trained classifier. This labels the superpixels semantically; labeling all superpixels yields a segmentation map. We used Local Binary Pattern Histogram Fourier features and RGB color histograms as texture and color descriptors, respectively. The algorithm was applied to a large set of aerial images and achieved a success rate above 95%.
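The per-superpixel classification step (a color-histogram descriptor voted on by a KNN classifier) can be sketched as follows. This is only an illustration of the idea: the patch sizes, bin count, and toy "greenish"/"grey" classes are made-up assumptions, not the paper's data or exact descriptors.

```python
import numpy as np

def color_histogram(patch, bins=8):
    """Concatenated per-channel RGB histogram, normalised to sum to 1
    (a simple stand-in for the paper's color descriptor)."""
    hist = [np.histogram(patch[..., c], bins=bins, range=(0, 256))[0]
            for c in range(3)]
    h = np.concatenate(hist).astype(float)
    return h / h.sum()

def knn_label(query, train_descs, train_labels, k=3):
    """Majority vote among the k nearest training descriptors."""
    d = np.linalg.norm(train_descs - query, axis=1)
    nearest = train_labels[np.argsort(d)[:k]]
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[np.argmax(counts)]

# toy training set: class 0 = greenish patches, class 1 = grey patches
rng = np.random.default_rng(0)
green = [rng.integers(0, 80, (16, 16, 3)) + np.array([0, 120, 0])
         for _ in range(5)]
grey = [rng.integers(90, 150, (16, 16, 3)) for _ in range(5)]
X = np.array([color_histogram(p) for p in green + grey])
y = np.array([0] * 5 + [1] * 5)

query = color_histogram(rng.integers(0, 80, (16, 16, 3)) + np.array([0, 120, 0]))
print(knn_label(query, X, y))  # → 0 (greenish)
```

In the paper each superpixel would additionally carry an LBP-HF texture descriptor handled by a second classifier; the voting logic is the same.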
Citations: 9
Automatic dental CT image segmentation using mean shift algorithm
Pub Date : 2013-09-01 DOI: 10.1109/IRANIANMVIP.2013.6779962
Parinaz Mortaheb, M. Rezaeian, H. Soltanian-Zadeh
Identifying the structure and arrangement of the teeth is one of the dentists' requirements for performing various procedures such as diagnosing abnormalities, dental implant placement, and orthodontic planning. In this regard, robust segmentation of dental Computerized Tomography (CT) images is required. However, dental CT images present some major challenges that make segmentation a difficult process. In this research, we propose a multi-step approach for automatic segmentation of the teeth in dental CT images. The main steps are as follows: (1) primary segmentation to separate bony tissues from non-bony tissues; (2) separating the general region of the teeth from the other bony structures and fitting an arc curve in that region; (3) individual tooth region detection; (4) final segmentation using the mean shift algorithm with a newly defined feature space. The proposed algorithm has been applied to several Cone Beam Computed Tomography (CBCT) data sets, and quality assessment metrics are used to evaluate its performance. The evaluation indicates that the accuracy of the proposed method is more than 97 percent. Moreover, we compared the proposed method with thresholding, watershed, level set, and active contour methods, and our method shows an improvement over these techniques.
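The core of step 4 is mean shift mode seeking. A minimal flat-kernel sketch in one dimension (grey-level features) is shown below; the paper defines a richer feature space, and the bandwidth and sample intensities here are illustrative assumptions only.

```python
import numpy as np

def mean_shift_1d(samples, bandwidth, iters=30):
    """Flat-kernel mean shift: repeatedly move each point to the mean of
    the original samples within `bandwidth` of it; points that converge
    near the same mode share a cluster label."""
    shifted = samples.astype(float).copy()
    for _ in range(iters):
        for i, p in enumerate(shifted):
            shifted[i] = samples[np.abs(samples - p) < bandwidth].mean()
    labels = np.full(len(samples), -1)
    modes = []
    for i, p in enumerate(shifted):
        for j, m in enumerate(modes):
            if abs(p - m) < bandwidth:
                labels[i] = j
                break
        if labels[i] == -1:
            modes.append(p)
            labels[i] = len(modes) - 1
    return labels, modes

# two grey-level populations, e.g. tooth vs background intensities
grey = np.array([10.0, 11, 12, 10.5, 60, 61, 59, 60.5])
labels, modes = mean_shift_1d(grey, bandwidth=5)
print(labels)  # → [0 0 0 0 1 1 1 1]
```

In an image setting, each pixel's feature vector would be shifted the same way, and pixels sharing a mode form one segment.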
Citations: 15
Removal of high density impulse noise using a novel decision based adaptive weighted and trimmed median filter
Pub Date : 2013-09-01 DOI: 10.1109/IRANIANMVIP.2013.6780016
M. Nooshyar, M. Momeny
Impulse noise is one of the most important factors degrading image quality. In this paper, a novel technique is presented for detecting and removing impulse noise while significant image information, such as edges and texture, remains untouched. The proposed algorithm uses weighted windows of variable size and applies median filtering to them. Simulation results with various images and noise intensities show that the proposed algorithm outperforms state-of-the-art methods and increases the PSNR of the reconstructed image by up to 4 dB.
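A decision-based median filter of this family can be sketched as below: only pixels detected as impulses are replaced, and the window grows until it contains noise-free neighbours. This is a plain (unweighted, untrimmed) variant for illustration, not the authors' exact weighted-and-trimmed filter.

```python
import numpy as np

def adaptive_median(img, max_win=7):
    """Decision-based filtering: pixels equal to 0 or 255 are treated as
    impulses; the window around each impulse grows until it holds clean
    pixels, whose median replaces the impulse."""
    out = img.astype(float).copy()
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            if img[y, x] not in (0, 255):        # not an impulse → keep
                continue
            for half in range(1, max_win // 2 + 1):
                win = img[max(0, y - half):y + half + 1,
                          max(0, x - half):x + half + 1]
                good = win[(win != 0) & (win != 255)]
                if good.size:                    # enough clean neighbours
                    out[y, x] = np.median(good)
                    break
    return out.astype(img.dtype)

# salt-and-pepper corrupted flat patch
img = np.full((5, 5), 100, dtype=np.uint8)
img[2, 2] = 255   # salt
img[1, 3] = 0     # pepper
print(adaptive_median(img))  # impulses restored to 100
```

The decision step is what preserves edges: uncorrupted pixels are never touched, unlike a plain median filter applied everywhere.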
Citations: 10
A novel multiple kernel learning approach for semi-supervised clustering
Pub Date : 2013-09-01 DOI: 10.1109/IRANIANMVIP.2013.6780028
T. Zare, M. Sadeghi, H. R. Abutalebi
Distance metrics are widely used in various machine learning and pattern recognition algorithms, and a main issue in these algorithms is choosing the proper metric. In recent years, learning an appropriate distance metric has become a very active research field. In the kernelised version of distance metric learning algorithms, the data points are implicitly mapped into a higher-dimensional feature space and the learning process is performed in the resulting feature space. The performance of kernel-based methods depends heavily on the chosen kernel function, so selecting an appropriate kernel function and/or tuning its parameters poses significant challenges for such methods. Multiple Kernel Learning (MKL) addresses this problem by learning a linear combination of a number of predefined kernels. In this paper, we formulate the MKL problem in a semi-supervised metric learning framework. In the proposed approach, pairwise similarity constraints are used to adjust the weights of the combined kernels and simultaneously learn the appropriate distance metric. Using both synthetic and real-world datasets, we show that the proposed method outperforms some recently introduced semi-supervised metric learning approaches.
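The backbone of MKL is a convex combination of predefined base kernels. A minimal sketch with two base kernels and fixed weights is shown below; in the paper the weights would be learned from pairwise similarity constraints, a step omitted here for brevity.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    """Gaussian RBF kernel matrix over the rows of X."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def linear_kernel(X):
    """Plain inner-product kernel matrix."""
    return X @ X.T

def combined_kernel(X, weights):
    """Convex combination of predefined base kernels, the core object in
    MKL; here the weights are simply given rather than learned."""
    kernels = [rbf_kernel(X), linear_kernel(X)]
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                       # keep the combination convex
    return sum(wi * K for wi, K in zip(w, kernels))

X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
K = combined_kernel(X, [0.7, 0.3])
print(K.shape)  # → (3, 3)
```

Because each base kernel matrix is symmetric positive semi-definite, any convex combination is as well, so the combined K remains a valid kernel for downstream clustering.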
Citations: 2
Multimodal expression-invariant face recognition using dual-tree complex wavelet transform
Pub Date : 2013-09-01 DOI: 10.1109/IRANIANMVIP.2013.6779969
Fazael Ayatollahi, A. Raie, F. Hajati
A new multimodal face recognition method is proposed that extracts features of rigid and semi-rigid regions of the face using the Dual-Tree Complex Wavelet Transform (DT-CWT). DT-CWT decomposes range and intensity images into eight sub-images: six band-pass sub-images representing face details and two low-pass sub-images representing face approximations. In this work, a support vector machine (SVM) is used as the classifier. The proposed method has been evaluated on the BU-3DFE face dataset, which contains a wide range of expression changes. Results include an overall identification rate of 98.1% and an overall verification rate of 99.3% at a 0.1% false acceptance rate.
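The idea of splitting an image into low-pass approximation and band-pass detail sub-images can be illustrated with one level of an ordinary 2-D Haar transform. Note this is only a stand-in: the DT-CWT used in the paper produces six oriented band-pass sub-images and two low-pass ones, with near shift invariance that the plain Haar transform lacks.

```python
import numpy as np

def haar_level(img):
    """One level of a plain 2-D Haar transform over 2x2 blocks: a
    low-pass approximation plus three detail sub-images."""
    a = img[0::2, 0::2].astype(float)   # top-left of each 2x2 block
    b = img[0::2, 1::2].astype(float)   # top-right
    c = img[1::2, 0::2].astype(float)   # bottom-left
    d = img[1::2, 1::2].astype(float)   # bottom-right
    approx = (a + b + c + d) / 4
    col_detail = (a - b + c - d) / 4    # column differences: vertical edges
    row_detail = (a + b - c - d) / 4    # row differences: horizontal edges
    diag_detail = (a - b - c + d) / 4
    return approx, col_detail, row_detail, diag_detail

img = np.tile([[10, 20], [10, 20]], (4, 4))   # 8x8 vertical stripes
approx, cd, rd, dd = haar_level(img)
print(approx[0, 0], cd[0, 0], rd[0, 0])  # → 15.0 -5.0 0.0
```

The face descriptors in the paper are built from such sub-images (detail bands for local structure, approximation bands for coarse shape) before being fed to the SVM.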
Citations: 2
A fast and accurate steganalysis using Ensemble classifiers
Pub Date : 2013-09-01 DOI: 10.1109/IranianMVIP.2013.6779943
A. Torkaman, R. Safabakhsh
Steganographic methods now use increasingly sophisticated image models to increase security; consequently, steganalysis algorithms must build more accurate image models to detect them, and the number of extracted features keeps growing. Most modern steganalysis algorithms train a supervised classifier on the feature vectors. The most popular and accurate choice is the SVM, but its long training time inhibits the development of steganalysis. To solve this problem, we propose a fast and accurate steganalysis method based on an Ensemble classifier and Stacking. In this method, the relation between the base learners' decisions and the true decision is learned by another classifier; to do this, the base learners' decisions are mapped to a space of uncorrelated dimensions. The complexity of this method is much lower than that of the SVM, while detection accuracy improves. The proposed method is a fast and accurate classifier that can be used as part of any steganalysis algorithm. Its performance is demonstrated on two steganographic methods, nsF5 and Model Based Steganography, and compared with that of the Ensemble classifier. Experimental results show that the classification error and training time are lowered by 46% and 88%, respectively.
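Stacking in this sense (a meta-classifier trained on the base learners' decisions) can be sketched as below. The nearest-centroid learners, the one-feature-per-learner split, and training the meta-learner on held-in decisions are simplifying assumptions for brevity; proper stacking would use out-of-fold base predictions.

```python
import numpy as np

class NearestCentroid:
    """Tiny base learner: predict the label of the closest class mean."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.means = np.array([X[y == c].mean(axis=0) for c in self.classes])
        return self

    def predict(self, X):
        d = ((X[:, None, :] - self.means[None]) ** 2).sum(-1)
        return self.classes[d.argmin(axis=1)]

def stacked_predict(X_tr, y_tr, X_te, feats=(0, 1)):
    """Stacking sketch: each base learner sees one feature; a meta-learner
    then learns the mapping from base decisions to the true label."""
    bases = [NearestCentroid().fit(X_tr[:, [f]], y_tr) for f in feats]
    meta_tr = np.column_stack([b.predict(X_tr[:, [f]])
                               for b, f in zip(bases, feats)]).astype(float)
    meta = NearestCentroid().fit(meta_tr, y_tr)
    meta_te = np.column_stack([b.predict(X_te[:, [f]])
                               for b, f in zip(bases, feats)]).astype(float)
    return meta.predict(meta_te)

X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 0.8]])
y = np.array([0, 0, 1, 1])
print(stacked_predict(X, y, np.array([[0.0, 0.1], [1.0, 0.9]])))  # → [0 1]
```

The meta-learner is what distinguishes stacking from plain majority voting: it can learn which base learners to trust in which regions of the decision space.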
Citations: 0
Border detection of document images scanned from large books
Pub Date : 2013-09-01 DOI: 10.1109/IRANIANMVIP.2013.6779955
Maryam Shamqoli, H. Khosravi
Document images usually suffer from unwanted transformations such as skew and warping. When dealing with large books, another problem arises: when we capture a page of a large book with a digital camera or scanner, extra margins appear. The resulting document is often framed by a dark border and noisy text regions from the neighboring page. In this paper, we introduce a novel technique for enhancing document images by automatically detecting the document borders and cutting out the noisy area. Our methodology is based on projection profiles combined with an edge detection process. Experimental results on several document images, mainly historical pages with a small slope, indicate the effectiveness of the proposed technique.
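The projection-profile idea can be sketched as below for the dark-border case: sum darkness per row and per column, and keep the span between the first and last "clean" row/column. The darkness threshold and border fraction are illustrative parameters, and the paper additionally combines profiles with edge detection.

```python
import numpy as np

def crop_borders(img, dark_thresh=64, frac=0.5):
    """Projection-profile cropping: a row/column counts as scan border
    while more than `frac` of its pixels are dark; keep the span between
    the first and last clean row/column."""
    dark = img < dark_thresh
    clean_rows = np.where(dark.mean(axis=1) <= frac)[0]
    clean_cols = np.where(dark.mean(axis=0) <= frac)[0]
    return img[clean_rows[0]:clean_rows[-1] + 1,
               clean_cols[0]:clean_cols[-1] + 1]

# white page content framed by a dark scan border
page = np.zeros((10, 10), dtype=np.uint8)
page[2:8, 2:8] = 255
print(crop_borders(page).shape)  # → (6, 6)
```

For skewed pages, profiles alone blur the border transition, which is why the paper pairs them with an edge detection step.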
Citations: 5
A simple and efficient approach for 3D model decomposition
Pub Date : 2013-09-01 DOI: 10.1109/IRANIANMVIP.2013.6779939
Fattah Alizadeh, Alistair Sutherland, A. Dehghani
The number of 3D models is growing every day, and segmentation of such models has recently attracted a lot of attention. In this paper we propose a two-phase approach for segmenting 3D models. We leverage a well-known fact from electrostatics for both initial segment specification and boundary detection: the first phase locates the initial segments, which have higher charge density, while the second phase uses the minima rule and geodesic distance to find the boundary parts in the concave areas. The proposed approach has a great advantage over the similar approach proposed by Wu and Levine [1]. Experimental results on the SHREC 2007 dataset show promising results for partial matching in 3D model retrieval.
Citations: 1
Sparse based similarity measure for mono-modal image registration
Pub Date : 2013-09-01 DOI: 10.1109/IRANIANMVIP.2013.6780030
A. Ghaffari, E. Fatemizadeh
The similarity measure is a key component of image registration. Most traditional intensity-based similarity measures (e.g., SSD, CC, MI, and CR) assume a stationary image and pixel-by-pixel independence. Hence, perfect registration cannot be achieved, especially in the presence of spatially-varying intensity distortions and outlier objects that appear in one image but not in the other. Here, we suppose that non-stationary intensity distortion (such as a bias field or outliers) has a sparse representation in a transform domain. Based on this assumption, the zero norm (ℓ0) of the residual image between two registered images in the transform domain is introduced as a new similarity measure in the presence of non-stationary intensity. In this paper we replace the ℓ0 norm with the ℓ1 norm, a popular sparseness measure. This measure produces accurate registration results compared with other similarity measures such as SSD, MI, and Residual Complexity (RC).
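The measure, an ℓ1 norm of the residual image in a transform domain, can be sketched as follows. The 2-D FFT here is an illustrative stand-in transform chosen for the sketch: a smooth (bias-field-like) residual has a sparse spectrum, so a well-registered pair scores lower even under non-stationary intensity distortion.

```python
import numpy as np

def sparse_residual_measure(im1, im2):
    """ℓ1 norm of the residual image in a transform domain (2-D FFT here
    as an illustrative choice). Smaller = sparser residual = better
    registered up to a smooth intensity distortion."""
    residual = im1.astype(float) - im2.astype(float)
    return np.abs(np.fft.fft2(residual)).sum() / residual.size

rng = np.random.default_rng(1)
base = rng.random((32, 32))
bias = np.tile(np.linspace(0, 0.5, 32), (32, 1))   # smooth intensity ramp
shifted = np.roll(base, 3, axis=1)                 # misregistered version

aligned = sparse_residual_measure(base + bias, base)
misaligned = sparse_residual_measure(base + bias, shifted)
print(aligned < misaligned)  # → True
```

An SSD-style measure would heavily penalise the bias-field residual itself; the sparsity measure tolerates it because a smooth residual concentrates in few transform coefficients.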
Citations: 1
Decoupled active contour (DAC) optimization using wavelet edge detection and curvature based resampling
Pub Date : 2013-09-01 DOI: 10.1109/IRANIANMVIP.2013.6779942
Fahime Garmisirian, M. Mohaddesi, Z. Azimifar
Locating an accurate object boundary using active contours and deformable models plays an important role in computer vision, particularly in medical imaging applications. Powerful segmentation methods have been introduced to address limitations associated with initialization and poor convergence to boundary concavities. This paper proposes a method to improve one of the strongest recent segmentation methods, the decoupled active contour (DAC). We apply wavelet edge detection to the image, which increases contrast and yields more edge information, followed by an optimal update of the measurements using a Hidden Markov Model (HMM) with the Viterbi algorithm as an efficient solver. To obtain a more accurate boundary, at each iteration more points are injected into the high-curvature parts based on the snake's curvature, giving higher precision in those parts as well as in flat parts.
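The curvature-based resampling step can be sketched as below: estimate a discrete curvature at each contour point and inject extra points where it is high. The turning-angle curvature proxy and the threshold are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def turning_angles(contour):
    """Discrete curvature proxy: absolute turning angle between the
    incoming and outgoing edges at each point of a closed contour."""
    v_in = contour - np.roll(contour, 1, axis=0)
    v_out = np.roll(contour, -1, axis=0) - contour
    a = (np.arctan2(v_out[:, 1], v_out[:, 0])
         - np.arctan2(v_in[:, 1], v_in[:, 0]))
    return np.abs((a + np.pi) % (2 * np.pi) - np.pi)  # wrap to [0, pi]

def resample(contour, thresh=0.5):
    """Inject a midpoint after every high-curvature point, densifying
    corners while leaving flat runs untouched."""
    k = turning_angles(contour)
    nxt = np.roll(contour, -1, axis=0)
    out = []
    for p, q, kp in zip(contour, nxt, k):
        out.append(p)
        if kp > thresh:
            out.append((p + q) / 2)
    return np.array(out)

square = np.array([[0.0, 0], [1, 0], [2, 0], [2, 1], [2, 2],
                   [1, 2], [0, 2], [0, 1]])
print(len(resample(square)))  # → 12 (8 points + 4 corner midpoints)
```

Densifying only where curvature is high keeps the contour cheap on flat segments while giving the Viterbi search finer candidates around corners and concavities.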
Citations: 2