Fast semantic segmentation of aerial images based on color and texture
Pub Date: 2013-09-01 | DOI: 10.1109/IRANIANMVIP.2013.6780004
M. Ghiasi, R. Amirfattahi
In this paper, a semantic segmentation method for aerial images is presented. Semantic segmentation allows segmentation and classification to be performed simultaneously in a single efficient step. The algorithm relies on color and texture descriptors. In the training phase, we first manually extract homogeneous areas and label each area semantically. Color and texture descriptors are then computed for each area in the training image. The pool of descriptors and their semantic labels is used to build two separate classifiers for color and texture. We tested our algorithm with a KNN classifier. To segment a new image, we over-segment it into a number of superpixels, compute texture and color descriptors for each superpixel, and classify it with the trained classifier. This labels the superpixels semantically, and labeling all superpixels yields a segmentation map. We used Local Binary Pattern Histogram Fourier (LBP-HF) features and color histograms of RGB images as texture and color descriptors, respectively. The algorithm is applied to a large set of aerial images and achieves a success rate above 95%.
{"title":"Fast semantic segmentation of aerial images based on color and texture","authors":"M. Ghiasi, R. Amirfattahi","doi":"10.1109/IRANIANMVIP.2013.6780004","DOIUrl":"https://doi.org/10.1109/IRANIANMVIP.2013.6780004","url":null,"abstract":"In this paper, a semantic segmentation method for aerial images is presented. Semantic segmentation allows the task of segmentation and classification to be performed simultaneously in a single efficient step. This algorithm relies on descriptors of color and texture. In the training phase, we first manually extract homogenous areas and label each area semantically. Then color and texture descriptors for each area in the training image are computed. The pool of descriptors and their semantic label are used to build two separate classifiers for color and texture. We tested our algorithm by KNN classifier. To segment a new image, we over-segment it into a number of superpixels. Then we compute texture and color descriptors for each superpixel and classify it based on the trained classifier. This labels the superpixels semantically. Labeling all superpixels provides a segmentation map. We used local binary pattern histogram fourier features and color histograms of RGB images as texture and color descriptors respectively. This algorithm is applied to a large set of aerial images and is proved to have above 95% success rate.","PeriodicalId":297204,"journal":{"name":"2013 8th Iranian Conference on Machine Vision and Image Processing (MVIP)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129477025","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automatic dental CT image segmentation using mean shift algorithm
Pub Date: 2013-09-01 | DOI: 10.1109/IRANIANMVIP.2013.6779962
Parinaz Mortaheb, M. Rezaeian, H. Soltanian-Zadeh
Identifying the structure and arrangement of the teeth is one of the dentists' requirements for performing various procedures such as diagnosing abnormalities, dental implant placement, and orthodontic planning. In this regard, robust segmentation of dental Computerized Tomography (CT) images is required. However, dental CT images present some major challenges that make segmentation a difficult process. In this research, we propose a multi-step approach for automatic segmentation of the teeth in dental CT images. The main steps of the method are as follows: (1) primary segmentation to separate bony tissues from non-bony tissues; (2) separating the general region of the teeth from the other bony structures and fitting an arch curve in that region; (3) individual tooth region detection; (4) final segmentation using the mean shift algorithm on a newly defined feature space. The proposed algorithm has been applied to several Cone Beam Computed Tomography (CBCT) data sets, and quality assessment metrics are used to evaluate its performance. The evaluation indicates that the accuracy of the proposed method is more than 97 percent. Moreover, we compared the proposed method with thresholding, watershed, level set, and active contour methods, and our method shows an improvement over these techniques.
{"title":"Automatic dental CT image segmentation using mean shift algorithm","authors":"Parinaz Mortaheb, M. Rezaeian, H. Soltanian-Zadeh","doi":"10.1109/IRANIANMVIP.2013.6779962","DOIUrl":"https://doi.org/10.1109/IRANIANMVIP.2013.6779962","url":null,"abstract":"Identifying the structure and arrangement of the teeth is one of the dentists' requirements for performing various procedures such as diagnosing abnormalities, dental implant and orthodontic planning. In this regard, robust segmentation of dental Computerized Tomography (CT) images is required. However, dental CT images present some major challenges for the segmentation that make it difficult process. In this research, we propose a multi-step approach for automatic segmentation of the teeth in dental CT images. The main steps of this method are presented as follows: 1-Primary segmentation to classify bony tissues from nonbony tissues. 2-Separating the general region of the teeth structure from the other bony structures and arc curve fitting in the region. 3-Individual tooth region detection. 4-Final segmentation using mean shift algorithm by defining a new feature space. The proposed algorithm has been applied to several Cone Beam Computed Tomography (CBCT) data sets and quality assessment metrics are used to evaluate the performance of the algorithm. The evaluation indicates that the accuracy of proposed method is more than 97 percent. Moreover, we compared the proposed method with thresholding, watershed, level set and active contour methods and our method shows an improvement in compare with other techniques.","PeriodicalId":297204,"journal":{"name":"2013 8th Iranian Conference on Machine Vision and Image Processing (MVIP)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129112742","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Removal of high density impulse noise using a novel decision based adaptive weighted and trimmed median filter
Pub Date: 2013-09-01 | DOI: 10.1109/IRANIANMVIP.2013.6780016
M. Nooshyar, M. Momeny
Impulse noise is one of the most important factors degrading image quality. In this paper, a novel technique is presented for detecting and removing impulse noise while the significant information of the image, such as edges and texture, remains untouched. The proposed algorithm uses weighted windows of variable size and applies median filtering to them. Simulation results, with various images and noise intensities, show that the proposed algorithm performs better than state-of-the-art methods and increases the PSNR of the reconstructed image by up to 4 dB.
{"title":"Removal of high density impulse noise using a novel decision based adaptive weighted and trimmed median filter","authors":"M. Nooshyar, M. Momeny","doi":"10.1109/IRANIANMVIP.2013.6780016","DOIUrl":"https://doi.org/10.1109/IRANIANMVIP.2013.6780016","url":null,"abstract":"Impulse noise is one of the most important factors in degrading of image quality. In this paper, a novel technique is presented for detecting and removing of impulse noise, while the significant information of image, such as edges and texture, are remind untouched. The proposed algorithm use the weighted window with variable sizes and apply median filtering on them. Simulation results, with various images and noise intensities, show that the proposed algorithm has better performance compared with state of the art methods and increases the PSNR value (of the reconstructed image) up to 4dBs.","PeriodicalId":297204,"journal":{"name":"2013 8th Iranian Conference on Machine Vision and Image Processing (MVIP)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129295482","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A novel multiple kernel learning approach for semi-supervised clustering
Pub Date: 2013-09-01 | DOI: 10.1109/IRANIANMVIP.2013.6780028
T. Zare, M. Sadeghi, H. R. Abutalebi
Distance metrics are widely used in various machine learning and pattern recognition algorithms, and a main issue in these algorithms is choosing the proper distance metric. In recent years, learning an appropriate distance metric has become a very active research field. In kernelised versions of distance metric learning algorithms, the data points are implicitly mapped into a higher-dimensional feature space and the learning process is performed in the resulting feature space. The performance of kernel-based methods heavily depends on the chosen kernel function, so selecting an appropriate kernel function and/or tuning its parameter(s) poses a significant challenge for such methods. Multiple Kernel Learning (MKL) addresses this problem by learning a linear combination of a number of predefined kernels. In this paper, we formulate the MKL problem in a semi-supervised metric learning framework. In the proposed approach, pairwise similarity constraints are used to adjust the weights of the combined kernels and simultaneously learn the appropriate distance metric. Using both synthetic and real-world datasets, we show that the proposed method outperforms some recently introduced semi-supervised metric learning approaches.
{"title":"A novel multiple kernel learning approach for semi-supervised clustering","authors":"T. Zare, M. Sadeghi, H. R. Abutalebi","doi":"10.1109/IRANIANMVIP.2013.6780028","DOIUrl":"https://doi.org/10.1109/IRANIANMVIP.2013.6780028","url":null,"abstract":"Distance metrics are widely used in various machine learning and pattern recognition algorithms. A main issue in these algorithms is choosing the proper distance metric. In recent years, learning an appropriate distance metric has become a very active research field. In the kernelised version of distance metric learning algorithms, the data points are implicitly mapped into a higher dimensional feature space and the learning process is performed in the resulted feature space. The performance of the kernel-based methods heavily depends on the chosen kernel function. So, selecting an appropriate kernel function and/or tuning its parameter(s) impose significant challenges in such methods. The Multiple Kernel Learning theory (MKL) addresses this problem by learning a linear combination of a number of predefined kernels. In this paper, we formulate the MKL problem in a semi-supervised metric learning framework. In the proposed approach, pairwise similarity constraints are used to adjust the weights of the combined kernels and simultaneously learn the appropriate distance metric. Using both synthetic and real-world datasets, we show that the proposed method outperforms some recently introduced semi-supervised metric learning approaches.","PeriodicalId":297204,"journal":{"name":"2013 8th Iranian Conference on Machine Vision and Image Processing (MVIP)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125594253","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multimodal expression-invariant face recognition using dual-tree complex wavelet transform
Pub Date: 2013-09-01 | DOI: 10.1109/IRANIANMVIP.2013.6779969
Fazael Ayatollahi, A. Raie, F. Hajati
A new multimodal face recognition method is proposed which extracts features from rigid and semi-rigid regions of the face using the Dual-Tree Complex Wavelet Transform (DT-CWT). DT-CWT decomposes range and intensity images into eight sub-images, consisting of six band-pass sub-images that represent face details and two low-pass sub-images that represent face approximations. In this work, a support vector machine (SVM) is used as the classifier. The proposed method has been evaluated on the BU-3DFE face dataset, which contains a wide range of expression changes. Findings include an overall identification rate of 98.1% and an overall verification rate of 99.3% at a 0.1% false acceptance rate.
{"title":"Multimodal expression-invariant face recognition using dual-tree complex wavelet transform","authors":"Fazael Ayatollahi, A. Raie, F. Hajati","doi":"10.1109/IRANIANMVIP.2013.6779969","DOIUrl":"https://doi.org/10.1109/IRANIANMVIP.2013.6779969","url":null,"abstract":"A new multimodal face recognition method which extracts features of rigid and semi-rigid regions of the face using Dual-Tree Complex Wavelet Transform (DT-CWT) is proposed. DT-CWT decomposes range and intensity images into eight sub-images consisting of six band-pass sub-images to represent face details and two low-pass sub-images to represent face approximates. In this work, support vector machine (SVM) has been used as the classifier. The proposed method has been evaluated using the face BU-3DFE dataset containing a wide range of expression changes. Findings include the overall identification rate of 98.1% and the overall verification rate of 99.3% at 0.1% false acceptance rate.","PeriodicalId":297204,"journal":{"name":"2013 8th Iranian Conference on Machine Vision and Image Processing (MVIP)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121558137","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A fast and accurate steganalysis using Ensemble classifiers
Pub Date: 2013-09-01 | DOI: 10.1109/IranianMVIP.2013.6779943
A. Torkaman, R. Safabakhsh
Nowadays, steganographic methods use increasingly sophisticated image models to increase security; consequently, steganalysis algorithms must build more accurate models of images to detect them, and the number of extracted features keeps growing. Most modern steganalysis algorithms train a supervised classifier on the feature vectors. The most popular and accurate one is the SVM, but its high training time inhibits the development of steganalysis. To solve this problem, in this paper we propose a fast and accurate steganalysis method based on an Ensemble classifier and stacking. In this method, the relation between the base learners' decisions and the true decision is learned by another classifier. To do this, the base learners' decisions are mapped to a space of uncorrelated dimensions. The complexity of this method is much lower than that of the SVM, while it improves detection accuracy. The proposed method is a fast and accurate classifier that can be used as a part of any steganalysis algorithm. Its performance is demonstrated on two steganographic methods, namely nsF5 and Model Based Steganography, and compared to that of the Ensemble classifier. Experimental results show that the classification error and training time are lowered by 46% and 88%, respectively.
{"title":"A fast and accurate steganalysis using Ensemble classifiers","authors":"A. Torkaman, R. Safabakhsh","doi":"10.1109/IranianMVIP.2013.6779943","DOIUrl":"https://doi.org/10.1109/IranianMVIP.2013.6779943","url":null,"abstract":"Nowadays the steganographic methods use the more sophisticated image models to increase security; consequently, steganalysis algorithm should build the more accurate models of images to detect them. So, the number of extracted feature is increasing. Most modern steganalysis algorithms train a supervised classifier on the feature vectors. The most popular and accurate one is SVM, but the high training time of SVM inhibits the development of steganalysis. To solve this problem, in this paper we propose a fast and accurate steganalysis methods based on Ensemble classifier and Stacking. In this method, the relation between basic learners decisions and true decision is learned by another classifier. To do this, basic learners decisions are mapped to space of uncorrelated dimensions. The complexity of this method is much lower than that of SVM, while our method improves detection accuracy. Proposed method is a fast and accurate classifier that can be used as a part of any steganalysis algorithm. Performance of this method is demonstrated on two steganographic methods, namely nsF5 and Model Based Steganography. The performance of proposed method is compared to that of Ensemble classifier. Experimental results show that the classification error and training time are lowered by 46% and 88%, respectively.","PeriodicalId":297204,"journal":{"name":"2013 8th Iranian Conference on Machine Vision and Image Processing (MVIP)","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116657729","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Border detection of document images scanned from large books
Pub Date: 2013-09-01 | DOI: 10.1109/IRANIANMVIP.2013.6779955
Maryam Shamqoli, H. Khosravi
Document images usually suffer from unwanted transformations such as skew and warping. Large books introduce an additional problem: when a page of a large book is captured with a digital camera or scanner, extra margins appear, and the resulting document is often framed by a dark border and by noisy text regions from the neighboring page. In this paper, we introduce a novel technique for enhancing document images by automatically detecting the document borders and cutting out the noisy areas. Our methodology is based on projection profiles combined with an edge detection process. Experimental results on several document images, mainly historical ones with a small slope, indicate the effectiveness of the proposed technique.
{"title":"Border detection of document images scanned from large books","authors":"Maryam Shamqoli, H. Khosravi","doi":"10.1109/IRANIANMVIP.2013.6779955","DOIUrl":"https://doi.org/10.1109/IRANIANMVIP.2013.6779955","url":null,"abstract":"Document images usually suffer from some unwanted transformations like skew and warping. When dealing with large books, another problem is also introduced; when we capture a page of a large book using digital camera or scanner, some extra margins appears. The resulting document is often framed by a dark border and noisy text regions from neighboring page. In this paper, we introduce a novel technique for enhancing the document images by automatically detecting the document borders and cutting out noisy area. Our methodology is based on projecting profiles combined with an edge detection process. Experimental results on several document images, mainly historical with a small slope, indicate the effectiveness of the proposed technique.","PeriodicalId":297204,"journal":{"name":"2013 8th Iranian Conference on Machine Vision and Image Processing (MVIP)","volume":"415 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123860225","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A simple and efficient approach for 3D model decomposition
Pub Date: 2013-09-01 | DOI: 10.1109/IRANIANMVIP.2013.6779939
Fattah Alizadeh, Alistair Sutherland, A. Dehghani
The number of 3D models is growing every day, and segmentation of such models has recently attracted a lot of attention. In this paper we propose a two-phase approach for segmentation of 3D models. We leverage a well-known fact from electrical physics for both initial segment specification and boundary detection. The first phase locates the initial segments, which have higher charge density, while the second phase leverages the minima rule and geodesic distance to find the boundary parts in the concave areas. The proposed approach has a great advantage over the similar approach proposed by Wu and Levine [1]. The experimental results on the SHREC 2007 dataset show promising results for partial matching in 3D model retrieval.
{"title":"A simple and efficient approach for 3D model decomposition","authors":"Fattah Alizadeh, Alistair Sutherland, A. Dehghani","doi":"10.1109/IRANIANMVIP.2013.6779939","DOIUrl":"https://doi.org/10.1109/IRANIANMVIP.2013.6779939","url":null,"abstract":"Number of 3D models is growing every day. Segmentation of such models has recently attracted lot of attentions. In this paper we propose a two-phase approach for segmentation of 3D models. We leveraged a well-known fact from electrical physics for both initial segment specification and boundary detections. The first phase tries to locate the initial segments having higher charge density while the second phase, leverages the minima rule and geodesic distance to find the boundary parts in the concave areas. The proposed approach has a great advantage over the similar approach proposed by Wu and Levine [1]. The experimental result on the SHREC 2007 dataset show the promising results for partial matching in 3D model retrieval.","PeriodicalId":297204,"journal":{"name":"2013 8th Iranian Conference on Machine Vision and Image Processing (MVIP)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114253355","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sparse based similarity measure for mono-modal image registration
Pub Date: 2013-09-01 | DOI: 10.1109/IRANIANMVIP.2013.6780030
A. Ghaffari, E. Fatemizadeh
The similarity measure is a key element in image registration. Most traditional intensity-based similarity measures (e.g., SSD, CC, MI, and CR) assume a stationary image and pixel-by-pixel independence. Hence, perfect image registration cannot be achieved, especially in the presence of spatially-varying intensity distortions and outlier objects that appear in one image but not in the other. Here, we suppose that the non-stationary intensity distortion (such as a bias field or outliers) has a sparse representation in a transform domain. Based on this assumption, the zero norm (ℓ0) of the residual image between two registered images in the transform domain is introduced as a new similarity measure in the presence of non-stationary intensity. In this paper we replace the ℓ0 norm with the ℓ1 norm, which is a popular sparseness measure. This measure produces accurate registration results compared to other similarity measures such as SSD, MI, and Residual Complexity (RC).
{"title":"Sparse based similarity measure for mono-modal image registration","authors":"A. Ghaffari, E. Fatemizadeh","doi":"10.1109/IRANIANMVIP.2013.6780030","DOIUrl":"https://doi.org/10.1109/IRANIANMVIP.2013.6780030","url":null,"abstract":"Similarity measure is an important key in image registration. Most traditional intensity-based similarity measures (e.g., SSD, CC, MI, and CR) assume stationary image and pixel by pixel independence. Hence, perfect image registration cannot be achieved especially in presence of spatially-varying intensity distortions and outlier objects that appear in one image but not in the other. Here, we suppose that non stationary intensity distortion (such as Bias field or Outlier) has sparse representation in transformation domain. Based on this as-sumption, the zero norm (ℓ0)of the residual image between two registered images in transform domain is introduced as a new similarity measure in presence of non-stationary inten-sity. In this paper we replace ℓ0 norm with ℓ1 norm which is a popular sparseness measure. This measure produces accurate registration results in compare to other similarity measure such as SSD, MI and Residual Complexity RC.","PeriodicalId":297204,"journal":{"name":"2013 8th Iranian Conference on Machine Vision and Image Processing (MVIP)","volume":"41 4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131137632","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Decoupled active contour (DAC) optimization using wavelet edge detection and curvature based resampling
Pub Date: 2013-09-01 | DOI: 10.1109/IRANIANMVIP.2013.6779942
Fahime Garmisirian, M. Mohaddesi, Z. Azimifar
Locating an accurate object boundary using active contours and deformable models plays an important role in computer vision, particularly in medical imaging applications. Powerful segmentation methods have been introduced to address limitations associated with initialization and poor convergence to boundary concavities. This paper proposes a method to improve one of the strongest recent segmentation methods, the decoupled active contour (DAC). Here we apply wavelet edge detection to the image, which increases contrast and provides more information about edges, followed by an optimal update of the measurements using a Hidden Markov Model (HMM) with the Viterbi search as an efficient solver. To obtain a more accurate boundary, at each iteration additional points are injected into the high-curvature parts based on the snake curvature, so that precision improves in these parts as well as in the flat parts.
{"title":"Decoupled active contour (DAC) optimization using wavelet edge detection and curvature based resampling","authors":"Fahime Garmisirian, M. Mohaddesi, Z. Azimifar","doi":"10.1109/IRANIANMVIP.2013.6779942","DOIUrl":"https://doi.org/10.1109/IRANIANMVIP.2013.6779942","url":null,"abstract":"Locating an accurate desired object boundary using active contours and deformable models plays an important role in computer vision, particularly in medical imaging applications. Powerful segmentation methods have been introduced to address limitations associated with initialization and poor convergence to boundary concavities. This paper proposes a method to improve one of the strongest and recent segmentation methods, called decoupled active contour (DAC). Here we apply Wavelet edge detection on the image which cause it to have more contrast to have more information about edges, followed by an optimum updating on the measurements using Hidden Markov Model (HMM) and the Viterbi search as an efficient solver. In order to have a more accurate boundary at each iteration more points are injected in the high curvature parts based on the snake curvature so we will have more precision in these part and also flat parts.","PeriodicalId":297204,"journal":{"name":"2013 8th Iranian Conference on Machine Vision and Image Processing (MVIP)","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134457534","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}