Pub Date: 2013-09-01 | DOI: 10.1109/IRANIANMVIP.2013.6779973
N. Alamdari, E. Fatemizadeh
In the last few years there has been growing interest in using functional Magnetic Resonance Imaging (fMRI) for brain mapping. Decoding brain patterns in fMRI data requires reliable and accurate classifiers. Towards this goal, we compared the performance of eleven popular pattern recognition methods. Applying dimensionality reduction before pattern recognition can improve classification performance; we therefore compared seven methods on region-of-interest (ROI) data to answer the question: which dimensionality reduction procedure performs best? In both tasks, in addition to measuring prediction accuracy, we estimated the standard deviation of the accuracies to identify the more reliable methods. Based on all results, we suggest using support vector machines with a linear kernel (C-SVM and v-SVM), or a random forest classifier, on low-dimensional subsets prepared by the Active or maxDis feature selection methods to classify brain activity patterns more efficiently.
Title: Comparison of classification and dimensionality reduction methods used in fMRI decoding
Published in: 2013 8th Iranian Conference on Machine Vision and Image Processing (MVIP)
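The reliability criterion described above, mean prediction accuracy together with its standard deviation across folds, can be sketched as follows. The per-fold accuracy values and the subset of methods are illustrative, not the paper's actual results.

```python
# Hypothetical per-fold accuracies for three of the compared classifiers;
# the numbers are illustrative, not taken from the paper.
fold_acc = {
    "C-SVM (linear)": [0.82, 0.85, 0.83, 0.84],
    "v-SVM (linear)": [0.81, 0.86, 0.82, 0.86],
    "random forest":  [0.80, 0.84, 0.79, 0.83],
}

def mean_std(xs):
    """Mean and (population) standard deviation of a list of accuracies."""
    m = sum(xs) / len(xs)
    var = sum((x - m) ** 2 for x in xs) / len(xs)
    return m, var ** 0.5

# Rank by mean accuracy; report the std as a reliability indicator.
ranked = sorted(fold_acc.items(), key=lambda kv: -mean_std(kv[1])[0])
for name, accs in ranked:
    m, s = mean_std(accs)
    print(f"{name}: {m:.3f} +/- {s:.3f}")
```

A method with a slightly lower mean but a much smaller standard deviation may be the more reliable choice, which is why the paper reports both.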
Pub Date: 2013-09-01 | DOI: 10.1109/IRANIANMVIP.2013.6779941
V. Soleimani, F. H. Vincheh, Ehsan Zare
In this paper we describe the design and implementation of a powerful, fast, and compact simple 3D modeler (SM3D). Besides saving cost and time (due to its high processing speed), the application lets users create 3D objects with minimal system resources. Ease of learning and use is another strength. The source code is written modularly, using classes and Dynamic-Link Library files; this separates the core from the user interface, so the application can easily be extended in the future. The application supports creating primitive objects as well as applying advanced transformations and modifiers. Moreover, individual points of an object can be selected and moved. Camera handling, camera settings, and creating custom viewpoints are further professional features. Finally, the application can save and load object data from a file and export objects to other popular file formats.
Title: SM3D studio: A 3D model constructor
Published in: 2013 8th Iranian Conference on Machine Vision and Image Processing (MVIP)
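As a rough illustration of the transformation machinery such a modeler applies to objects, here is a minimal sketch of moving a vertex with a homogeneous 4×4 matrix; the matrix and vertex values are illustrative and not from SM3D's source.

```python
# Minimal sketch of applying an affine transform to an object's vertex,
# the kind of operation a 3D modeler performs internally.
# The translation values and vertex are illustrative.

def apply_transform(matrix, vertex):
    """Multiply a 4x4 row-major matrix by a vertex in homogeneous coordinates."""
    x, y, z = vertex
    v = (x, y, z, 1.0)
    return tuple(sum(matrix[r][c] * v[c] for c in range(4)) for r in range(3))

# Translation by (2, 0, -1).
translate = [
    [1, 0, 0,  2],
    [0, 1, 0,  0],
    [0, 0, 1, -1],
    [0, 0, 0,  1],
]

cube_corner = (1.0, 1.0, 1.0)
print(apply_transform(translate, cube_corner))  # -> (3.0, 1.0, 0.0)
```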
Pub Date: 2013-09-01 | DOI: 10.1109/IRANIANMVIP.2013.6780023
M. Rostami, R. Ghaderi, M. Ezoji, J. Ghasemi
One of the most commonly used methods for Magnetic Resonance Imaging (MRI) segmentation is Fuzzy C-Means (FCM), which preserves more information from the images than competing methods. However, because it uses pixel intensity as the key feature for clustering, standard FCM is sensitive to noise. In this study, in addition to intensity, the mean of each pixel's neighbourhood and the largest singular value of the neighbourhood are used as features. A method for segmenting MRI images is then presented which combines FCM with a Radial Basis Function (RBF) neural network and partly overcomes the limitations of standard FCM.
Title: Brain MRI segmentation using the mixture of FCM and RBF neural network
Published in: 2013 8th Iranian Conference on Machine Vision and Image Processing (MVIP)
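The three per-pixel features described above can be sketched as follows; the 3×3 window size is an assumption.

```python
import numpy as np

# Sketch of the three per-pixel features: intensity, neighbourhood mean,
# and the largest singular value of the neighbourhood patch.
# The 3x3 window size is an assumption.

def pixel_features(img, r, c, k=3):
    h = k // 2
    patch = img[r - h:r + h + 1, c - h:c + h + 1].astype(float)
    intensity = float(img[r, c])
    mean = patch.mean()
    # Largest singular value of the k x k neighbourhood.
    sigma_max = float(np.linalg.svd(patch, compute_uv=False)[0])
    return intensity, mean, sigma_max

img = np.arange(25).reshape(5, 5)
print(pixel_features(img, 2, 2))
```

Unlike raw intensity, the mean and the leading singular value summarize local structure, which is what makes the clustering less sensitive to isolated noisy pixels.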
Pub Date: 2013-09-01 | DOI: 10.1109/IRANIANMVIP.2013.6779998
A. Pourmohammad, S. Poursajadi, S. Karimifar
Geometrical and radiometrical corrections are important for scene matching applications. We consider applications in which geometrical errors have already been removed using 3D inertial sensors. In such cases, Normalized Cross-Correlation (NCC) is the commonly used scene matching method, and matching a pattern image (mask) to an image still requires correcting radiometric errors such as illumination (contrast) variations. In this paper we show that correlating the mask with a histogram-matched image, instead of the raw image, improves the correlation value. We first match the histogram of the image to the histogram of the mask, so that the two images have similar contrast, and then correlate them using NCC together with the root mean square error (RMSE). Simulation results confirm that using NCC and RMSE simultaneously is fast and real-time capable, and that matching the histogram of the received image to that of the mask improves the correlation value. We also show that using edge-detected versions of the mask and the histogram-matched image yields the best results.
Title: Scene matching NCC value improvement based on contrast matching
Published in: 2013 8th Iranian Conference on Machine Vision and Image Processing (MVIP)
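The histogram-matching-then-NCC pipeline can be sketched as follows; the rank-based matching and the synthetic low-contrast image are illustrative simplifications.

```python
import numpy as np

# Sketch of the pipeline: match the image's histogram to the mask's via rank
# mapping, then compute the normalized cross-correlation (NCC).
# The test images are synthetic and illustrative.

def histogram_match(source, template):
    """Map source intensities so their sorted ranks carry template's values."""
    s = source.ravel()
    order = np.argsort(s)
    matched = np.empty(s.shape, dtype=float)
    matched[order] = np.sort(template.ravel()).astype(float)
    return matched.reshape(source.shape)

def ncc(a, b):
    """Normalized cross-correlation of two equally sized images."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

mask = np.random.default_rng(0).integers(0, 256, (8, 8))
dark = (mask * 0.4 + 10).astype(int)   # same scene, lower contrast
print(ncc(mask.astype(float), histogram_match(dark, mask)))
```

Because the contrast change is monotonic, rank-based histogram matching restores the mask's intensity distribution and the NCC value returns close to 1.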
Pub Date: 2013-09-01 | DOI: 10.1109/IRANIANMVIP.2013.6779952
M. Ziaratban, F. Bagheri
Textline segmentation is an important preprocessing step before word recognition. Handwritten texts contain complex lines, such as connected or overlapping, multi-skewed, and curved textlines. To overcome these problems, the proposed approach extracts locally reliable text regions for each block of a handwritten text. The text image is first filtered by a set of directional 2D filters, and the filtered images are divided into a number of overlapping blocks. The filtered block with the highest contrast is selected for text region detection. Experiments show that the proposed method accurately segments complex handwritten textlines.
Title: Extracting local reliable text regions to segment complex handwritten textlines
Published in: 2013 8th Iranian Conference on Machine Vision and Image Processing (MVIP)
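The block-contrast selection step can be sketched as follows; the block size, the 50% overlap, and the use of standard deviation as a contrast measure are assumptions.

```python
import numpy as np

# Sketch of one step of the approach: split a (filtered) image into
# overlapping blocks and pick the block with the highest contrast.
# Block size and 50% overlap are assumptions.

def best_block(img, size=4):
    step = size // 2                       # 50% overlap between blocks
    best, best_contrast = None, -1.0
    for r in range(0, img.shape[0] - size + 1, step):
        for c in range(0, img.shape[1] - size + 1, step):
            block = img[r:r + size, c:c + size].astype(float)
            contrast = block.std()         # std as a simple contrast measure
            if contrast > best_contrast:
                best, best_contrast = (r, c), contrast
    return best, best_contrast

img = np.zeros((8, 8))
img[4:8, 4:8] = np.array([[0, 255] * 2, [255, 0] * 2] * 2)  # high-contrast patch
print(best_block(img))
```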
Pub Date: 2013-09-01 | DOI: 10.1109/IRANIANMVIP.2013.6779997
Amin Abdullahzadeh, F. Mohanna
In this paper, a specific region called the affine noisy invariant region is extracted from the query and database images to support accurate retrieval under different attacks. A 64×1 codebook-based feature vector is then obtained from this region by vector quantization, with the codebook generated by the Linde-Buzo-Gray algorithm; this reduces the computation needed to compare retrieval features. A number of texture and frequency-domain features are also computed for the region. Combining these two groups of feature vectors improves the efficiency of the retrieval system. Furthermore, the particle swarm optimization algorithm is applied to optimize the weighting coefficients used to combine the feature vectors. The experimental results show a real-time content-based image retrieval system with higher accuracy and acceptable retrieval time.
Title: Applying specific region frequency and texture features on content-based image retrieval
Published in: 2013 8th Iranian Conference on Machine Vision and Image Processing (MVIP)
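The Linde-Buzo-Gray codebook generation mentioned above can be sketched in simplified form: start from the global mean, repeatedly split each codeword, and refine with Lloyd iterations. The data, codebook size, and split factor are illustrative.

```python
import numpy as np

# Simplified sketch of Linde-Buzo-Gray (LBG) codebook generation.
# Data, codebook size, epsilon, and iteration count are illustrative.

def lbg(data, size, eps=0.01, iters=10):
    codebook = data.mean(axis=0, keepdims=True)     # start from global mean
    while len(codebook) < size:
        # Split every codeword into a perturbed pair.
        codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])
        for _ in range(iters):                      # Lloyd refinement
            d = np.linalg.norm(data[:, None] - codebook[None], axis=2)
            nearest = d.argmin(axis=1)
            for k in range(len(codebook)):
                if (nearest == k).any():
                    codebook[k] = data[nearest == k].mean(axis=0)
    return codebook

rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0, 0.1, (50, 2)), rng.normal(5, 0.1, (50, 2))])
print(lbg(data, 2))
```

With a 64-entry codebook, each region is summarized by its codeword statistics, which is what keeps the feature comparison cheap at query time.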
Pub Date: 2013-09-01 | DOI: 10.1109/IRANIANMVIP.2013.6779985
M. Afzali, E. Fatemizadeh, H. Soltanian-Zadeh
Diffusion Tensor Imaging (DTI) is a common method for investigating brain white matter. It assumes that the diffusion of water molecules is Gaussian, and therefore fails at fiber crossings, where this assumption does not hold. High Angular Resolution Diffusion Imaging (HARDI) allows a more accurate investigation of the microstructure of brain white matter and can represent fiber crossings within each voxel. Because HARDI contains complex fiber orientation information, registering these images is more complicated than registering scalar images. In this paper, we propose a HARDI registration algorithm based on feature vectors extracted from the Orientation Distribution Function (ODF) in each voxel. The HAMMER similarity measure is used to match the feature vectors, and thin-plate spline (TPS) registration is used for spatial registration of the skeleton and its neighbors. A re-orientation strategy is applied to re-orient the ODFs after spatial registration. Finally, we evaluate the method using the differences in principal diffusion direction and show that using the skeleton as a landmark in the registration yields accurate alignment of HARDI data.
Title: High angular resolution diffusion image registration
Published in: 2013 8th Iranian Conference on Machine Vision and Image Processing (MVIP)
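The evaluation criterion mentioned above, the difference in principal diffusion direction, can be sketched as an axial angle between unit vectors; the example directions are illustrative.

```python
import numpy as np

# Sketch of the evaluation criterion: angle between principal diffusion
# directions before and after registration. Example vectors are illustrative.

def principal_direction_angle(v1, v2):
    """Angle (degrees) between two direction vectors, sign-invariant."""
    v1 = v1 / np.linalg.norm(v1)
    v2 = v2 / np.linalg.norm(v2)
    # Diffusion directions are axial: v and -v describe the same fiber.
    return float(np.degrees(np.arccos(np.clip(abs(v1 @ v2), -1.0, 1.0))))

print(principal_direction_angle(np.array([1.0, 0, 0]), np.array([0, 1.0, 0])))   # 90 deg
print(principal_direction_angle(np.array([1.0, 0, 0]), np.array([-1.0, 0, 0])))  # 0 deg
```

Taking the absolute value of the dot product makes the measure sign-invariant, since a fiber direction and its negation are physically identical.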
Pub Date: 2013-09-01 | DOI: 10.1109/IRANIANMVIP.2013.6779960
Mohsen Azizabadi, A. Behrad
Nowadays, hardware implementation of image and video processing algorithms is highly attractive, and the need for real-time processing makes it inevitable. In most image and video processing algorithms, pre-processing filters are the first and most important stage. In this paper, we propose new hardware architectures for implementing image filters, including Gaussian, median, and weighted median filters. The proposed architectures optimize the filter implementation for speed and gate usage. They are implemented and synthesized as an ASIC in a 65 nm technology, and implementation figures such as maximum clock frequency and IC area are reported.
Title: Design and VLSI implementation of new hardware architectures for image filtering
Published in: 2013 8th Iranian Conference on Machine Vision and Image Processing (MVIP)
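As a reference for what the hardware computes, here is a software model of the 3×3 median filter; in an ASIC the sort would be realized as a compare-exchange (sorting) network rather than a sequential sort.

```python
# Software model of the 3x3 median filter whose hardware architecture the
# paper optimizes. In hardware the sort becomes a compare-exchange network.

def median3x3(img):
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]          # border pixels left unchanged
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            window = [img[r + dr][c + dc]
                      for dr in (-1, 0, 1) for dc in (-1, 0, 1)]
            window.sort()
            out[r][c] = window[4]          # middle of the 9 values
    return out

noisy = [[10, 10, 10],
         [10, 255, 10],                    # impulse noise at the centre
         [10, 10, 10]]
print(median3x3(noisy)[1][1])              # -> 10
```

The median suppresses the impulse entirely, which is why median filtering is the standard pre-processing choice against salt-and-pepper noise.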
Pub Date: 2013-09-01 | DOI: 10.1109/IRANIANMVIP.2013.6779974
Ebrahim Emami, M. Fathy, Ehsan Kozegar
Tracking failure is an inevitable problem in any object tracking algorithm, so online evaluation of a tracker to detect and correct failures is an important task in any object tracking system. In this paper we propose an early failure detection procedure for the Continuously Adaptive Mean-Shift (CAMShift) tracking algorithm, together with an algorithm that modifies the tracker to correct the detected failures. CAMShift is a lightweight tracking algorithm originally developed from mean-shift to track the human face as a component of a perceptual user interface, but it easily fails when tracking targets in more complex situations such as surveillance applications. With the proposed failure detection and correction algorithm, CAMShift shows promising results on the test video sequences.
Title: Online failure detection and correction for CAMShift tracking algorithm
Published in: 2013 8th Iranian Conference on Machine Vision and Image Processing (MVIP)
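One plausible form of online failure detection for a histogram-based tracker such as CAMShift is a similarity test between the target model histogram and the histogram inside the current track window. The Bhattacharyya coefficient and the threshold below are assumptions for illustration, not necessarily the paper's test.

```python
# Illustrative failure test for a histogram-based tracker: flag a failure
# when the similarity between the target model histogram and the current
# window histogram drops below a threshold. Histograms and the threshold
# value are assumptions, not taken from the paper.

def bhattacharyya(p, q):
    """Similarity of two normalized histograms (1 = identical)."""
    return sum((pi * qi) ** 0.5 for pi, qi in zip(p, q))

def tracking_failed(model_hist, window_hist, threshold=0.7):
    return bhattacharyya(model_hist, window_hist) < threshold

model   = [0.5, 0.3, 0.2, 0.0]
good    = [0.45, 0.35, 0.2, 0.0]   # window still resembles the target
drifted = [0.0, 0.1, 0.2, 0.7]     # window has drifted off the target
print(tracking_failed(model, good), tracking_failed(model, drifted))
```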
Pub Date: 2013-09-01 | DOI: 10.1109/IRANIANMVIP.2013.6780014
Gelareh Meydanipour, K. Faez
Head pose estimation is an important preprocessing step in many computer vision and pattern recognition systems, such as face recognition. Compared to face detection and recognition, which are widely used in computer vision systems, head pose estimation has fewer proposed systems and generic solutions. In this paper we propose a novel approach for robust human head pose estimation using the ContourletSD transform. We first apply the ContourletSD transform to the images, then build a feature vector by computing a gray-level co-occurrence matrix (GLCM) from each contourlet sub-band. Linear discriminant analysis (LDA) is used to reduce the dimensionality of the feature vector. Finally, we classify the feature vectors using Support Vector Machine (SVM), K-Nearest Neighbor (KNN), and hierarchical decision tree (HDT) classifiers, separately. Experimental results on the FERET database demonstrate that the proposed method is more robust than previous methods for human head pose estimation.
Title: Robust head pose estimation using contourletSD transform and GLCM
Published in: 2013 8th Iranian Conference on Machine Vision and Image Processing (MVIP)
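The GLCM feature-extraction step can be sketched as follows for a single offset; the offset (0, 1) and the use of 4 gray levels are illustrative choices.

```python
import numpy as np

# Minimal gray-level co-occurrence matrix (GLCM) for one pixel offset.
# Offset (0, 1) and 4 gray levels are illustrative choices; in the paper a
# GLCM is computed per contourlet sub-band.

def glcm(img, levels=4, offset=(0, 1)):
    m = np.zeros((levels, levels))
    dr, dc = offset
    h, w = img.shape
    for r in range(h - dr):
        for c in range(w - dc):
            m[img[r, c], img[r + dr, c + dc]] += 1
    return m / m.sum()          # normalize to co-occurrence probabilities

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
P = glcm(img)
print(P)
```

Texture statistics such as contrast, energy, and homogeneity are then computed from P and concatenated across sub-bands to form the feature vector.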