The system proposed in this paper performs invisible watermarking using the Discrete Wavelet Transform (DWT). To achieve invisibility, alternate pixel values of the host video are replaced by pixel values of the watermark video/image. This type of watermarking provides a means of forensic analysis for combating media piracy. The video watermark is robust to geometric attacks such as rotation, cropping, contrast alteration, and time editing, without compromising the security of the watermark.
{"title":"Dual Watermarking in Video Using Discrete Wavelet Transform","authors":"S. Gandhe, Ujwala Potdar, K. Talele","doi":"10.1109/ICMV.2009.22","DOIUrl":"https://doi.org/10.1109/ICMV.2009.22","url":null,"abstract":"The proposed system in this paper gives the invisible watermarking which is performed by using Discrete Wavelet Transform. To get the invisible watermarking the alternate pixel value of the host video is replaced by the pixel value of watermark video/image. This type of watermarking provides a means of forensic analysis for combating media piracy. Video watermarking provides robustness to geometric attack such as rotation, cropping, contract altercation, time editing without compromising the security of the watermark.","PeriodicalId":315778,"journal":{"name":"2009 Second International Conference on Machine Vision","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123638475","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Although Gabor filtering has emerged as one of the leading techniques for texture classification, no unifying approach to its adoption has emerged yet. As with any filter bank, designing a Gabor filter bank amounts to selecting a proper set of values for the filter parameters. This paper aims to find a set of Gabor filter-bank parameters optimized for the performance of a texture classification system. A method is proposed to compute the Gabor filter parameters using a Genetic Algorithm (GA), with the parameters optimized for each group of textures. We tested the proposed method on several texture images from a standard database. The experimental results demonstrate the effectiveness of the proposed approach, with an overall success rate of about 97.5%.
{"title":"Gabor Filter Parameters Optimization for Texture Classification Based on Genetic Algorithm","authors":"Mehrnaz Afshang, M. Helfroush, Azardokht Zahernia","doi":"10.1109/ICMV.2009.50","DOIUrl":"https://doi.org/10.1109/ICMV.2009.50","url":null,"abstract":"Despite Gabor filtering has emerged as one of the leading techniques for texture classification, a unifying approach to its adoption has not emerged yet. As it is true for Gabor filter bank, the design of a filter bank consists of the selection of a proper set of values for the filter parameters. In this paper, it is intended to find a set of Gabor filter bank parameters optimized for the performance of texture classification system. The application method is suggested to compute Gabor filter parameters based on Genetic Algorithm (GA). The parameters are optimized according to each group of textures. We tested the proposed method with several texture images using a standard database. The experimental results demonstrate the effectiveness of proposed approach as the overall success is about 97.5%.","PeriodicalId":315778,"journal":{"name":"2009 Second International Conference on Machine Vision","volume":"09 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125768126","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The output-input feedback (OIF) Elman neural network is a dynamic feedback network. This paper proposes an improved model based on the OIF Elman neural network by introducing a direction-profit factor, and applies the model to forecasting a composite stock index. Comparisons are also made against prediction results from the standard OIF Elman neural network. Simulation results show that the proposed model is feasible and effective in the finance field: it not only improves forecasting precision markedly and converges quickly, but also provides a useful reference tool for investors seeking higher profits.
{"title":"An Improved OIF Elman Neural Network Model with Direction Profit Factor and Its Applications","authors":"Ming Li, Limin Wang, Yang Liu, Ying Liu, Qian Sun, Xuming Han","doi":"10.1109/ICMV.2009.39","DOIUrl":"https://doi.org/10.1109/ICMV.2009.39","url":null,"abstract":"Output-input feedback (OIF) Elman neural network is a dynamic feedback network. An improved model is proposed based on the OIF Elman neural network by introducing direction profit factor in this paper. Moreover, the proposed model is applied to forecast the composite index of stock. In addition, some comparisons are also made when the stock exchange is performed using prediction results from OIF Elman neural network. Simulation results show that the proposed model is feasible and effective in the finance field. It shows that the proposed model can not only improve the forecasting precision evidently and possess the characteristic of quick convergence but also provide a good reference tool for investors to obtain more profits.","PeriodicalId":315778,"journal":{"name":"2009 Second International Conference on Machine Vision","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127176071","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cloud computing is a catchphrase tossed around a lot these days to describe the direction in which information infrastructure seems to be moving. The concept is that immense computing resources and data will reside somewhere out there in an anonymous place (rather than on one's own computer), and we will connect to them and use them as needed. This paper presents basic issues regarding data usage and processing in cloud computing and their limitations, and attempts to propose appropriate solutions for these underlying issues.
{"title":"Data Processing Issues in Cloud Computing","authors":"A. Khalid, H. Mujtaba","doi":"10.1109/ICMV.2009.31","DOIUrl":"https://doi.org/10.1109/ICMV.2009.31","url":null,"abstract":"Cloud computing is a catchphrase that is flipped around a lot these days to describe the direction in which information road and rail network seems to be stirring. The concept, is that immense computing data will reside someplace out there in the anonymous place (in spite of the computer space) and we'll bond to them and utilize them as needed. This research paper presents basic issues regarding data usage and processing in cloud computing and their limitations. An attempt to propose appropriate solutions for these underlying issues has also been made.","PeriodicalId":315778,"journal":{"name":"2009 Second International Conference on Machine Vision","volume":"133 8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131298327","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An approach to 3D face recognition using a facial Surface Classification Image and PCA is presented. In preprocessing, the scattered 3D points of a facial surface are normalized by a surface-fitting algorithm using multilevel B-spline approximation. A partial-ICP method then adjusts the 3D face model into a frontal pose for better recognition performance. Using the normalized facial depth image obtained from these two steps, the Gaussian and mean curvatures are computed at each point, the surface types are classified, and the classification result marks the different kinds of area on the facial depth image with 8 gray levels. The resulting gray image, named the Surface Classification Image (SCI), represents the 3D features of the face; it is input to PCA to obtain SCI eigenfaces for recognition. In experiments on the ZJU-3DFED 3D facial database of Zhejiang University, we obtained a rank-1 identification score of 94.5%, outperforming PCA applied directly to the face depth image (instead of the SCI) by 16.5%.
{"title":"3D Face Recognition by Surface Classification Image and PCA","authors":"Lei Yunqi, Dongjie Chen, Meiling Yuan, Qingmin Li, Zhenxiang Shi","doi":"10.1109/ICMV.2009.61","DOIUrl":"https://doi.org/10.1109/ICMV.2009.61","url":null,"abstract":"An approach of 3D face recognition by using of facial surface classification image and PCA is presented. In the step of pre-processing, the scattered 3D points of a facial surface are normalized by surface fitting algorithm using multilevel B-splines approximation. Then, partial-ICP method is utilized to adjust 3D face model to be in the right front pose for a better recognition performance. By using the normalized facial depth image been acquired through the two previous steps, and by calculating the Gaussian and mean curvatures at each point, the surface types are classified and the classification result is used to mark different kinds of area on the facial depth image by 8 gray-levels. This achieved gray image is named as Surface Classification Image (SCI) and the SCI now represents the 3D features of the face and then it is input to the process of PCA to obtain the SCI eigenfaces to recognize the face. In the experiments conducted on 3D Facial database ZJU-3DFED of Zhejiang University, we obtained the rank-1 identification score of 94.5%, which outperformed the result of using PCA method directly on the face depth image (instead of SCI) by 16.5%.","PeriodicalId":315778,"journal":{"name":"2009 Second International Conference on Machine Vision","volume":"73 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125524035","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automatic analysis of facial expressions has become a popular research area because of its many applications in computer vision. This paper presents a hybrid method based on Gabor filters, Kernel Principal Component Analysis (KPCA), and a Support Vector Machine (SVM) for classifying facial expressions into six basic emotions. First, a Gabor filter bank is applied to the input images. Then the KPCA feature-reduction technique is applied to the filter outputs. Finally, an SVM performs the classification. The proposed method is tested on the Cohn-Kanade facial expression dataset, and its results are compared with those of a combined Principal Component Analysis (PCA) and SVM classifier. Experimental results show the effectiveness of the proposed method: an average recognition rate of 89.9% is achieved, higher than the 87.3% obtained with the common combined PCA and SVM method.
{"title":"A Combined KPCA and SVM Method for Basic Emotional Expressions Recognition","authors":"S. Fazli, R. Afrouzian, Hadi Seyedarabi","doi":"10.1109/ICMV.2009.67","DOIUrl":"https://doi.org/10.1109/ICMV.2009.67","url":null,"abstract":"Automatic analysis of facial expression has become a popular research area because of it’s many applications in the field of computer vision. This paper presents a hybrid method based on Gabor filter, Kernel Principle Component Analysis (KPCA) and Support Vector Machine (SVM) for classification of facial expressions into six basic emotions. At first, Gabor filter bank is applied on input images. Then, the feature reduction technique of KPCA is performed on the outputs of the filter. Finally, SVM is used for classification. The proposed method is tested on the Cohen-Kanade’s facial expression images dataset. The results of the proposed method are compared to the ones of the combined Principle Component Analysis (PCA) and SVM classifier. Experimental results show the effectiveness of the proposed method. The average recognition rate of 89.9% is achieved in this work which is higher than 87.3% resulted from a common combined PCA and SVM method.","PeriodicalId":315778,"journal":{"name":"2009 Second International Conference on Machine Vision","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125525259","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Many factors affect a warship's Life Cycle Cost (LCC); each factor's importance differs, and the factors are mutually correlated. To establish a precise LCC model, Principal Components Regression (PCR) and Partial Least Squares Regression (PLSR) have been proposed to reduce the correlation between the factors that affect LCC modeling. However, the principal components selected from the independent variables often do not strongly explain the dependent variable. Therefore, an improved PCR with Rough Set is proposed to overcome the correlation between the variables; it selects the important parameters and discards the unimportant ones in the LCC model. The modeling process and the regression model are described. Compared with PCR and PLSR, the improved PCR with Rough Set achieves much higher precision.
{"title":"Improved Principal Components Regression with Rough Set and its Application in the Modeling of Warship LCC","authors":"Xiao-Hai Zhang, Jia-shan Jin, Jun-bao Geng","doi":"10.1109/ICMV.2009.25","DOIUrl":"https://doi.org/10.1109/ICMV.2009.25","url":null,"abstract":"There are many factors affect the warship Life Cycle Cost (LCC), the importance of every factor is different, and the relationships between factors are correlated. In order to establish the precise LCC model, the Principal Components Regression (PCR) and Partial Least Squares Regression (PLSR) are proposed to reduce the correlativity between factors which affect the modeling of LCC. However, the components often don’t strongly explain the dependent variables when filtering principal components in the independent variables. Therefore, the improved PCR with Rough Set is proposed to overcome the correlativity between the variables, which could choose the important parameters and reduce the unimportant parameters in the modeling of LCC. The modeling of the process and the regression model are described in the content. Compared with the method of PCR and PLSR, the precision of the improved PCR with Rough Set is much higher.","PeriodicalId":315778,"journal":{"name":"2009 Second International Conference on Machine Vision","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122034106","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, a fingerprint verification method is presented that improves matching accuracy by overcoming the shortcomings of previous methods caused by missing minutiae, non-linear distortion, and rotation variations. It reduces multi-spectral noise by enhancing the fingerprint image, accurately and reliably determines a reference point, and then extracts a 129 × 129 block centered on that reference point. From four directional co-occurrence matrices computed over this block, four statistical descriptors are derived. Experimental results show that the proposed method is more accurate than other methods: the average false acceptance rate (FAR) is 0.62%, the average false rejection rate (FRR) is 0.08%, and the equal error rate (EER) is 0.35%.
{"title":"Fingerprint Verification Using the Texture of Fingerprint Image","authors":"M. Khalil, Dzulkifli Muhammad, Q. Al-Nuzaili","doi":"10.1109/ICMV.2009.18","DOIUrl":"https://doi.org/10.1109/ICMV.2009.18","url":null,"abstract":"In this paper, a fingerprint verification method is presented that improves matching accuracy by overcoming the shortcomings of previous methods due to missing some minutiae, non-linear distortions, and rotation and distortion variations. It reduces multi-spectral noise by enhancing a fingerprint image to accurately and reliably determine a reference point and then extract a 129 X 129 block, making the reference point its center. From the 4 co-occurrence matrices four statistical descriptors are computed. Experimental results show that the proposed method is more accurate than other methods the average false acceptance rate (FAR) is 0.62%, the average false rejection rate (FRR) is 0.08%, and the equal error rate (EER) is 0.35%.","PeriodicalId":315778,"journal":{"name":"2009 Second International Conference on Machine Vision","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132143821","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents a new coded-excitation/pulse-compression scheme that efficiently increases the signal-to-noise ratio, spatial resolution, and image contrast using a modified matched filter. The proposed method implements a simple form of spatial filtering using a bank of spatially matched filters, each designed to reconstruct the image along one A-line. The method is evaluated through simulations with the Field II program using a linear array. The receiving array employs a conventional delay-and-sum beamformer followed by a bank of compression filters matched to the echo signal sample (ESS), each filter associated with echoes from a specific direction. Simulation results show that, compared with other compression schemes, this approach yields higher lateral resolution and relatively lower range-sidelobe amplitudes, acceptable for many industrial and medical imaging applications without time weighting. Applying a Taylor weighting function reduced the sidelobes further without considerably sacrificing axial resolution, with the highest sidelobe below -60 dB. Additionally, an eSNR improvement of about 20 dB can be expected compared with the conventional pulsing technique.
{"title":"Performance Enhancement of Coded Excitation in Ultrasonic B-mode Images","authors":"Elham Behradfar, A. Mahloojifar, Amir E. Behradfar","doi":"10.1109/ICMV.2009.62","DOIUrl":"https://doi.org/10.1109/ICMV.2009.62","url":null,"abstract":"This paper presents a new coded excitation/pulse compression scheme that efficiently increase the signal to noise ratio, spatial resolution and image contrast using a modified matched filter. The proposed method implements a simple form of spatial filtering and uses a filter bank of spatial match filters, with each filter designed to reconstruct the image along one A-line. The method is evaluated through simulations with the Field II program using a linear array. The receiving array employs a conventional delay and sum beamformer followed by a bank of compression filters matched to the echo signal sample (ESS), each filter associated with echoes from a specific direction. Simulation results revealed that this approach generates higher lateral resolution and relatively lower range sidelobe amplitudes, as compared with other compression schemes, acceptable for many industrial and medical imaging applications without time weighting. Further sidelobe reduction was achieved through applying Taylor weighting function without considerable sacrificing axial resolution whereas the highest sidelobe was lower than 60 dB. Additionally an eSNR improvement about 20 dB can be expected in comparison with conventional pulsing technique.","PeriodicalId":315778,"journal":{"name":"2009 Second International Conference on Machine Vision","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131341081","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The contourlet transform is a new multiscale and multidirectional image representation that effectively captures the edges and contours of images. The Hidden Markov Tree (HMT) model can capture all inter-scale, inter-direction, and inter-location dependencies, as well as the statistical properties, of the contourlet coefficients; it is therefore used to detect image singularities (edges and ridges). In this paper, we propose three methods for texture segmentation based on the HMT contourlet model. First, the contourlet coefficients are computed; then an HMT contourlet model is trained for each texture. In the test phase, a decision is made for each block of the input image based on the maximum-likelihood probability, and the final decision is based on the majority-vote criterion. The proposed method has been examined on test images, and promising results in terms of low segmentation errors have been obtained.
{"title":"Hmt-Contourlet Image Segmentation Based on Majority Vote","authors":"M. Helfroush, Narges Taghdir","doi":"10.1109/ICMV.2009.60","DOIUrl":"https://doi.org/10.1109/ICMV.2009.60","url":null,"abstract":"Contourlet transform is a new multiscale and multidirectional image representation which effectively captures the edges and contours of images. Hidden Markov Tree model (HMT) can capture all inter-scale, interdirection and inter-location dependencies. Also, HMT can capture the statistical properties of the contourlet coefficients. Therefore, it is used to detect the image singularities (edges and ridges). In this paper, we have proposed three methods for texture segmentation, based on the HMT contourlet model. At first contourlet coefficient is computed and then, for each texture an HMT Contourlet model is trained for test phase, a set of decisions are made for each block of input image based on the maximum likelihood probability. Final decision will be based on the majority vote criterion. The proposed method has been examined on test images and promising results in terms of low segmentation errors has been obtained.","PeriodicalId":315778,"journal":{"name":"2009 Second International Conference on Machine Vision","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126573675","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}