Pub Date: 2012-03-21, DOI: 10.1109/ICPRIME.2012.6208283
S. Jayanthi, S. Prema
Web services are a standardized way of integrating Web-based applications using the XML (Extensible Markup Language), SOAP (Simple Object Access Protocol), WSDL, and UDDI (Universal Description, Discovery and Integration) open standards over an Internet protocol backbone. WSDL (Web Service Definition Language) is used to describe the available services. The dynamic approach starts by crawling the Web for Web services, gathering the WSDL service descriptions and related documents as it goes. Web APIs provide the methodology for building unique service objects from multiple web resources. In this semantic search engine, if a web user is satisfied with a description they can follow it into the web page; otherwise they can move on to another link. This query enhancement process is exploited to learn useful information that helps generate related queries. In this research work the add-on is generated automatically, unlike in the existing system; an add-on is a program integrated into the browser application, usually providing additional functionality. Finally, this work gives an overview of how to segregate the unique service object (USO) from web resources using a Bookshelf Data Structure and use it to semantically annotate the resulting services in visual mode.
{"title":"Segregating unique service object from multi-web sources for effective visualization","authors":"S. Jayanthi, S. Prema","doi":"10.1109/ICPRIME.2012.6208283","DOIUrl":"https://doi.org/10.1109/ICPRIME.2012.6208283","url":null,"abstract":"Web services describe a standardized way of integrating Web-based applications using the XML (Extensible Markup Language), SOAP (Simple Object Access Protocol), WSDL and UDDI (Universal Description Discovery and Integration) open standards over an Internet protocol backbone. WSDL (Web Service Definition Language) is used for describing the available services. The dynamic approach starts with crawling on the Web for Web Services, simultaneously gathering the WSDL service descriptions and related documents. The Web APIs provide the methodology for building unique service objects from multiple web resources. In this semantic search engine, if the web user gets satisfied with the description they can crawl into the webpage, otherwise they can shift to another link. This query enhancement process is exploited to learn useful information that helps to generate related queries. In this research work the add-on is automatically generated when compared with the existing system. Add-on is programs that are integrated into the browser application, usually providing additional functionality. Finally this work gives an overview of how to segregate the unique service object (USO) using Bookshelf Data Structure from web resources and use it to semantically annotate the resulting services in visual mode.","PeriodicalId":148511,"journal":{"name":"International Conference on Pattern Recognition, Informatics and Medical Engineering (PRIME-2012)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131119007","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2012-03-21, DOI: 10.1109/ICPRIME.2012.6208360
R. Boss, K. Thangavel, D. Daniel
This paper proposes mammogram image segmentation using the Fuzzy C-Means (FCM) clustering algorithm. A median filter is used to pre-process the image, as it is commonly used to reduce noise. Fourteen Haralick features are extracted from the mammogram image using the Gray Level Co-occurrence Matrix (GLCM) at different angles. The features are clustered by the K-Means and FCM algorithms in order to segment the regions of interest for further classification. The segmentation performance of the proposed algorithm is measured using error values such as the Mean Square Error (MSE) and Root Mean Square Error (RMSE). The mammogram images used in our experiments are obtained from the MIAS database.
{"title":"Mammogram image segmentation using fuzzy clustering","authors":"R. Boss, K. Thangavel, D. Daniel","doi":"10.1109/ICPRIME.2012.6208360","DOIUrl":"https://doi.org/10.1109/ICPRIME.2012.6208360","url":null,"abstract":"This paper proposes mammogram image segmentation using Fuzzy C-Means (FCM) clustering algorithm. The median filter is used for pre-processing of image. It is normally used to reduce noise in an image. The 14 Haralick features are extracted from mammogram image using Gray Level Co-occurrence Matrix (GLCM) for different angles. The features are clustered by K-Means and FCM algorithms inorder to segment the region of interests for further classification. The performance of segmentation result of the proposed algorithm is measured according to the error values such as Mean Square Error (MSE) and Root Means Square Error (RMSE). The Mammogram images used in our experiment are obtained from MIAS database.","PeriodicalId":148511,"journal":{"name":"International Conference on Pattern Recognition, Informatics and Medical Engineering (PRIME-2012)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131240500","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2012-03-21, DOI: 10.1109/ICPRIME.2012.6208342
C. L. Chowdhary, P. Mouli
In the present scenario, it is challenging to access widely distributed and voluminous data from many network systems through a single network system. Problems include monitoring remote devices and controlling their operations. A reliable, secure, and platform-free remote controller with monitoring capability can overcome such problems. In this paper, a new design for a network-based remote controlling and monitoring system is proposed that is platform-free and more secure than other existing systems. The basic concept is to use the network as the basis for real-time remote monitoring and control of processing equipment.
{"title":"Design and implementation of secure, platform-free, and network-based remote controlling and monitoring system","authors":"C. L. Chowdhary, P. Mouli","doi":"10.1109/ICPRIME.2012.6208342","DOIUrl":"https://doi.org/10.1109/ICPRIME.2012.6208342","url":null,"abstract":"In present scenario, it is challenging to access widely distributed and huge data from many network systems to a single network system. There are several problems like, monitoring of remote devices and controlling of its operations. A reliable, secure and platform-free remote controller, with ability of monitoring, can overcome such problems. In this paper, a new design of network-based remote controlling and monitoring system is proposed which is platform-free and more secure in comparison with other existing systems. The basic concept is to use the network base for the purpose of real-time remote monitoring and controlling of processing equipment.","PeriodicalId":148511,"journal":{"name":"International Conference on Pattern Recognition, Informatics and Medical Engineering (PRIME-2012)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121140440","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2012-03-21, DOI: 10.1109/ICPRIME.2012.6208351
M. Sundaresan, E. Devika
A compound image is a combination of text, graphics, and pictures. Compression is the process of reducing the amount of data required to represent information; it also reduces the time required to send the data over the Internet or load Web pages. Compound image compression is done on the basis of lossy and lossless compression. Lossy compression is a data encoding method that compresses data by discarding (losing) some of the data in the image, whereas lossless compression compresses the image without any loss of data. In this paper different techniques are used for compressing compound images, and their performance is compared.
{"title":"Image compression using H.264 and deflate algorithm","authors":"M. Sundaresan, E. Devika","doi":"10.1109/ICPRIME.2012.6208351","DOIUrl":"https://doi.org/10.1109/ICPRIME.2012.6208351","url":null,"abstract":"Compound image is combination of text, graphics and pictures. Compression is the process of reducing the amount of data required to represent information. It also reduces the time required for the data to be sent over the Internet or Web pages. Compound image compression is done on the basis of lossy and lossless compression. Lossy compression is a data encoding method that compresses data by discarding (losing) some data in the image. Lossless compression is used to compress the image without any loss of data in the image. Image compression is done using lossy compression and lossless compression. In this paper different techniques are used for compressing compound images. The performance of these techniques has been compared.","PeriodicalId":148511,"journal":{"name":"International Conference on Pattern Recognition, Informatics and Medical Engineering (PRIME-2012)","volume":"91 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127132572","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2012-03-21, DOI: 10.1109/ICPRIME.2012.6208381
S. Pramanik, D. Bhattacharjee
This paper presents a novel face-sketch recognition approach based on facial feature extraction. Because photos and sketches belong to two different modalities and are therefore difficult to match directly, we concentrate on a set of geometric face features such as the eyes, nose, eyebrows, and lips, and their length-to-width ratios. In this system, the facial features/components are first extracted from the training images; then ratios of length, width, area, etc. are calculated and stored as feature vectors for the individual images. The mean feature vector is then computed and subtracted from each feature vector to centre the feature vectors. In the next phase, the feature vector for the incoming probe face-sketch is computed in the same fashion. A K-NN classifier is used to recognize the probe face-sketch. It is experimentally verified that the proposed method is robust for faces in a frontal pose, with normal lighting, neutral expression, and no occlusions. The experiment has been conducted with 80 male and female face images from different face databases. The approach has useful applications in both law enforcement and digital entertainment.
{"title":"Geometric feature based face-sketch recognition","authors":"S. Pramanik, D. Bhattacharjee","doi":"10.1109/ICPRIME.2012.6208381","DOIUrl":"https://doi.org/10.1109/ICPRIME.2012.6208381","url":null,"abstract":"This paper presents a novel facial sketch image or face-sketch recognition approach based on facial feature extraction. To recognize a face-sketch, we have concentrated on a set of geometric face features like eyes, nose, eyebrows, lips, etc and their length and width ratio because it is difficult to match photos and sketches because they belong to two different modalities. In this system, first the facial features/components from training images are extracted, then ratios of length, width, and area etc. are calculated and those are stored as feature vectors for individual images. After that the mean feature vectors are computed and subtracted from each feature vector for centering of the feature vectors. In the next phase, feature vector for the incoming probe face-sketch is also computed in similar fashion. Here, K-NN classifier is used to recognize probe face-sketch. It is experimentally verified that the proposed method is robust against faces are in a frontal pose, with normal lighting and neutral expression and have no occlusions. The experiment has been conducted with 80 male and female face images from different face databases. It has useful applications for both law enforcement and digital entertainment.","PeriodicalId":148511,"journal":{"name":"International Conference on Pattern Recognition, Informatics and Medical Engineering (PRIME-2012)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117029467","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2012-03-21, DOI: 10.1109/ICPRIME.2012.6208366
M. Joshi, A. Bhale
The importance of mammograms in early breast cancer detection is an accepted fact. Mammograms (either analog x-ray films or digital softcopies) are processed computationally to extract significant information. Several computational techniques and algorithms process mammograms to highlight and reveal otherwise unseen features. Thus mammographic images are computationally unfolded to obtain appropriate information that can be used for further analysis. Computational analysis of mammograms is an essential tool used by numerous specialists for various purposes. In this paper we review such research work reported in the literature in recent years. Our focus is in particular on computational preprocessing of mammograms. Preprocessing involves enhancement of mammographic images as well as extraction of relevant features from them. We group the various image enhancement research approaches systematically and also categorize research techniques based on the types of features that are extracted and used to obtain the intended results. Although mammograms are used mostly for breast cancer detection, the research is not confined to this aspect alone; several other areas that deal with mammograms, including image compression and Content-Based Image Retrieval (CBIR), are also explored by researchers. The variety in these research applications is also discussed and presented in this paper.
{"title":"Computational unfoldment of mammograms","authors":"M. Joshi, A. Bhale","doi":"10.1109/ICPRIME.2012.6208366","DOIUrl":"https://doi.org/10.1109/ICPRIME.2012.6208366","url":null,"abstract":"The importance of mammograms in early breast cancer detection is an accepted fact. Mammograms (either an analog x-ray film or a digital softcopy) are computationally empowered to extract significant information. Several computational techniques/algorithms process mammograms to highlight and reveal otherwise unseen features. Thus mammographic images are computationally unfolded to obtain appropriate information that can be used for further analysis. Computational analysis of mammograms is an essential tool, which is used by numerous specialists for various purposes. In this paper we review such research work reported in the literature in recent years. Our focus is in particular on computational preprocessing of mammograms. Preprocessing involves enhancement of mammographic images as well as extraction of relevant features from images. We grouped various image enhancement research approaches systematically. We also categorized various research techniques based on the types of features that are extracted and used to obtain intended results. Although mammograms are used mostly for breast cancer detection, the research is not confined to this aspect only. Several other areas that deal with mammograms are also explored by researchers including image compression, Content based Image Retrieval (CBIR) etc. Variety in these research applications is also discussed and presented in this paper.","PeriodicalId":148511,"journal":{"name":"International Conference on Pattern Recognition, Informatics and Medical Engineering (PRIME-2012)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117221623","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2012-03-21, DOI: 10.1109/ICPRIME.2012.6208344
P. Prasenna, A. V. T. RaghavRamana, R. Krishnakumar, A. Devanbu
Conventional network security relies on mathematical algorithms and low-level countermeasures for intrusion detection, although most of these approaches are theoretically challenging to implement. A variety of algorithms have therefore been applied to this problem. Instead of generating a large number of rules, evolutionary optimization techniques such as Genetic Network Programming (GNP), which is based on a directed graph, can be used. This paper focuses on the security issues involved in deploying a data mining-based IDS in a real-time environment. We generalize the GNP problem to association rule mining and propose a fuzzy weighted association rule mining framework with GNP that is suitable for both continuous and discrete attributes. Our proposal follows an Apriori-based fuzzy WAR with GNP and avoids pre- and post-processing, eliminating extra steps during rule generation. The method is sufficient to evaluate both misuse and anomaly detection. Experiments on the KDD99Cup and DARPA98 data show a high detection rate and accuracy compared with other conventional methods.
{"title":"Network programming and mining classifier for intrusion detection using probability classification","authors":"P. Prasenna, A. V. T. RaghavRamana, R. Krishnakumar, A. Devanbu","doi":"10.1109/ICPRIME.2012.6208344","DOIUrl":"https://doi.org/10.1109/ICPRIME.2012.6208344","url":null,"abstract":"In conventional network security simply relies on mathematical algorithms and low counter measures to taken to prevent intrusion detection system, although most of this approaches in terms of theoretically challenged to implement. Therefore, a variety of algorithms have been committed to this challenge. Instead of generating large number of rules the evolution optimization techniques like Genetic Network Programming (GNP) can be used. The GNP is based on directed graph, In this paper the security issues related to deploy a data mining-based IDS in a real time environment is focused upon. We generalize the problem of GNP with association rule mining and propose a fuzzy weighted association rule mining with GNP framework suitable for both continuous and discrete attributes. Our proposal follows an Apriori algorithm based fuzzy WAR and GNP and avoids pre and post processing thus eliminating the extra steps during rules generation. This method can sufficient to evaluate misuse and anomaly detection. Experiments on KDD99Cup and DARPA98 data show the high detection rate and accuracy compared with other conventional method.","PeriodicalId":148511,"journal":{"name":"International Conference on Pattern Recognition, Informatics and Medical Engineering (PRIME-2012)","volume":"148 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122624894","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2012-03-21, DOI: 10.1109/ICPRIME.2012.6208390
M. M. Ali, L. Rajamani
Deceptive phishing is a major problem in Instant Messengers, where much sensitive and personal information is disclosed through socially engineered text messages; a solution to that problem has already been proposed [2]. However, detection of phishing carried out through voice chat in Instant Messengers has not yet been addressed, which is the motivation for this work. A solution to this privacy problem in Instant Messengers (IM) is proposed using the Association Rule Mining (ARM) technique, a data mining approach, integrated with a speech recognition system. Words are recognized from speech with the help of FFT spectrum analysis and LPC coefficient methodologies. Online criminals nowadays use voice chat along with text messages, together or separately, in IMs to extract personal information, which threatens and hinders privacy. To focus on privacy preservation, we developed and experimented with an Anti-Phishing Detection (APD) system in IMs that detects deceptive phishing in text and audio collaboratively.
{"title":"Deceptive phishing detection system: From audio and text messages in Instant Messengers using Data Mining approach","authors":"M. M. Ali, L. Rajamani","doi":"10.1109/ICPRIME.2012.6208390","DOIUrl":"https://doi.org/10.1109/ICPRIME.2012.6208390","url":null,"abstract":"Deceptive Phishing is the major problem in Instant Messengers, much of sensitive and personal information, disclosed through socio-engineered text messages for which solution is proposed[2] but, detection of phishing through voice chatting technique in Instant Messengers is not yet done which is the motivating factor to carry out the work and solution to address this problem of privacy in Instant Messengers (IM) is proposed using Association Rule Mining (ARM) technique a Data Mining approach integrated with Speech Recognition system. Words are recognized from speech with the help of FFT spectrum analysis and LPC coefficients methodologies. Online criminal's now-a-days adapted voice chatting technique along with text messages collaboratively or either of them in IM's and wraps out personal information leads to threat and hindrance for privacy. In order to focus on privacy preserving we developed and experimented Anti Phishing Detection system (APD) in IM's to detect deceptive phishing for text and audio collaboratively.","PeriodicalId":148511,"journal":{"name":"International Conference on Pattern Recognition, Informatics and Medical Engineering (PRIME-2012)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127870962","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2012-03-21, DOI: 10.1109/ICPRIME.2012.6208391
S. C. Sajjan, C. Vijaya
This study proposes limited-vocabulary isolated word recognition using Linear Predictive Coding (LPC) and Mel Frequency Cepstral Coefficients (MFCC) for feature extraction and Dynamic Time Warping (DTW) and discrete Hidden Markov Models (HMM) for recognition, and compares them. Feature extraction is carried out over speech frames of 300 samples with a 100-sample overlap, at an 8 kHz sampling rate of the input speech. MFCC analysis provides a better recognition rate than LPC because it operates on a logarithmic scale that resembles the human auditory system, whereas LPC has uniform resolution over the frequency plane. Feature extraction is followed by pattern recognition. Since voice signals tend to have different temporal rates, DTW is one method that provides non-linear alignment between two voice signals. Another method, the HMM, which models the words statistically, is also presented. Experimentally it is observed that recognition accuracy is better for the HMM than for DTW. The database used is the TI-46 isolated word corpus (zero to nine) from the Linguistic Data Consortium.
{"title":"Comparison of DTW and HMM for isolated word recognition","authors":"S. C. Sajjan, C. Vijaya","doi":"10.1109/ICPRIME.2012.6208391","DOIUrl":"https://doi.org/10.1109/ICPRIME.2012.6208391","url":null,"abstract":"This study proposes limited vocabulary isolated word recognition using Linear Predictive Coding(LPC) and Mel Frequency Cepstral Coefficients(MFCC) for feature extraction, Dynamic Time Warping(DTW) and discrete Hidden Markov Model (HMM) for recognition and their comparisons. Feature extraction is carried over the speech frame of 300 samples with 100 samples overlap at 8 KHz sampling rate of the input speech. MFCC analysis provides better recognition rate than LPC as it operates on a logarithmic scale which resembles human auditory system whereas LPC has uniform resolution over the frequency plane. This is followed by pattern recognition. Since the voice signal tends to have different temporal rate, DTW is one of the methods that provide non-linear alignment between two voice signals. Another method called HMM that statistically models the words is also presented. Experimentally it is observed that recognition accuracy is better for HMM compared with DTW. The database used is TI-46 isolated word corpus zero-nine from Linguist Data Consortium.","PeriodicalId":148511,"journal":{"name":"International Conference on Pattern Recognition, Informatics and Medical Engineering (PRIME-2012)","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128671487","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2012-03-21, DOI: 10.1109/ICPRIME.2012.6208353
J. Veerappan, G. Pitchammal
The main aim of this work is to provide a watermarking algorithm for grayscale and color images that withstands attacks such as rotation, scaling, and translation. In existing watermarking algorithms, the exploited robust features are more or less tied to pixel position, so they cannot be very robust against such attacks. To solve this problem, this work focuses on statistical parameters rather than pixel position for watermarking. Two statistical features, the histogram shape and the mean of the Gaussian-filtered low-frequency component of the image, are used to make the watermarking algorithm robust to attacks, and an interpolation technique is used to increase the number of bits that can be embedded.
{"title":"Interpolation based image watermarking resisting to geometrical attacks","authors":"J. Veerappan, G. Pitchammal","doi":"10.1109/ICPRIME.2012.6208353","DOIUrl":"https://doi.org/10.1109/ICPRIME.2012.6208353","url":null,"abstract":"The main theme of this application is to provide an algorithm for grayscale and color image watermark to manage the attacks such as rotation, scaling and translation. In the existing watermarking algorithms, those exploited robust features are more or less related to the pixel position, so they cannot be more robust against the attacks. In order to solve this problem this application focus on certain parameters rather than the pixel position for watermarking. Two statistical features such as the histogram shape and the mean of Gaussian filtered low-frequency component of images are taken for this proposed application to make the watermarking algorithm robust to attacks and also interpolation technique is used to increase the number of bites to be needed.","PeriodicalId":148511,"journal":{"name":"International Conference on Pattern Recognition, Informatics and Medical Engineering (PRIME-2012)","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121909923","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}