Segregating unique service object from multi-web sources for effective visualization
Pub Date: 2012-03-21 | DOI: 10.1109/ICPRIME.2012.6208283
S. Jayanthi, S. Prema
Web services provide a standardized way of integrating Web-based applications using the XML (Extensible Markup Language), SOAP (Simple Object Access Protocol), WSDL, and UDDI (Universal Description, Discovery and Integration) open standards over an Internet protocol backbone. WSDL (Web Services Description Language) is used to describe the available services. The dynamic approach starts by crawling the Web for Web services, simultaneously gathering the WSDL service descriptions and related documents. Web APIs provide the methodology for building unique service objects from multiple web resources. In this semantic search engine, if the user is satisfied with a service description they can follow the link into the webpage; otherwise they can move on to another link. This query-enhancement process is exploited to learn useful information that helps generate related queries. Unlike the existing system, in this work the browser add-on is generated automatically; an add-on is a program integrated into the browser application, usually providing additional functionality. Finally, this work gives an overview of how to segregate the unique service object (USO) from web resources using the Bookshelf Data Structure and use it to semantically annotate the resulting services in visual mode.
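As a rough illustration of the WSDL-gathering step only (the Bookshelf Data Structure and the add-on generation are not reproduced, and the URL below is a placeholder), a minimal Python sketch could fetch one WSDL document and list the services and operations it declares:

```python
# Minimal sketch: fetch a WSDL document and list its declared services and operations.
# The URL is a placeholder, not an endpoint from the paper.
import urllib.request
import xml.etree.ElementTree as ET

WSDL_NS = "http://schemas.xmlsoap.org/wsdl/"

def describe_wsdl(url):
    """Download a WSDL file and return its service names and operation names."""
    with urllib.request.urlopen(url) as resp:
        tree = ET.fromstring(resp.read())
    # service elements name the offered services; operation elements appear in
    # both portType and binding sections, so duplicates are collapsed with a set
    services = [svc.get("name") for svc in tree.iter(f"{{{WSDL_NS}}}service")]
    operations = sorted({op.get("name") for op in tree.iter(f"{{{WSDL_NS}}}operation")})
    return services, operations

if __name__ == "__main__":
    # Hypothetical endpoint used only to illustrate the call pattern.
    print(describe_wsdl("http://example.com/service?wsdl"))
```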
{"title":"Segregating unique service object from multi-web sources for effective visualization","authors":"S. Jayanthi, S. Prema","doi":"10.1109/ICPRIME.2012.6208283","DOIUrl":"https://doi.org/10.1109/ICPRIME.2012.6208283","url":null,"abstract":"Web services describe a standardized way of integrating Web-based applications using the XML (Extensible Markup Language), SOAP (Simple Object Access Protocol), WSDL and UDDI (Universal Description Discovery and Integration) open standards over an Internet protocol backbone. WSDL (Web Service Definition Language) is used for describing the available services. The dynamic approach starts with crawling on the Web for Web Services, simultaneously gathering the WSDL service descriptions and related documents. The Web APIs provide the methodology for building unique service objects from multiple web resources. In this semantic search engine, if the web user gets satisfied with the description they can crawl into the webpage, otherwise they can shift to another link. This query enhancement process is exploited to learn useful information that helps to generate related queries. In this research work the add-on is automatically generated when compared with the existing system. Add-on is programs that are integrated into the browser application, usually providing additional functionality. Finally this work gives an overview of how to segregate the unique service object (USO) using Bookshelf Data Structure from web resources and use it to semantically annotate the resulting services in visual mode.","PeriodicalId":148511,"journal":{"name":"International Conference on Pattern Recognition, Informatics and Medical Engineering (PRIME-2012)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131119007","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mammogram image segmentation using fuzzy clustering
Pub Date: 2012-03-21 | DOI: 10.1109/ICPRIME.2012.6208360
R. Boss, K. Thangavel, D. Daniel
This paper proposes mammogram image segmentation using the Fuzzy C-Means (FCM) clustering algorithm. A median filter, commonly used to reduce noise in an image, is applied for pre-processing. The 14 Haralick features are extracted from each mammogram using the Gray Level Co-occurrence Matrix (GLCM) computed at different angles. The features are clustered by the K-Means and FCM algorithms in order to segment the regions of interest for further classification. The segmentation performance of the proposed algorithm is measured using error values such as the Mean Square Error (MSE) and Root Mean Square Error (RMSE). The mammogram images used in our experiments are obtained from the MIAS database.
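A minimal sketch of this pipeline is given below, assuming scikit-image and SciPy are available: median-filter denoising, a small subset of GLCM texture properties (the full set of 14 Haralick features would need additional code or a library such as mahotas), and a plain-NumPy Fuzzy C-Means. The synthetic image and 32x32 patch grid are placeholders, not MIAS data.

```python
# Illustrative sketch, not the paper's code: median-filter pre-processing,
# a few GLCM texture features, and a small NumPy Fuzzy C-Means.
import numpy as np
from scipy.ndimage import median_filter
from skimage.feature import graycomatrix, graycoprops   # 'greycomatrix' in older scikit-image

def glcm_features(patch, angles=(0, np.pi/4, np.pi/2, 3*np.pi/4)):
    """Contrast/homogeneity/energy/correlation of one uint8 patch, per angle."""
    glcm = graycomatrix(patch, distances=[1], angles=list(angles),
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.concatenate([graycoprops(glcm, p).ravel() for p in props])

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, seed=0):
    """Minimal Fuzzy C-Means: returns cluster centers and the membership matrix U."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)                 # memberships sum to 1 per sample
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-10
        U = dist ** (-2.0 / (m - 1))                  # standard FCM membership update
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

# Toy usage on a synthetic 8-bit image: denoise, tile into patches,
# extract texture features, then cluster the patches into two fuzzy groups.
img = (np.random.default_rng(1).random((128, 128)) * 255).astype(np.uint8)
img = median_filter(img, size=3)
patches = [img[r:r+32, c:c+32] for r in range(0, 128, 32) for c in range(0, 128, 32)]
X = np.array([glcm_features(p) for p in patches])
centers, U = fuzzy_c_means(X, c=2)
labels = U.argmax(axis=1)                             # hard assignment per patch
rmse = np.sqrt(np.mean((X - centers[labels]) ** 2))   # RMSE of features vs. assigned centers
print(labels, round(float(rmse), 3))
```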
{"title":"Mammogram image segmentation using fuzzy clustering","authors":"R. Boss, K. Thangavel, D. Daniel","doi":"10.1109/ICPRIME.2012.6208360","DOIUrl":"https://doi.org/10.1109/ICPRIME.2012.6208360","url":null,"abstract":"This paper proposes mammogram image segmentation using Fuzzy C-Means (FCM) clustering algorithm. The median filter is used for pre-processing of image. It is normally used to reduce noise in an image. The 14 Haralick features are extracted from mammogram image using Gray Level Co-occurrence Matrix (GLCM) for different angles. The features are clustered by K-Means and FCM algorithms inorder to segment the region of interests for further classification. The performance of segmentation result of the proposed algorithm is measured according to the error values such as Mean Square Error (MSE) and Root Means Square Error (RMSE). The Mammogram images used in our experiment are obtained from MIAS database.","PeriodicalId":148511,"journal":{"name":"International Conference on Pattern Recognition, Informatics and Medical Engineering (PRIME-2012)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131240500","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Design and implementation of secure, platform-free, and network-based remote controlling and monitoring system
Pub Date: 2012-03-21 | DOI: 10.1109/ICPRIME.2012.6208342
C. L. Chowdhary, P. Mouli
In the present scenario, it is challenging to access widely distributed, large-scale data from many network systems through a single network system. Several problems arise, such as monitoring remote devices and controlling their operations. A reliable, secure, and platform-free remote controller with monitoring capability can overcome such problems. In this paper, a new design for a network-based remote controlling and monitoring system is proposed which is platform-free and more secure than other existing systems. The basic concept is to use the network as the basis for real-time remote monitoring and control of processing equipment.
{"title":"Design and implementation of secure, platform-free, and network-based remote controlling and monitoring system","authors":"C. L. Chowdhary, P. Mouli","doi":"10.1109/ICPRIME.2012.6208342","DOIUrl":"https://doi.org/10.1109/ICPRIME.2012.6208342","url":null,"abstract":"In present scenario, it is challenging to access widely distributed and huge data from many network systems to a single network system. There are several problems like, monitoring of remote devices and controlling of its operations. A reliable, secure and platform-free remote controller, with ability of monitoring, can overcome such problems. In this paper, a new design of network-based remote controlling and monitoring system is proposed which is platform-free and more secure in comparison with other existing systems. The basic concept is to use the network base for the purpose of real-time remote monitoring and controlling of processing equipment.","PeriodicalId":148511,"journal":{"name":"International Conference on Pattern Recognition, Informatics and Medical Engineering (PRIME-2012)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121140440","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Image compression using H.264 and deflate algorithm
Pub Date: 2012-03-21 | DOI: 10.1109/ICPRIME.2012.6208351
M. Sundaresan, E. Devika
A compound image is a combination of text, graphics, and pictures. Compression is the process of reducing the amount of data required to represent information; it also reduces the time required for the data to be sent over the Internet or in Web pages. Compound image compression can be performed with either lossy or lossless methods. Lossy compression is a data-encoding method that compresses data by discarding (losing) some of the data in the image, while lossless compression compresses the image without any loss of data. In this paper, different techniques are used for compressing compound images, and their performance is compared.
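The lossless half of the pipeline is easy to illustrate with the standard-library zlib module, which implements deflate; the lossy H.264 path requires an external codec (e.g., ffmpeg) and is not shown. The file name below is a placeholder:

```python
# Deflate (lossless) side only; the H.264 (lossy) path needs an external codec.
import zlib

with open("compound_page.ppm", "rb") as f:      # hypothetical raw compound-image file
    raw = f.read()

compressed = zlib.compress(raw, level=9)        # deflate at maximum compression
ratio = len(raw) / len(compressed)
print(f"{len(raw)} -> {len(compressed)} bytes, ratio {ratio:.2f}:1")
assert zlib.decompress(compressed) == raw       # lossless round trip
```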
{"title":"Image compression using H.264 and deflate algorithm","authors":"M. Sundaresan, E. Devika","doi":"10.1109/ICPRIME.2012.6208351","DOIUrl":"https://doi.org/10.1109/ICPRIME.2012.6208351","url":null,"abstract":"Compound image is combination of text, graphics and pictures. Compression is the process of reducing the amount of data required to represent information. It also reduces the time required for the data to be sent over the Internet or Web pages. Compound image compression is done on the basis of lossy and lossless compression. Lossy compression is a data encoding method that compresses data by discarding (losing) some data in the image. Lossless compression is used to compress the image without any loss of data in the image. Image compression is done using lossy compression and lossless compression. In this paper different techniques are used for compressing compound images. The performance of these techniques has been compared.","PeriodicalId":148511,"journal":{"name":"International Conference on Pattern Recognition, Informatics and Medical Engineering (PRIME-2012)","volume":"91 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127132572","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Geometric feature based face-sketch recognition
Pub Date: 2012-03-21 | DOI: 10.1109/ICPRIME.2012.6208381
S. Pramanik, D. Bhattacharjee
This paper presents a novel face-sketch recognition approach based on facial feature extraction. Because photos and sketches belong to two different modalities and are therefore difficult to match directly, we concentrate on a set of geometric face features such as the eyes, nose, eyebrows, and lips, and their length-to-width ratios. In this system, the facial features/components are first extracted from the training images; then ratios of length, width, area, etc. are calculated and stored as feature vectors for the individual images. The mean feature vector is then computed and subtracted from each feature vector to center the feature vectors. In the next phase, the feature vector for an incoming probe face-sketch is computed in the same fashion. A K-NN classifier is used to recognize the probe face-sketch. It is experimentally verified that the proposed method works well for faces in a frontal pose with normal lighting, neutral expression, and no occlusions. The experiment has been conducted with 80 male and female face images from different face databases. The approach has useful applications for both law enforcement and digital entertainment.
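A minimal sketch of the ratio-feature, centering, and K-NN stages is shown below; facial-component detection is assumed to be done already, each component being reduced to a hypothetical (width, height, area) triple, and the training data is invented purely for illustration:

```python
# Sketch of the ratio-feature + centering + K-NN stage. Component detection is assumed done;
# all measurements and subject names below are hypothetical.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def ratio_features(components):
    """components: {'eye': (w, h, area), ...} -> 1-D vector of length/width and fill ratios."""
    feats = []
    for w, h, area in components.values():
        feats += [w / h, area / (w * h)]
    return np.array(feats)

# Hypothetical training data: one measured component set per subject's photo.
train = {
    "subject_1": {"eye": (30, 12, 280), "nose": (22, 35, 600), "lips": (40, 15, 450)},
    "subject_2": {"eye": (28, 14, 300), "nose": (20, 38, 580), "lips": (44, 13, 430)},
}
X = np.array([ratio_features(c) for c in train.values()])
y = list(train.keys())

mean_vec = X.mean(axis=0)                       # centering step described in the abstract
knn = KNeighborsClassifier(n_neighbors=1).fit(X - mean_vec, y)

# Probe feature vector measured from a sketch, centered the same way.
probe = {"eye": (29, 13, 290), "nose": (21, 36, 590), "lips": (41, 14, 445)}
print(knn.predict([ratio_features(probe) - mean_vec]))
```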
{"title":"Geometric feature based face-sketch recognition","authors":"S. Pramanik, D. Bhattacharjee","doi":"10.1109/ICPRIME.2012.6208381","DOIUrl":"https://doi.org/10.1109/ICPRIME.2012.6208381","url":null,"abstract":"This paper presents a novel facial sketch image or face-sketch recognition approach based on facial feature extraction. To recognize a face-sketch, we have concentrated on a set of geometric face features like eyes, nose, eyebrows, lips, etc and their length and width ratio because it is difficult to match photos and sketches because they belong to two different modalities. In this system, first the facial features/components from training images are extracted, then ratios of length, width, and area etc. are calculated and those are stored as feature vectors for individual images. After that the mean feature vectors are computed and subtracted from each feature vector for centering of the feature vectors. In the next phase, feature vector for the incoming probe face-sketch is also computed in similar fashion. Here, K-NN classifier is used to recognize probe face-sketch. It is experimentally verified that the proposed method is robust against faces are in a frontal pose, with normal lighting and neutral expression and have no occlusions. The experiment has been conducted with 80 male and female face images from different face databases. It has useful applications for both law enforcement and digital entertainment.","PeriodicalId":148511,"journal":{"name":"International Conference on Pattern Recognition, Informatics and Medical Engineering (PRIME-2012)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117029467","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Computational unfoldment of mammograms
Pub Date: 2012-03-21 | DOI: 10.1109/ICPRIME.2012.6208366
M. Joshi, A. Bhale
The importance of mammograms in early breast cancer detection is an accepted fact. Mammograms (either analog x-ray films or digital softcopies) are processed computationally to extract significant information. Several computational techniques/algorithms process mammograms to highlight and reveal otherwise unseen features. Thus mammographic images are computationally unfolded to obtain appropriate information that can be used for further analysis. Computational analysis of mammograms is an essential tool used by numerous specialists for various purposes. In this paper we review such research work reported in the literature in recent years. Our focus is in particular on computational preprocessing of mammograms. Preprocessing involves enhancement of mammographic images as well as extraction of relevant features from the images. We group the various image-enhancement research approaches systematically and categorize research techniques based on the types of features that are extracted and used to obtain the intended results. Although mammograms are used mostly for breast cancer detection, the research is not confined to this aspect alone; researchers have also explored several other areas that deal with mammograms, including image compression and Content-Based Image Retrieval (CBIR). The variety in these research applications is also discussed and presented in this paper.
{"title":"Computational unfoldment of mammograms","authors":"M. Joshi, A. Bhale","doi":"10.1109/ICPRIME.2012.6208366","DOIUrl":"https://doi.org/10.1109/ICPRIME.2012.6208366","url":null,"abstract":"The importance of mammograms in early breast cancer detection is an accepted fact. Mammograms (either an analog x-ray film or a digital softcopy) are computationally empowered to extract significant information. Several computational techniques/algorithms process mammograms to highlight and reveal otherwise unseen features. Thus mammographic images are computationally unfolded to obtain appropriate information that can be used for further analysis. Computational analysis of mammograms is an essential tool, which is used by numerous specialists for various purposes. In this paper we review such research work reported in the literature in recent years. Our focus is in particular on computational preprocessing of mammograms. Preprocessing involves enhancement of mammographic images as well as extraction of relevant features from images. We grouped various image enhancement research approaches systematically. We also categorized various research techniques based on the types of features that are extracted and used to obtain intended results. Although mammograms are used mostly for breast cancer detection, the research is not confined to this aspect only. Several other areas that deal with mammograms are also explored by researchers including image compression, Content based Image Retrieval (CBIR) etc. Variety in these research applications is also discussed and presented in this paper.","PeriodicalId":148511,"journal":{"name":"International Conference on Pattern Recognition, Informatics and Medical Engineering (PRIME-2012)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117221623","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mono and Cross lingual speaker identification with the constraint of limited data
Pub Date: 2012-03-21 | DOI: 10.1109/ICPRIME.2012.6208386
B. Nagaraja, H. S. Jayanna
Nowadays, speaker identification systems play a very important role in fast-growing Internet-based communication and transactions. In this paper, mono-lingual and cross-lingual speaker identification are demonstrated for Indian languages under the constraint of limited data. The languages considered for the study are English, Hindi, and Kannada. Since no standard multi-lingual database is available, experiments are carried out on our own database of 30 speakers who can speak the three different languages. The experimental study found that mono-lingual speaker identification gives better performance with English as the training and testing language, even though it is not the native language of the speakers considered in the study. Further, it was observed in the cross-lingual study that the use of English in either training or testing gives better identification performance.
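The abstract does not specify the features or models used; as a hedged illustration only, the sketch below uses a common limited-data baseline that is not taken from the paper: MFCC features with one Gaussian mixture model per speaker, trained on one language and scored on another. File names are placeholders.

```python
# Assumed baseline (not the paper's method): MFCC features + one GMM per speaker.
# Train on Kannada utterances, test cross-lingually on an English utterance.
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def mfcc_frames(path, sr=16000, n_mfcc=13):
    """Load a wav file and return an array of MFCC frames (frames x coefficients)."""
    y, sr = librosa.load(path, sr=sr)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T

# Placeholder file names: one training recording per speaker.
speakers = {
    "spk01": GaussianMixture(n_components=8).fit(mfcc_frames("spk01_kannada.wav")),
    "spk02": GaussianMixture(n_components=8).fit(mfcc_frames("spk02_kannada.wav")),
}

# Identification: the speaker model with the highest average log-likelihood wins.
test = mfcc_frames("unknown_english.wav")
scores = {name: gmm.score(test) for name, gmm in speakers.items()}
print(max(scores, key=scores.get))
```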
{"title":"Mono and Cross lingual speaker identification with the constraint of limited data","authors":"B. Nagaraja, H. S. Jayanna","doi":"10.1109/ICPRIME.2012.6208386","DOIUrl":"https://doi.org/10.1109/ICPRIME.2012.6208386","url":null,"abstract":"Nowadays, Speaker identification system plays a very important role in the field of fast growing internet based communication/transactions. In this paper, speaker identification in the context of Mono-lingual and Cross-lingual are demonstrated for Indian languages with the constraint of limited data. The languages considered for the study are English, Hindi and Kannada. Since the standard Multi-lingual database is not available, experiments are carried out on an our own created database of 30 speakers who can speak the three different languages. It was found out in the experimental study that the Mono-lingual speaker identification gives better performance with English as training and testing language though it is not a native language of speakers considered for the study. Further, it was observed in Cross-lingual study that the use of English language either in training or testing gives better identification performance.","PeriodicalId":148511,"journal":{"name":"International Conference on Pattern Recognition, Informatics and Medical Engineering (PRIME-2012)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114945274","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A novel approach for nose tip detection using smoothing by weighted median filtering applied to 3D face images in variant poses
Pub Date: 2012-03-21 | DOI: 10.1109/ICPRIME.2012.6208357
P. Bagchi, D. Bhattacharjee, M. Nasipuri, D. K. Basu
This paper is based on an application of smoothing of 3D face images followed by feature detection, i.e., detecting the nose tip. The present method uses a weighted mesh median filtering technique for smoothing: we build the neighborhood surrounding a particular point of the 3D face and replace that point with a weighted value of the surrounding points in the 3D face image. After applying the smoothing technique to the 3D face images, our experimental results show considerable improvement compared with the algorithm without smoothing. We use the maximum intensity algorithm for detecting the nose tip, and this method correctly detects the nose tip under any pose, i.e., rotations about the X, Y, and Z axes. The present technique worked successfully on 535 out of 542 3D face images, compared with the method without smoothing, which worked on only 521 of the 542 face images. Thus we obtained a 98.70% success rate versus the 96.12% success rate of the algorithm without smoothing. All the experiments have been performed on the FRAV3D database.
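The following sketch conveys the general idea under stated assumptions rather than the authors' exact filter: each vertex's depth is smoothed with inverse-distance weights over its k nearest neighbours (the paper's weighted mesh median differs in detail), and the nose tip is then taken as the point of maximum depth, in the spirit of the maximum-intensity rule. A synthetic bump stands in for a FRAV3D scan.

```python
# Rough sketch, not the authors' code: neighbourhood-weighted depth smoothing
# followed by a maximum-depth nose-tip pick on a roughly frontal point cloud.
import numpy as np
from scipy.spatial import cKDTree

def smooth_depth(points, k=8):
    """points: (N, 3) array of x, y, z vertices; returns a depth-smoothed copy."""
    tree = cKDTree(points[:, :2])
    dist, idx = tree.query(points[:, :2], k=k)        # k nearest neighbours per vertex
    w = 1.0 / (1.0 + dist)                            # inverse-distance weights (assumption)
    z = (points[idx, 2] * w).sum(axis=1) / w.sum(axis=1)
    smoothed = points.copy()
    smoothed[:, 2] = z
    return smoothed

def nose_tip(points):
    """Maximum-depth rule on the smoothed surface."""
    return points[np.argmax(points[:, 2])]

# Toy usage with a noisy synthetic bump standing in for a 3D face scan.
rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, size=(2000, 2))
z = np.exp(-4 * (xy ** 2).sum(axis=1)) + 0.05 * rng.standard_normal(2000)
cloud = np.column_stack([xy, z])
print(nose_tip(smooth_depth(cloud)))                  # should land near (0, 0, ~1)
```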
{"title":"A novel approach for nose tip detection using smoothing by weighted median filtering applied to 3D face images in variant poses","authors":"P. Bagchi, D. Bhattacharjee, M. Nasipuri, D. K. Basu","doi":"10.1109/ICPRIME.2012.6208357","DOIUrl":"https://doi.org/10.1109/ICPRIME.2012.6208357","url":null,"abstract":"This paper is based on n application of smoothing of 3D face images followed by feature detection i.e. detecting the nose tip. The present method uses a weighted mesh median filtering technique for smoothing. In this present smoothing technique we have built the neighborhood surrounding a particular point in 3D face and replaced that with the weighted value of the surrounding points in 3D face image. After applying the smoothing technique to the 3D face images our experimental results show that we have obtained considerable improvement as compared to the algorithm without smoothing. We have used here the maximum intensity algorithm for detecting the nose-tip and this method correctly detects the nose-tip in case of any pose i.e. along X, Y, and Z axes. The present technique gave us worked successfully on 535 out of 542 3D face images as compared to the method without smoothing which worked only on 521 3D face images out of 542 face images. Thus we have obtained a 98.70% performance rate over 96.12% performance rate of the algorithm without smoothing. All the experiments have been performed on the FRAV3D database.","PeriodicalId":148511,"journal":{"name":"International Conference on Pattern Recognition, Informatics and Medical Engineering (PRIME-2012)","volume":"136 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116715867","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A novel approach for Kannada text extraction
Pub Date: 2012-03-21 | DOI: 10.1109/ICPRIME.2012.6208387
S. Seeri, S. Giraddi, B. Prashant
The popularity of digital cameras is increasing rapidly because of their availability and advanced applications. The detection and extraction of text regions in an image is a well-known problem in computer vision. Text in images contains useful semantic information which can be used to fully understand the images. The proposed method aims at detecting and extracting Kannada text from government-organization signboard images acquired by a digital camera. Segmentation is performed using an edge-detection method, and heuristic features are used to remove the non-text regions. Kannada text identification is performed using the boundary length of the object strokes as a structural feature, and a rule-based method is employed to validate the objects as Kannada text. The proposed method is effective and efficient, and encouraging results are obtained: a precision rate of 84.21%, a recall rate of 83.16%, and a Kannada text identification accuracy of 75.77%. Hence the proposed method is robust to font size, small orientations, and text alignment.
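A generic sketch of the edge-based segmentation and heuristic filtering stages is given below using OpenCV; the thresholds are arbitrary stand-ins, and the paper's Kannada-specific stroke-boundary-length rules are not reproduced:

```python
# Generic edge-based text-region candidate detection (OpenCV 4 return signatures).
# The input file name and all thresholds are placeholders, not the paper's values.
import cv2

img = cv2.imread("signboard.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical signboard image
edges = cv2.Canny(img, 100, 200)                           # edge-detection segmentation
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

text_boxes = []
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    aspect = w / float(h)
    # crude size/aspect heuristics standing in for the paper's non-text rejection rules
    if 20 < w * h < 20000 and 0.1 < aspect < 10:
        text_boxes.append((x, y, w, h))

print(f"{len(text_boxes)} candidate text regions")
```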
{"title":"A novel approach for Kannada text extraction","authors":"S. Seeri, S. Giraddi, B. Prashant","doi":"10.1109/ICPRIME.2012.6208387","DOIUrl":"https://doi.org/10.1109/ICPRIME.2012.6208387","url":null,"abstract":"Popularity of the digital cameras is increasing rapidly day by day because of advanced applications and availability of digital cameras. The detection and extraction of text regions in an image is a well known problem in the computer vision. Text in images contains useful semantic information which can be used to fully understand the images. Proposed method aims at detecting and extracting Kannada text from government organization signboard images acquired by digital camera. Segmentation is performed using edge detection method and heuristic features are used to remove the non text regions. Kannada text identification is performed using the structural feature boundary length of the object strokes. Rule based method is employed to validate the objects as Kannada text. The proposed method is effective, efficient and encouraging results are obtained. It has the precision rate of 84.21%, recall rate of 83.16% and Kannada text identification accuracy of 75.77%. Hence proposed method is robust with font size, small orientation and alignment of text.","PeriodicalId":148511,"journal":{"name":"International Conference on Pattern Recognition, Informatics and Medical Engineering (PRIME-2012)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114730089","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Emotion recognition — An approach to identify the terrorist
Pub Date: 2012-03-21 | DOI: 10.1109/ICPRIME.2012.6208383
N. Raju, P. Preethi, T. L. Priya, S. Mathini
The emotional influence on human behavior can be identified from speech, and recognition of emotion plays a vital role in many fields. In this paper, we distinguish a normal person from a terrorist/victim by identifying their emotional state from speech. The emotional states dealt with in this paper are neutral, sad, anger, fear, etc. Two different pitch-estimation algorithms are used to extract the pitch, and a support vector machine is used to classify the emotional state. The accuracy level of the classifier differentiates the emotional state of a normal person from that of a terrorist/victim. Across all emotions, the average classification accuracy for both male and female speakers is 80%.
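As a rough sketch only: the paper's two pitch algorithms are not identified here, so a generic autocorrelation pitch estimator is shown together with a scikit-learn SVM; the per-utterance pitch statistics and labels are invented placeholders, not the paper's data.

```python
# Sketch: autocorrelation pitch estimate + SVM emotion classifier on pitch statistics.
# Training data below is hypothetical and used only to show the call pattern.
import numpy as np
from sklearn.svm import SVC

def autocorr_pitch(frame, sr=16000, fmin=60, fmax=400):
    """Estimate the fundamental frequency of one speech frame via autocorrelation."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + np.argmax(ac[lo:hi])                 # strongest periodicity in the F0 range
    return sr / lag

frame = np.sin(2 * np.pi * 200 * np.arange(0, 0.03, 1 / 16000))   # 200 Hz test tone
print(round(autocorr_pitch(frame)))                                # ~200

# Hypothetical training set: [mean pitch, pitch std] per utterance -> emotion label.
X_train = np.array([[180, 25], [120, 10], [250, 60], [140, 15]])
y_train = ["neutral", "sad", "anger", "fear"]

clf = SVC(kernel="rbf", gamma="scale").fit(X_train, y_train)
print(clf.predict([[230, 55]]))     # classify an unseen utterance's pitch statistics
```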
{"title":"Emotion recognition — An approach to identify the terrorist","authors":"N. Raju, P. Preethi, T. L. Priya, S. Mathini","doi":"10.1109/ICPRIME.2012.6208383","DOIUrl":"https://doi.org/10.1109/ICPRIME.2012.6208383","url":null,"abstract":"The emotional influence on human behavior can be identified by speech. Recognition of emotion plays a vital role in many fields such as automatic emotion recognition etc. In this paper, we distinguish a normal person from the terrorist/victim by identifying their emotional state from speech. Emotional states dealt with in this paper are neutral, sad, anger, fear, etc. Two different algorithm of pitch is used to extract the pitch here. Moreover, support vector machine is used to classify the emotional state. The accuracy level of the classifier differentiates the emotional state of the normal person from the terrorist/victim. For the classification of all emotions, the average accuracy of both male and female is 80%.","PeriodicalId":148511,"journal":{"name":"International Conference on Pattern Recognition, Informatics and Medical Engineering (PRIME-2012)","volume":"2020 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114733926","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}