A novel method to measure the reliability of the Bollywood movie rating system
Pub Date: 2013-04-15 | DOI: 10.1109/ICPRIME.2013.6496497
R. Gupta, N. Garg, A. Das
The success of a movie is stochastic, but it is no secret that it depends to a large extent on the level of advertisement and on the ratings given by the major movie critics. The general audience values its time and money and hence refers to the leading critics when deciding whether to watch a particular movie. Because of this, production houses may try to influence critics into providing fraudulent ratings in order to boost their own business or hurt a competitor's. In this paper, we use the kappa measure, a statistic for inter-rater agreement, to analyse the concordance among Bollywood ratings and among Hollywood ratings. Our study shows a statistically significant disagreement between Indian critics, suggesting that the ratings are biased, whereas the Hollywood ratings show good agreement and are thus more reliable. This peculiarity has gone unnoticed so far, and no previous studies exist on such mismatching patterns in the ratings. The result implies a considerable bias among Indian critics, so Indian audiences are not getting the benefit of an impartial critic to guide their judgement. The same methodology was applied to Tamil movies (Kollywood) to further investigate critic agreement in a regional movie industry. The state of affairs is such that even a viewer who consults a number of independent critics is unlikely to form a clear picture of a movie's actual worth. Our paper shows that Indian viewers should not rely heavily on movie critics and that the Bollywood movie rating system is in serious need of an overhaul.
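The abstract names the kappa measure but does not spell out the computation. As an illustration, here is a minimal sketch of Cohen's kappa for two critics rating the same films on a coarse three-level scale; the ratings data are hypothetical.

```python
from collections import Counter

def cohen_kappa(ratings_a, ratings_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    n = len(ratings_a)
    # Observed agreement: fraction of films both critics rate identically.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Expected agreement if the two critics rated independently.
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical binned star ratings from two critics for ten films.
critic1 = ["hi", "hi", "lo", "mid", "hi", "lo", "mid", "hi", "lo", "mid"]
critic2 = ["hi", "mid", "lo", "mid", "lo", "lo", "mid", "hi", "hi", "mid"]
print(f"kappa = {cohen_kappa(critic1, critic2):.2f}")  # ~0.55 for this data
```

Values near 1 indicate strong agreement and values near 0 indicate agreement no better than chance, the pattern the paper reports for the Indian critics.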
{"title":"A novel method to measure the reliability of the bollywood movie rating system","authors":"R. Gupta, N. Garg, A. Das","doi":"10.1109/ICPRIME.2013.6496497","DOIUrl":"https://doi.org/10.1109/ICPRIME.2013.6496497","url":null,"abstract":"The success of a movie is stochastic but it is no secret that it is dependent to a large extent upon the level of advertisement and also upon the ratings received by the major movie critics. The general audience values their time and money and hence, refers to the leading critics when making a decision about whether to watch a particular movie or not. Due to this, several production houses tends to influence the critics to provide fraudulent ratings in order to increase one's business or decrease other movie's business. In our paper, we have used a methodology called Kappa Measure to analyse the concordance of the Bollywood and Hollywood movie ratings among themselves. Our study proves that there is a statistically significant disagreement between Indian critics, implying that the ratings are biased. The Hollywood ratings showed good agreement and thus, are more reliable. This peculiarity had gone unnoticed so far and no previous studies exist regarding such mismatching patterns in the ratings. Such a result implies that there is a considerable bias among Indian critics and thus, the Indian audiences are not getting the benefit of an impartial critic to guide their judgement. The same methodology was used for Tamil movies (Kollywood) to further investigate the agreement among critics with respect to a regional movie industry. The state of affairs is such that even if a viewer relies on a number of independent critics to form a judgement about a movie's worth, she/he is unlikely to form a clear picture of the movie's actual worth. Our paper shows that the Indian viewers should not rely heavily on movie critics and also that the Bollywood movie rating system is in serious need of an overhaul.","PeriodicalId":123210,"journal":{"name":"2013 International Conference on Pattern Recognition, Informatics and Mobile Engineering","volume":"184 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114836796","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Web based testing — An optimal solution to handle peak load
Pub Date: 2013-04-15 | DOI: 10.1109/ICPRIME.2013.6496439
B. Vani, R. Deepalakshmi, S. Suriya
Software testing is a difficult task, and testing web applications can be even more difficult due to the peculiarities of such applications. One way to assess IT-infrastructure performance is through load testing, which lets you assess how your web site supports its expected workload by running a specified set of scripts that emulate customer behavior at different load levels. This paper describes the QoS factors that load testing addresses, how to conduct load testing, and how it addresses business needs at several requirement levels, and it presents the efficiency of web-based applications in terms of QoS, throughput, and response time.
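The paper's own scripts are not reproduced in the abstract; as an illustration of the script-driven approach it describes, here is a minimal load-test sketch using only Python's standard library. The target URL, load levels, and request count are placeholders.

```python
import time
import concurrent.futures
import urllib.request

def fetch(url):
    """One emulated customer request; returns the response time in seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=10) as resp:
        resp.read()
    return time.perf_counter() - start

def load_test(url, concurrency, requests_total):
    """Fire requests at a fixed concurrency level and report QoS metrics."""
    start = time.perf_counter()
    with concurrent.futures.ThreadPoolExecutor(max_workers=concurrency) as pool:
        times = list(pool.map(fetch, [url] * requests_total))
    wall = time.perf_counter() - start
    print(f"concurrency={concurrency:3d}  "
          f"throughput={requests_total / wall:6.1f} req/s  "
          f"avg response={sum(times) / len(times) * 1000:6.1f} ms")

# Step up the load level to find where response time starts to degrade.
for level in (1, 10, 50):
    load_test("http://localhost:8080/", level, 200)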
{"title":"Web based testing — An optimal solution to handle peak load","authors":"B. Vani, R. Deepalakshmi, S. Suriya","doi":"10.1109/ICPRIME.2013.6496439","DOIUrl":"https://doi.org/10.1109/ICPRIME.2013.6496439","url":null,"abstract":"Software Testing is a difficult task and testing web applications may be even more difficult due to peculiarities of such applications. One way to assess IT infrastructure performance is through load testing, which lets you assess how your Web site supports its expected workload by running a specified set of scripts that emulate customer behavior at different load levels. This paper describe the QoS factors load testing addresses, how to conduct load testing, and how it addresses business needs at several requirement levels and presents the efficiency of web based applications in terms of QoS, throughput and Response Time.","PeriodicalId":123210,"journal":{"name":"2013 International Conference on Pattern Recognition, Informatics and Mobile Engineering","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115051516","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mining conceptual rules for web document using sentence ranking conditional probability
Pub Date: 2013-04-15 | DOI: 10.1109/ICPRIME.2013.6496467
V. Navaneethakumar
Text classification and information mining are two significant objectives of natural language processing. While handcrafting rules for both tasks has a long tradition, learning strategies have gained much attention in recent years. Existing work presented a concept-based mining model for text and sentence mining but does not support text classification. To enhance the text-clustering approach, we first presented Conceptual Rule Mining on Text clusters to evaluate the most related and influential sentences contributing to the document topic. However, this model might discriminate terms with semantic variation and negligible authority over the sentence meaning. We therefore extend conceptual text clustering to web documents by assigning sentence weights based on conditional probability. A probability ratio is computed for sentence similarity, from which the unique sentence meanings contributing to the document topic are listed. In this work, we rank the sentences using the weights assigned to each sentence; with the sentence ranks, conceptual rules are defined for the text-cluster documents. The conceptual rules capture a finer-tuned document topic and sentence meaning, which are used to evaluate the global document contribution. Experiments are conducted on web documents extracted from research repositories to evaluate the efficiency of the proposed conceptual rule mining on web document clusters using sentence-ranking conditional probability (CRMSRCP), compared with an existing model for concept-based clustering and classification and with our previous work, in terms of sentence-term relation, cluster-object weights, and cluster quality.
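The paper's exact weighting scheme is not reproduced in the abstract. As a sketch of the general idea, the fragment below scores each sentence by the average conditional probability of its terms given the document and ranks sentences by that weight; the tokenization and example text are illustrative only.

```python
import re
from collections import Counter

def rank_sentences(document):
    """Rank sentences by the average conditional probability of their
    terms given the document, a rough proxy for topic contribution."""
    sentences = [s for s in re.split(r"[.!?]\s+", document) if s]
    terms = re.findall(r"[a-z]+", document.lower())
    term_freq = Counter(terms)
    total = sum(term_freq.values())

    def weight(sentence):
        words = re.findall(r"[a-z]+", sentence.lower())
        if not words:
            return 0.0
        # Average P(term | document) over the sentence's terms.
        return sum(term_freq[w] / total for w in words) / len(words)

    return sorted(sentences, key=weight, reverse=True)

doc = ("Clustering groups similar documents. Conceptual rules describe "
       "clusters. Clustering quality depends on sentence weights.")
for s in rank_sentences(doc):
    print(s)
```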
{"title":"Mining conceptual rules for web document using sentence ranking conditional probability","authors":"V. Navaneethakumar","doi":"10.1109/ICPRIME.2013.6496467","DOIUrl":"https://doi.org/10.1109/ICPRIME.2013.6496467","url":null,"abstract":"Text classification and information mining are two significant objectives of natural language processing. Whereas handcrafting rules for both tasks has an extensive convention, learning strategies increased much attention in the past. Existing work presented concept based mining model for text, sentence mining and does not support text classification. To enhance the text clustering approach, we first presented Conceptual Rule Mining On Text clusters to evaluate the more related and influential sentences contributing the document topic. But this model might discriminate terms with semantic variation and negligible authority on the sentence meaning. In addition, we plan to extend conceptual text clustering to web documents, by assigning sentence weights based on conditional probability. Probability ratio is identified for the sentence similarity from which unique sentence meaning contributing to the document topic are listed. In this work, our plan is to implement ranking of the sentences which are calculated using the weights assigned to each and every sentences. With sentence rank conceptual rules are defined for the text cluster documents. The conceptual rule depicts finer tuned document topic and sentence meaning utilized to evaluate the global document contribution. Experiments are conducted with the web documents extracted from the research repositories to evaluate the efficiency of the proposed efficient conceptual rule mining on web document clusters using sentence ranking conditional probability [CRMSRCP] and compared with an existing Model for Concept Based Clustering and Classification and our previous works in terms of Sentence Term Relation, Cluster Object weights, and cluster quality.","PeriodicalId":123210,"journal":{"name":"2013 International Conference on Pattern Recognition, Informatics and Mobile Engineering","volume":"128 12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123198952","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reversible data hiding technique for stream ciphered and wavelet compressed image
Pub Date: 2013-04-15 | DOI: 10.1109/ICPRIME.2013.6496504
S. Rosaline, C. Rengarajaswamy
Reversible data hiding (RDH) aims to recover the original content from the marked media. Since the original image is required in some applications, RDH plays a vital role in such situations. The multimedia content is secured by encryption, and transmission time is further decreased by compressing the encrypted images; compression reduces the amount of data required to represent the image. The content owner first encrypts the original image using a stream cipher. The encrypted image is then used as the cover for hiding a secret image, and the embedded image is compressed using wavelet compression. The receiver performs the three processes in reverse to recover both the original image and the secret image: the compressed image is first decompressed; second, the data-hiding key is employed to extract the secret message; third, the encryption key is employed to decrypt and recover the original content. This paper thus focuses on achieving better security and an improved transmission rate.
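The abstract does not specify the cipher beyond "stream cipher". The sketch below shows only that stage of the pipeline, using a hash-derived keystream purely for illustration (a real system would use a vetted stream cipher such as ChaCha20), to make the lossless round trip concrete.

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """Counter-mode keystream from repeated hashing (illustrative only)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def stream_cipher(data: bytes, key: bytes) -> bytes:
    """XOR with the keystream; the same call encrypts and decrypts."""
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

pixels = bytes(range(16))                 # stand-in for raw image bytes
enc = stream_cipher(pixels, b"owner-key")
assert stream_cipher(enc, b"owner-key") == pixels  # lossless round trip
```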
{"title":"Reversible data hiding technique for stream ciphered and wavelet compressed image","authors":"S. Rosaline, C. Rengarajaswamy","doi":"10.1109/ICPRIME.2013.6496504","DOIUrl":"https://doi.org/10.1109/ICPRIME.2013.6496504","url":null,"abstract":"Reversible Data Hiding (RDH) Technique aims in recovering back the original content from the marked media. The original image is desirable in some applications. Thus, RDH plays a vital role in such situations. Securing the multimedia content can be achieved by performing encryption. Transmission time is further decreased by compressing such encrypted images. The process of compression reduces the amount of data required for representing the image. The content owner thus encrypts the original image using Stream Cipher process. The encrypted image is then used as the media for hiding secret image. The embedded image can then be compressed using wavelet compression. The receiver does all the three processes in reverse for getting back the original image and the secret image. Thus the compressed image is first decompressed. Second, the data hiding key is employed to extract the secret message. Third, the encryption key is employed to decrypt and get back the original content. Thus this paper focuses on achieving better security and improved transmission rate.","PeriodicalId":123210,"journal":{"name":"2013 International Conference on Pattern Recognition, Informatics and Mobile Engineering","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128986353","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A survey on content based image retrieval
Pub Date: 2013-04-15 | DOI: 10.1109/ICPRIME.2013.6496719
T. Dharani, I. L. Aroquiaraj
A literature survey is important for understanding and gaining knowledge about a specific area of a subject. This paper presents a survey on content-based image retrieval. Content-Based Image Retrieval (CBIR) is a technique that uses visual features of an image, such as colour, shape, and texture, to search a large image database for the images a user requires, given a query image. We consider CBIR over both labelled and unlabelled images, analysing which is more effective for different retrieval processes such as D-EM, SVM, and RF. To determine this, we review the literature on CBIR based on unlabelled images, and we give some recommendations for improving CBIR systems that use unlabelled images.
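The survey covers many feature types; as a concrete instance of the simplest one it mentions (colour), here is a minimal sketch of histogram-based retrieval. The images are random stand-ins for a real database.

```python
import numpy as np

def colour_histogram(image, bins=8):
    """Coarse per-channel colour histogram, the simplest CBIR feature."""
    hist = [np.histogram(image[..., c], bins=bins, range=(0, 256))[0]
            for c in range(3)]
    h = np.concatenate(hist).astype(float)
    return h / h.sum()

def similarity(h1, h2):
    """Histogram intersection: 1.0 means identical colour distributions."""
    return np.minimum(h1, h2).sum()

rng = np.random.default_rng(0)
query = rng.integers(0, 256, (64, 64, 3))
database = [rng.integers(0, 256, (64, 64, 3)) for _ in range(5)]
q = colour_histogram(query)
scores = [similarity(q, colour_histogram(img)) for img in database]
print("best match: image", int(np.argmax(scores)))
```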
{"title":"A survey on content based image retrieval","authors":"T. Dharani, I. L. Aroquiaraj","doi":"10.1109/ICPRIME.2013.6496719","DOIUrl":"https://doi.org/10.1109/ICPRIME.2013.6496719","url":null,"abstract":"Literature survey is most important for understanding and gaining much more knowledge about specific area of a subject. In this paper a survey on content based image retrieval presented. Content Based Image Retrieval (CBIR) is a technique which uses visual features of image such as color, shape, texture, etc... to search user required image from large image database according to user's requests in the form of a query image. We consider Content Based Image Retrieval viz. labelled and unlabelled images for analyzing efficient image for different image retrieval process viz. D-EM, SVM, RF, etc. To determining the efficient imaging for Content Based Image Retrieval, We performance literature review by using principles of Content Based Image Retrieval based unlabelled images. And also give some recommendations for improve the CBIR system using unlabelled images.","PeriodicalId":123210,"journal":{"name":"2013 International Conference on Pattern Recognition, Informatics and Mobile Engineering","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124702263","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Worst case scenario analysis for dynamic target tracking using design of experiments
Pub Date: 2013-04-15 | DOI: 10.1109/ICPRIME.2013.6496481
C. Chithapuram, C. A. Kumar, Y. Jeppu
Design of Experiments (DOE) is a mathematical methodology employed for information gathering and inference. This research uses the DOE methodology to analyze the worst-case scenario for guiding a UAV (Unmanned Aerial Vehicle) to a maneuvering target. Using a minimal set of simulations, DOE identifies the worst-case target-tracking scenario, which is validated against the results obtained from a large number of simulations.
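The paper's actual factors and levels are not listed in the abstract. The sketch below shows the shape of a two-level full-factorial design over three hypothetical tracking factors, with a placeholder in place of the real tracking simulation.

```python
from itertools import product

# Hypothetical factors affecting tracking error; levels are illustrative.
factors = {
    "target_speed":  (50.0, 150.0),    # m/s, low/high levels
    "turn_rate":     (1.0, 9.0),       # deg/s
    "initial_range": (2000.0, 8000.0)  # m
}

def simulate_miss_distance(target_speed, turn_rate, initial_range):
    """Placeholder for the tracking simulation; returns a miss distance."""
    return 0.02 * target_speed * turn_rate + 0.001 * initial_range

# Full 2^3 factorial: every combination of low/high factor levels,
# eight runs instead of a dense sweep over the whole scenario space.
worst = max(product(*factors.values()),
            key=lambda levels: simulate_miss_distance(*levels))
print("worst-case levels:", dict(zip(factors, worst)))
```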
{"title":"Worst case scenario analysis for dynamic target tracking using design of experiments","authors":"C. Chithapuram, C. A. Kumar, Y. Jeppu","doi":"10.1109/ICPRIME.2013.6496481","DOIUrl":"https://doi.org/10.1109/ICPRIME.2013.6496481","url":null,"abstract":"Design of Experiments (DOE) is a mathematical methodology employed for information gathering and inference. This research uses the Design of Experiments methodology to analyze the worst case scenario for guiding the UAV (Unmanned Aerial Vehicle) to a maneuverable target by an UAV. Using a minimal set of simulations the DOE provides the worst case target tracking scenario against the results obtained with several simulations.","PeriodicalId":123210,"journal":{"name":"2013 International Conference on Pattern Recognition, Informatics and Mobile Engineering","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126944749","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Radial basis function model for vegetable price prediction
Pub Date: 2013-04-15 | DOI: 10.1109/ICPRIME.2013.6496514
N. Hemageetha, G. M. Nasira
The agricultural sector needs more support for its development in developing countries like India. Price prediction helps farmers, and also the government, to make effective decisions. Given the complexity of vegetable price prediction, we exploit characteristic strengths of neural networks, such as self-adaptation, self-learning, and high fault tolerance, to build a back-propagation neural network (BPNN) model and a radial basis function (RBF) neural network model for predicting vegetable prices. Prediction models were set up by applying the BPNN and RBF networks. Taking the tomato as an example, the parameters of the models are analysed through experiments and the forecasts of the two networks are compared. The results show that the RBF neural network is more efficient and accurate than the back-propagation neural network.
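The paper's data and network configurations are not given in the abstract. As a sketch of the core of the RBF approach, the fragment below fits the linear output layer of a Gaussian RBF network to a synthetic monthly price series; the data, centre count, and width are assumptions.

```python
import numpy as np

def rbf_design_matrix(x, centers, width):
    """Gaussian RBF features: phi_ij = exp(-(x_i - c_j)^2 / (2 w^2))."""
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

rng = np.random.default_rng(0)
# Synthetic stand-in for a monthly tomato price series (seasonal + noise).
months = np.arange(48, dtype=float)
price = 20 + 8 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 1, 48)

centers = np.linspace(months.min(), months.max(), 10)
Phi = rbf_design_matrix(months, centers, width=3.0)
# Linear output layer fitted by least squares.
weights, *_ = np.linalg.lstsq(Phi, price, rcond=None)

next_month = np.array([48.0])
forecast = rbf_design_matrix(next_month, centers, 3.0) @ weights
print(f"forecast for month 48: {forecast[0]:.2f}")
```

Unlike a BPNN, which trains all layers by gradient descent, the RBF model reduces to one linear solve once the centres are fixed, which is one source of the efficiency the paper reports.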
{"title":"Radial basis function model for vegetable price prediction","authors":"N. Hemageetha, G. M. Nasira","doi":"10.1109/ICPRIME.2013.6496514","DOIUrl":"https://doi.org/10.1109/ICPRIME.2013.6496514","url":null,"abstract":"The Agricultural sector needs more support for its development in developing countries like India. Price prediction helps the farmers and also the Government to make effective decision. Based on the complexity of vegetable price prediction, making use of the characteristics of data mining classification technique like neural networks such as self-adapt, self-study and high fault tolerance, to build up the model of Back-propagation neural network (BPNN) and Radial basis function neural network (RBF) to predict vegetable price. A prediction models were set up by applying the BPNN and RBF neural networks. Taking tomato as an example, the parameters of the model are analysed through experiment. Compare the two neural network forecast results. The result shows that the RBF neural network is more efficient and accurate than Back-propagation neural network.","PeriodicalId":123210,"journal":{"name":"2013 International Conference on Pattern Recognition, Informatics and Mobile Engineering","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132008552","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Stochastic decomposition on multi-server crash in sequential revamp scheme
Pub Date: 2013-04-15 | DOI: 10.1109/ICPRIME.2013.6496449
K. Sivaselvan, C. Vijayalakshmi
Network optimization techniques find a prime application in the design and operational analysis of large-scale networks. In stochastic models of communication and computer networks, server crashes have received considerable attention, because they have a noticeably negative impact on network performance and functionality: processor failures, service disruptions, job priorities, and other peripheral disturbances. This paper deals with a multi-queue network of N servers in which crashes of several types may occur randomly. Multiple paths may exist between source and destination nodes, governing traffic-load variations, overhead, and response time. Each type of crash requires repair through a finite number of stages before service is restored, and failed servers are repaired in sequential order, each stage following the previous one. Moreover, traffic congestion has become a critical problem that deteriorates the quality of service for network users. Stochastic decomposition is employed to obtain approximations for the queue-length distributions. The usefulness of a particular stochastic model depends both on its computational advantages and on the extent to which it can be adapted to describe different phenomena. Graphical representations show how the new method improves the performance measures of the queueing network under multi-server crashes.
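The abstract does not state which decomposition is used. As background only, not the paper's own formula, the classical result of this type is the Fuhrmann-Cooper decomposition for an M/G/1 queue whose server is intermittently unavailable (e.g., under repair):

```latex
% Fuhrmann-Cooper decomposition (classical background result): for an
% M/G/1 queue with generalized vacations, the stationary queue length
% L_v decomposes in distribution as L_v = L + L_I, where L is the queue
% length of the corresponding M/G/1 queue without vacations and L_I is
% the number of arrivals present at a random epoch of a non-serving
% (vacation/repair) period, independent of L. In transform form:
\[
  \Pi_v(z) \;=\; \Pi_{M/G/1}(z)\,\Pi_I(z),
  \qquad
  \Pi_{M/G/1}(z) \;=\;
    \frac{(1-\rho)(1-z)\,B^{*}\!\bigl(\lambda(1-z)\bigr)}
         {B^{*}\!\bigl(\lambda(1-z)\bigr)-z},
\]
% with arrival rate \lambda, utilization \rho = \lambda E[B] < 1, and
% B^{*} the Laplace-Stieltjes transform of the service-time distribution
% (the Pollaczek-Khinchine transform formula).
```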
{"title":"Stochastic decomposition on multi-server crash in sequential revamp scheme","authors":"K. Sivaselvan, C. Vijayalakshmi","doi":"10.1109/ICPRIME.2013.6496449","DOIUrl":"https://doi.org/10.1109/ICPRIME.2013.6496449","url":null,"abstract":"Network optimization techniques have found a prime application to design the framework and operational analysis for large scale network. In the stochastic system of communication and computer network the server crash have massively more attention, for the reason that noticeably a negative impact on the performance and functionality of computer networks such as processor failure, a service disruption, job priority and some peripheral riot factor. This paper deals with multi queue network of N server may occur randomly with many types of crashes. Multiple paths may exists between the source-destination nodes that direct that traffic load variations, overhead or response time. Each types of crash require repairs a finite number of stages before the service is smarten up. Abortive servers are repair in the sequential order follows the previous stage. Moreover the traffic congestion has become a critical problem which deteriorate the Quality of Service for network users. Stochastic decomposition has employed to obtain approximations for the queue length distributions. The usefulness of a particular stochastic model depends on both its computational advantages and on the extent to which it can be adjust to describe different phenomena. Graphical representation shows that how the new method improves papers the performance measure on queueing network in multi server crash.","PeriodicalId":123210,"journal":{"name":"2013 International Conference on Pattern Recognition, Informatics and Mobile Engineering","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125633244","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A deformable 15D approach for localization and recognition of road traffic monocular images
Pub Date: 2013-04-15 | DOI: 10.1109/ICPRIME.2013.6496452
N. Sankara, T. M. Brughuram
This paper presents a strategic approach for localizing and recognizing vehicles in traffic scenes captured by a monocular camera or video. Previous studies on vehicle localization and recognition include model-based recognition, 3D triangle-based modelling, models based on wheel alignment, and Ferryman's 29D PCA-coefficient model. The disadvantages of these proposals are affine-transformation issues, redundant data, computational noise, inability to arrive at accurate shape parameters, poor occlusion detection, and excessive modelling. This paper addresses these issues and proposes a deformable, efficient local-gradient-based method for localizing the vehicle, together with an evolutionary fitness-evaluation method using an estimation-of-distribution algorithm (EDA) for recognizing the exact vehicle model in the traffic scene. Each image is projected (12D + 3D = 15D) onto the image plane. Since the vehicle moves over the ground plane, its pose is determined by the position coefficients X, Y and the orientation Θ (3D); the remaining 12 parameters describe the shape and are set up as prior information based on mined rules for vehicle localization, with a continuous EDA approach for vehicle recovery. The system also deals with occlusion of related structures based on stochastic analysis.
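Neither the projection model nor the shape basis appears in the abstract. The sketch below only illustrates the 3D pose (X, Y, Θ) on the ground plane acting on a box-shaped stand-in for the 12 shape parameters, projected with assumed pinhole intrinsics; all geometry and numbers are hypothetical.

```python
import numpy as np

def project_vehicle(pose, shape, K):
    """Place a shape-parameterized wireframe on the ground plane and
    project it with a pinhole camera (illustrative geometry only)."""
    x, y, theta = pose                   # ground-plane position + heading
    length, width, height = shape[:3]    # first 3 of the 12 shape params
    # Box corners in the vehicle frame (z up from the ground plane).
    corners = np.array([[sx * length / 2, sy * width / 2, sz * height]
                        for sx in (-1, 1) for sy in (-1, 1) for sz in (0, 1)])
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])  # rotation about z
    world = corners @ R.T + np.array([x, y, 0.0])
    # Camera at the origin looking along +x: depth = x, right = y, down = -z.
    cam = np.stack([world[:, 1], -world[:, 2], world[:, 0]], axis=1)
    uvw = cam @ K.T
    return uvw[:, :2] / uvw[:, 2:3]      # image-plane coordinates

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # intrinsics
pts = project_vehicle(pose=(12.0, 1.5, 0.3),
                      shape=np.array([4.5, 1.8, 1.5] + [0.0] * 9), K=K)
print(pts.round(1))
```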
Trust node valuation and path reliability technique for intrusion detection in MANET
Pub Date: 2013-04-15 | DOI: 10.1109/ICPRIME.2013.6496716
N. Naveen, A. Annalakshmi, K. R. Valluvan
Mobile ad hoc networks rely on each node passively monitoring the data forwarding of its next hop. In practice, ad hoc networks suffer from high false-positive rates, which reduce network performance (throughput), increase overhead, and undermine the ability to mitigate attacks. We present a trust-node-valuation and path-reliability technique to strengthen intrusion detection against collusion attacks in MANETs. Node-reputation ranking is used to reduce false-positive detections, enhancing monitoring-based intrusion-detection techniques (IDTs) against the collusion risk factor. The computation of path reliability considers the number and reputation of the nodes used to compare the source and retransmitted messages. The main purpose of this technique is to select routes effectively and promptly detect colluding attacks; as a result, the total number of lost messages decreases and transmissions in ad hoc networks become more efficient.
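The abstract does not give the reliability formula. One simple way to combine node reputations into a path score is the product below; the reputation values and routes are hypothetical.

```python
from math import prod

# Hypothetical per-node reputations in [0, 1] learned from passive monitoring.
reputation = {"A": 0.95, "B": 0.90, "C": 0.40, "D": 0.85, "E": 0.92}

def path_reliability(path):
    """Score a path by the product of its nodes' reputations; a single
    low-reputation (possibly colluding) node sinks the whole path."""
    return prod(reputation[n] for n in path)

routes = [["A", "B", "D"], ["A", "C", "E"]]
best = max(routes, key=path_reliability)
print(best, f"reliability={path_reliability(best):.3f}")
```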
{"title":"Trust node valuation and path reliability technique for intrusion detection in MANET","authors":"N. Naveen, A. Annalakshmi, K. R. Valluvan","doi":"10.1109/ICPRIME.2013.6496716","DOIUrl":"https://doi.org/10.1109/ICPRIME.2013.6496716","url":null,"abstract":"Mobile Ad Hoc Networks rely on each node passively monitor the data forwarding by its next hop. Actually ad hoc network suffers from high false positives. The false positives are reducing network performance (throughput) and increase overhead and inability to mitigate effect of attacks. Trust Node Valuation and Path Reliability technique to thwart intrusion detection against collusion attacks in MANET. Node reputation ranking is made to reduce the false positive detection. This technique is used to enhance monitoring system based IDTs against collusion risk factor. The computation of path reliability considers the number and reputation of nodes for compare both source and retransmitted messages. The main purpose of this technique is work effectively to select the route and promptly detect colluding attacks. Therefore the total number of lost messages decreased and provides more efficient transmissions in ad hoc networks.","PeriodicalId":123210,"journal":{"name":"2013 International Conference on Pattern Recognition, Informatics and Mobile Engineering","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131061874","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}