Hybrid Genetic Algorithm and Learning Vector Quantization Modeling for Cost-Sensitive Bankruptcy Prediction
Ning Chen, B. Ribeiro, Armando Vieira, João M. M. Duarte, J. C. Neves (DOI: 10.1109/ICMLC.2010.29)

Cost-sensitive classification algorithms that enable effective prediction when the costs of misclassification differ greatly are crucial to creditors and auditors in credit risk analysis. Learning vector quantization (LVQ) is a powerful tool for solving the bankruptcy prediction problem as a classification task. The genetic algorithm (GA) is widely applied in conjunction with artificial intelligence methods, and its hybridization with existing classification algorithms is well illustrated in the field of bankruptcy prediction. In this paper, a hybrid GA and LVQ approach is proposed to minimize the expected misclassification cost under an asymmetric cost preference. Experiments on real-life French private company data show that the proposed approach improves predictive performance in the asymmetric cost setting.
Association Rule for Classification of Type-2 Diabetic Patients
B. Patil, R. C. Joshi, Durga Toshniwal (DOI: 10.1109/ICMLC.2010.67)

The discovery of knowledge from medical databases is important for making effective medical diagnoses. The aim of data mining is to extract information from a database and generate a clear, understandable description of patterns. In this study we introduce a new approach to generating association rules on numeric data. We propose a modified equal-width binning approach to discretizing continuous-valued attributes, where the approximate width of the desired intervals is chosen based on the opinion of a medical expert and provided as an input parameter to the model. First, numeric attributes are converted into categorical form using this technique. The Apriori algorithm, usually applied to market basket analysis, is then used to generate rules on the Pima Indian diabetes data set, taken from the UCI machine learning repository and containing 768 instances with 8 numeric attributes. We find that the often-neglected pre-processing steps in knowledge discovery are the most critical elements in determining the success of a data mining application. Finally, we generate association rules that identify general associations in the data and clarify the relationship between the measured fields and whether the patient goes on to develop diabetes. We present a step-by-step approach to help physicians explore their data and better understand the discovered rules.
An Investigation on Linear SVM and its Variants for Text Categorization
M. A. Kumar, M. Gopal (DOI: 10.1109/ICMLC.2010.64)

Linear support vector machines (SVMs) have been used successfully to classify text documents into sets of concepts. With an increasing number of linear SVM formulations and decomposition algorithms publicly available, this paper studies their efficiency and efficacy for text categorization tasks. Eight publicly available implementations are investigated in terms of break-even point (BEP), F1 measure, ROC plots, learning speed, and sensitivity to the penalty parameter, based on experimental results on two benchmark text corpora. The results show that, of the eight implementations, SVMlin and proximal SVM perform better in terms of consistent performance and reduced training time. Being an extremely simple algorithm whose training time is independent of the penalty parameter and of the category being trained, proximal SVM is particularly appealing. We further investigated fuzzy proximal SVM on both corpora; it showed improved generalization over proximal SVM.
Parallelism through dynamic instrumentation at runtime
Raj Yadav, Mankawal Deep Singh, Neha Mahajan (DOI: 10.1109/ICMLC.2010.58)

This paper presents a novel approach to achieving parallelism on multi-core systems from legacy software without recompilation. A profiler tool can be extended from merely identifying bottleneck areas to analyzing the instructions within them. Because the instructions, along with all data dependencies, are available in the running program, heuristics can be applied to detect candidates for instruction-level parallelism. Serial regions can be regenerated into parallel regions for multiple cores using predefined OpenMP calls and instrumented dynamically at runtime. We discuss two problems: 1) identifying parallelizable regions in serial code, and 2) a detailed approach to code generation at runtime.
Study of Energy Efficient, Power Aware Routing Algorithm and Their Applications
A. A., G. Sakthidharan, Kanchan M. Miskin (DOI: 10.1109/ICMLC.2010.44)

Routing is the process of moving packets through an internetwork, such as the Internet. It consists of two separate but related tasks: i) defining and selecting paths in the network, and ii) forwarding packets along the defined paths from a designated source node to a designated destination node. With advances in wireless communication technology, small, high-performance computing and communication devices such as laptops and personal computers are increasingly used in convention centers, conferences, and electronic classrooms. In wireless ad-hoc networks, a collection of nodes with wireless communication and networking capability communicate with each other without the aid of any centralized administrator. The nodes are powered by batteries with limited energy reserves that are difficult to recharge or replace, so energy conservation is essential. An energy-efficient routing protocol (EERP) balances node energy utilization to reduce energy consumption and prolong node lifetimes, thereby increasing network lifetime, reducing routing delay, and improving the reliability of packet delivery. Wireless networks have no fixed communication infrastructure, and for an active connection both the end hosts and the intermediate nodes can be mobile, so routes are subject to frequent disconnection. In such an environment it is important to minimize disruptions caused by the changing topology for applications using voice and video. Power-aware routing enables nodes to detect misbehavior, such as deviation from regular routing and forwarding, by observing node status. By exploiting the non-random mobility patterns that mobile users exhibit, the state of the network topology can be predicted and route reconstruction performed proactively in a timely manner. In this paper we propose an energy-efficient, power-aware routing algorithm that integrates energy efficiency with power-awareness parameters for packet routing.
Detecting the Number of Clusters during Expectation-Maximization Clustering Using Information Criterion
U. Gupta, Vinay Menon, Uday Babbar (DOI: 10.1109/ICMLC.2010.47)

This paper presents an algorithm to automatically determine the number of clusters in a given input data set, under a mixture-of-Gaussians assumption. Our algorithm extends the Expectation-Maximization clustering approach by starting with a single-cluster assumption for the data and recursively splitting one of the clusters in order to find a tighter fit. An information criterion parameter is used to select between the current and previous model after each split. We build this approach upon prior work done on both the K-Means and Expectation-Maximization algorithms. We also present a novel idea for intelligent cluster splitting which minimizes convergence time and substantially improves accuracy.
Statistical Feature Extraction for Classification of Image Spam Using Artificial Neural Networks
M. Soranamageswari, C. Meena (DOI: 10.1109/ICMLC.2010.72)

As the use of electronic mail continues to grow, so does unsolicited bulk email, which occupies server storage space and consumes large amounts of network bandwidth. To overcome this serious problem, anti-spam filters have become a common component of Internet security. Image spam is a recent email spamming technique in which the text is embedded in image or picture files, and identifying and preventing it is one of the top challenges on the Internet. Many approaches for identifying image spam have been established in the literature, and the artificial neural network is an effective method for classifying the extracted features. In this paper we present an experimental system for the classification of image spam that considers the statistical image feature histogram and the mean value of image blocks. A comparative study of image classification based on the color histogram and the block mean value is presented. The experimental results show the performance of the proposed system, which achieves its best results with a minimal false-positive rate.
A Novel Approach Using Active Contour Model for Semi-Automatic Road Extraction from High Resolution Satellite Imagery
Anil P.N., S. Natarajan (DOI: 10.1109/ICMLC.2010.36)

Road extraction from satellite imagery is of fundamental importance in the context of spatial data capture and updating for GIS applications. Fully automatic feature extraction is difficult due to the increasing complexity of objects, so this paper proposes a semi-automatic road extraction methodology for high-resolution satellite imagery using the active contour model (snakes). First, the image is preprocessed using a relaxed median filter. Next, the user inputs initial seed points on the road to be extracted. The road segment is then extracted using the active contour model. The method is tested on high-resolution satellite imagery and the results are presented in the paper.
Using Abstract Information and Community Alignment Information for Link Prediction
Mrinmaya Sachan, R. Ichise (DOI: 10.1109/ICMLC.2010.25)

Although there have been many recent studies of link prediction in co-authorship networks, few have tried to utilize the semantic information hidden in the abstracts of research documents. We propose to build a link predictor in a co-authorship network where nodes represent researchers and links represent co-authorship. In this method, we use the structure of the constructed graph and propose to add a semantic approach using abstract information, research titles, and event information to improve the accuracy of the predictor. Secondly, we make use of the fact that researchers tend to work in close-knit communities: knowing that a pair of researchers lies in the same dense community can improve the accuracy of our predictor further. Finally, we test our hypothesis on the DBLP database in reasonable time by under-sampling and balancing the data set using decision trees and the SMOTE technique.
Premptive Job Scheduling with Priorities and Starvation cum Congestion Avoidance in Clusters
M. Balajee, B. Suresh, M. Suneetha, V. Rani, G. Veerraju (DOI: 10.1109/ICMLC.2010.60)

This paper describes a new policy for scheduling parallel jobs on clusters that may be part of a computational grid. The algorithm proposes three job queues, and in each cluster a number of resources is assigned to each queue. The first queue holds jobs with a low expected execution time (EET), the second holds jobs with a high expected execution time, and the third holds jobs that are part of a meta-job from the computational grid. In the first queue there is no chance of starvation, but in the second there is, so the algorithm applies an aging technique to the waiting low-priority jobs. The third queue is fully dedicated to executing parts of meta-jobs. We thus maintain multiple job queues that effectively separate jobs according to their projected execution time, for local jobs and for parts of meta-jobs, and preempt jobs by applying the aging technique. Unnecessary network congestion can also be avoided by comparing a job's expected execution time with the total time for submitting the job and receiving the results from remote nodes.