RELIEF is a highly effective and extremely popular feature selection algorithm, first developed in 1992 by Kira and Rendell. Since then it has been modified and extended in various ways to make it more efficient. However, the original RELIEF and all of its extensions perform feature selection over labeled data for classification purposes. To the best of our knowledge, this paper is the first to adapt RELIEF, as RELIEF-C, to unlabeled data in order to select relevant features for clustering. We modified RELIEF to overcome its inherent difficulties in the presence of a large number of irrelevant features and/or a significant number of noisy tuples. RELIEF-C has several advantages over existing wrapper and filter feature selection methods: (a) it works well in the presence of a large number of noisy tuples, (b) it is robust even when the underlying clustering algorithm fails to cluster properly, and (c) it accurately recognizes the relevant features even in the presence of a large number of irrelevant features. We compared RELIEF-C with two established feature selection methods for clustering. RELIEF-C significantly outperforms the other methods on synthetic, benchmark, and real-world data sets, particularly when the data set contains a large number of noisy tuples and/or irrelevant features.
{"title":"RELIEF-C: Efficient Feature Selection for Clustering over Noisy Data","authors":"M. Dash, Y. Ong","doi":"10.1109/ICTAI.2011.135","DOIUrl":"https://doi.org/10.1109/ICTAI.2011.135","url":null,"abstract":"RELIEF is a very effective and extremely popular feature selection algorithm developed for the first time in 1992 by Kira and Rendell. Since then it has been modified and expanded in various ways to make it more efficient. But the original RELIEF and all of its expansions are for feature selection over labeled data for classification purposes. To the best of our knowledge, for the first time ever RELIEF is used in this paper as RELIEF-C for unlabeled data to select relevant features for clustering. We modified RELIEF so as to overcome its inherent difficulties in the presence of large number of irrelevant features and/or significant number of noisy tuples. RELIEF-C has several advantages over existing wrapper and filter feature selection methods: (a) it works well in the presence of large amount of noisy tuples, (b) it is robust even when underlying clustering algorithm fails to cluster properly, and (c) it accurately recognizes the relevant features even in the presence of large number of irrelevant features. We compared RELIEF-C with two established feature selection methods for clustering. RELIEF-C outperforms other methods significantly over synthetic, benchmark and real world data sets particularly when data set consists of large amount of noisy tuples and/or irrelevant features.","PeriodicalId":332661,"journal":{"name":"2011 IEEE 23rd International Conference on Tools with Artificial Intelligence","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114164839","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The ability to monitor and detect abnormalities accurately is important in a manufacturing process. This can be achieved by recognizing abnormalities in its control charts. This work is concerned with the classification of control chart patterns (CCPs) using a technique known as Symbolic Aggregate Approximation (SAX) and an evolutionary data mining program known as the Self-adjusting Association Rules Generator (SARG). SAX is used in preprocessing to transform CCPs, which can be considered time series, into symbolic representations. SARG is then applied to these symbolic representations to generate a classifier in the form of nested IF-THEN-ELSE rules. A more efficient nested IF-THEN-ELSE rules classifier in SARG is discovered. A systematic investigation was carried out to assess the capability of the proposed method by generating classifiers for CCP datasets with different levels of noise. CCPs were generated by a Generalized Autoregressive Conditional Heteroskedasticity (GARCH) model, where σ is the noise level parameter. Two crucial parameters in SAX are the Piecewise Aggregate Approximation and Alphabet Size values. This work identifies suitable values for both SAX parameters for SARG to generate CCP classifiers. This is the first work to generate CCP classifiers with accuracy up to 90% for σ at 13 and 95% for σ at 9.
{"title":"Capability of Classification of Control Chart Patterns Classifiers Using Symbolic Representation Preprocessing and Evolutionary Computation","authors":"K. Lavangnananda, P. Sawasdimongkol","doi":"10.1109/ICTAI.2011.178","DOIUrl":"https://doi.org/10.1109/ICTAI.2011.178","url":null,"abstract":"Ability to monitor and detect abnormalities accurately is important in a manufacturing process. This can be achieved by recognizing abnormalities in its control charts. This work is concerned with classification of control chart patterns (CCPs) by utilizing a technique known as Symbolic Aggregate Approximation (SAX) and an evolutionary based data mining program known as Self-adjusting Association Rules Generator (SARG). SAX is used in preprocessing to transform CCPs, which can be considered as time series, to symbolic representations. SARG is then applied to these symbolic representations to generate a classifier in a form of a nested IF-THEN-ELSE rules. A more efficient nested IF-THEN-ELSE rules classifier in SARG is discovered. A systematic investigation was carried out to find the capability of the proposed method. This was done by attempting to generate classifiers for CCPs datasets with different level of noises in them. CCPs were generated by Generalized Autoregressive Conditional Heteroskedasticity (GARH) Model where ó is the noise level parameter. Two crucial parameters in SAX are Piecewise Aggregate Approximation and Alphabet Size values. This work identifies suitable values for both parameters in SAX for SARG to generate CCPs classifiers. This is the first work to generate CCPs classifiers with accuracy up to 90% for ó at 13 and 95 % for ó at 9.","PeriodicalId":332661,"journal":{"name":"2011 IEEE 23rd International Conference on Tools with Artificial Intelligence","volume":"78 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124736107","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Wearable health monitoring systems (WHMS) enable ubiquitous and unobtrusive monitoring of a variety of vital signs that can be measured non-invasively. These systems have the potential to revolutionize healthcare delivery by achieving early detection of critical health changes and thus possibly even disease or hazardous event prevention. Amongst the patient populations that can greatly benefit from WHMS are Congestive Heart Failure (CHF) patients. For CHF management the detection of heart arrhythmias is of crucial importance. However, since WHMS have limited computing and storage resources, diagnostic algorithms need to be computationally inexpensive. Towards this goal, we investigate in this paper the efficiency of the Matching Pursuit algorithm in deriving compact time-frequency representations of ECG data, which can then be utilized by an Artificial Neural Network (ANN) to achieve beat classification. In order to select the most appropriate decomposition structure, we examine the effect of the type of dictionary utilized (stationary wavelets, cosine packets, wavelet packets) on deriving optimal features for classification. Our results show that by applying a greedy algorithm to determine the dictionary atoms that show the greatest correlation with the ECG morphologies, an accurate, efficient and real-time beat classification scheme can be derived. Such an algorithm can then be inexpensively run on a resource-constrained portable device such as a cell phone or even directly on a smaller microcontroller-based board. The performance of our approach is evaluated using the MIT-BIH Arrhythmia database. The provided results illustrate the accuracy of the proposed method (94.9%), which together with its simplicity (a single linear transform is required for feature extraction) justifies its use for real-time classification of abnormal heartbeats on a portable heart monitoring system.
{"title":"ECG Beat Classification Using Optimal Projections in Overcomplete Dictionaries","authors":"A. Pantelopoulos, N. Bourbakis","doi":"10.1109/ICTAI.2011.187","DOIUrl":"https://doi.org/10.1109/ICTAI.2011.187","url":null,"abstract":"Wearable health monitoring systems (WHMS) enable ubiquitous and unobtrusive monitoring of a variety of vital signs that can be measured non-invasively. These systems have the potential to revolutionize healthcare delivery by achieving early detection of critical health changes and thus possibly even disease or hazardous event prevention. Amongst the patient populations that can greatly benefit from WHMS are Congestive Heart Failure (CHF) patients. For CHF management the detection of heart arrhythmias is of crucial importance. However, since WHMS have limited computing and storage resources, diagnostic algorithms need to be computationally inexpensive. Towards this goal, we investigate in this paper the efficiency of the Matching algorithm in deriving compact time-frequency representations of ECG data, which can then be utilized from an Artificial Neural Network (ANN) to achieve beat classification. In order to select the most appropriate decomposition structure, we examine the effect of the type of dictionary utilized (stationary wavelets, cosine packets, wavelet packets) in deriving optimal features for classfication. Our results show that by applying a greedy algorithm to determine the dictionary atoms that show the greatest correlation with the ECG morphologies, an accurate, efficient and real-time beat classification scheme can be derived. Such an algorithm can then be inexpensively run on a resource-constrained portable device such as a cell phone or even directly on a smaller microcontroller-based board. The performance of our approach is evaluated using the MIT-BIH Arrhythmia database. Provided results illustrate the accuracy of the proposed method (94.9%), which together with its simplicity (a single linear transform is required for feature extraction) justify its use for real-time classification of abnormal heartbeats on a portable heart monitoring system.","PeriodicalId":332661,"journal":{"name":"2011 IEEE 23rd International Conference on Tools with Artificial Intelligence","volume":"235 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122981858","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
With the worldwide increase of Internet censorship, censorship-resistance technology has attracted more and more attention, and some well-known systems, such as Tor and JAP, have been deployed to provide a public censorship-resistance service. However, these systems all rely on dedicated infrastructure and entry points for service accessibility, and that infrastructure and those entry points may become the target of censorship attacks. In this paper, a UGC-based method is proposed (called user-generated content based covert communication, UGC3) for covert communication in a friend-to-friend (F2F) manner. It uses existing infrastructure (i.e., UGC sites) to form a fully distributed overlay network. An efficient resource discovery algorithm is proposed to negotiate the rendezvous point. Analysis shows that this method is able to circumvent Internet censorship with user repudiation and fault tolerance.
{"title":"A Covert Communication Method Based on User-Generated Content Sites","authors":"Qingfeng Tan, Peipeng Liu, Jinqiao Shi, Xiao Wang, Li Guo","doi":"10.1109/ICTAI.2011.179","DOIUrl":"https://doi.org/10.1109/ICTAI.2011.179","url":null,"abstract":"with the worldwide increasing of Internet censorship, censorship-resistance technology has attracted more and more attentions, some famous systems, such as Tor and JAP, have been deployed to provide public service for censorship-resistance. However, these systems all rely on dedicated infrastructure and entry points for service accessibility. The network infrastructure and entry points may become the target of censorship attack. In this paper, a UGC-based method is proposed (called user-generated content based covert communication, UGC3) for covert communication in a friends-to-friends (F2F) manner. It uses existing infrastructures (i.e., UGC sites ) to form a fully distributed overlay network. An efficient resource discovery algorithm is proposed to negotiate the rendezvous point. Analysis shows that this method is able to circumvent internet censorship with user repudiation and fault tolerance.","PeriodicalId":332661,"journal":{"name":"2011 IEEE 23rd International Conference on Tools with Artificial Intelligence","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122502570","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper we focus on solving Fuzzy Distributed Constraint Satisfaction Problems (Fuzzy DCSPs) with an algorithm for Naming Games (NGs): each word on which the agents have to agree is associated with a preference represented as a fuzzy score. The solution is the agreed word associated with the highest preference value. The two main features that distinguish this methodology from Fuzzy DCSP methods are that the system can react to small instance changes and that it does not require a pre-agreed agent/variable ordering.
{"title":"Solving Fuzzy DCSPs with Naming Games","authors":"Stefano Bistarelli, Giorgio Gosti, Francesco Santini","doi":"10.1109/ICTAI.2011.159","DOIUrl":"https://doi.org/10.1109/ICTAI.2011.159","url":null,"abstract":"In this paper we focus on solving Fuzzy Distributes Constraint Satisfaction Problems (Fuzzy DCSPs) with an algorithm for Naming Games (NGs): each word on which the agents have to agree on is associated with a preference represented as a fuzzy score. The solution is the agreed word associated with the highest preference value. The two main features that distinguish this methodology from Fuzzy DCSPs methods are that the system can react to small instance changes and and it does not require pre-agreed agent/variable ordering.","PeriodicalId":332661,"journal":{"name":"2011 IEEE 23rd International Conference on Tools with Artificial Intelligence","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125169396","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In many real-life applications, the available source training information is either too small or not representative enough of the underlying target test problem. In the past few years, a new line of machine learning research, called Domain Adaptation (DA), has been developed to overcome such awkward situations, giving rise to many adaptation algorithms and theoretical results in the form of generalization bounds. In this paper, a novel contribution is proposed in the form of a DA algorithm dealing with string-structured data, inspired by the DA support vector machine (SVM) technique introduced in [Bruzzone et al., PAMI 2010]. To ensure the convergence of SVM-based learning, the similarity functions involved in the process must be valid kernels, i.e. positive semi-definite (PSD) and symmetric. However, in the string-based context that we consider in this paper, this condition is often not satisfied. Indeed, it has been proven that most string similarity functions based on the edit distance are not PSD. To overcome this drawback, we make use of the new theory of learning with good similarity functions introduced by Balcan et al., which (i) does not require a valid kernel to learn well and (ii) allows us to induce sparser models. We take advantage of this theoretical framework to propose a new DA algorithm using good edit similarity functions. Using a suitable string representation of handwritten digits, we show that our new algorithm is very efficient in dealing with the scaling and rotation problems usually encountered in image classification.
{"title":"Domain Adaptation with Good Edit Similarities: A Sparse Way to Deal with Scaling and Rotation Problems in Image Classification","authors":"Amaury Habrard, Jean-Philippe Peyrache, M. Sebban","doi":"10.1109/ICTAI.2011.35","DOIUrl":"https://doi.org/10.1109/ICTAI.2011.35","url":null,"abstract":"In many real-life applications, the available source training information is either too small or not representative enough of the underlying target test problem. In the past few years, a new line of machine learning research has been developed to overcome such awkward situations, called Domain Adaptation (DA), giving rise to many adaptation algorithms and theoretical results in the form of generalization bounds. In this paper, a novel contribution is proposed in the form of a DA algorithm dealing with string-structured data, inspired from the DA support vector machine (SVM) technique introduced in [Bruzzone et al, PAMI 2010]. To ensure the convergence of SVM-based learning, the similarity functions involved in the process must be valid kernels, i.e. positive semi-definite (PSD) and symmetric. However, in the string-based context that we are considering in this paper, this condition is often not satisfied. Indeed, it has been proven that most string similarity functions based on the edit distance are not PSD. To overcome this drawback, we make use in this paper of the new theory of learning with good similarity functions introduced by Balcan et al., which (i) does not require the use of a valid kernel to learn well and (ii) allows us to induce sparser models. We take advantage of this theoretical framework to propose a new DA algorithm using good edit similarity functions. Using a suitable string-representation of handwritten digits, we show that are our new algorithm is very efficient to deal with the scaling and rotation problems usually encountered in image classification.","PeriodicalId":332661,"journal":{"name":"2011 IEEE 23rd International Conference on Tools with Artificial Intelligence","volume":"66 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129486663","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
People with spinal cord injury (SCI) are at risk for pressure ulcers because of their poor motor function and consequent prolonged sitting in wheelchairs. Current clinical practice typically uses wheelchair tilt and recline to attain specific seating angles (sitting postures) that reduce seating pressure in order to prevent pressure ulcers. The rationale is to allow the development of reactive hyperemia to re-perfuse the ischemic tissues. However, our study reveals that a particular tilt and recline setting may result in a significant increase of skin perfusion for one person with SCI, but may have a neutral or even negative effect on another person. Therefore, individualized guidance on wheelchair tilt and recline usage is desirable for people with various levels of SCI. In this study, we intend to demonstrate the feasibility of using machine-learning techniques to classify and predict favorable wheelchair tilt and recline settings for individual wheelchair users with SCI. Specifically, we use artificial neural networks (ANNs) to classify whether a given tilt and recline setting would cause a positive, neutral, or negative skin perfusion response. The challenge, however, is that an ANN is prone to overfitting, a situation in which the ANN can perfectly classify the existing data but cannot correctly classify new (unseen) data. We investigate using a genetic algorithm (GA) to train the ANN, reducing the chance of converging on local optima and improving the capability to generalize to unseen data. Our experimental results indicate that the GA-based ANN significantly improves the generalization ability and outperforms the traditional statistical approach and other commonly used classification techniques, such as the BP-based ANN and the support vector machine (SVM). To the best of our knowledge, no such intelligent systems are available now. Our research fills this gap in the existing evidence.
{"title":"Using Artificial Neural Network to Determine Favorable Wheelchair Tilt and Recline Usage in People with Spinal Cord Injury: Training ANN with Genetic Algorithm to Improve Generalization","authors":"Jicheng Fu, Jerrad Genson, Yih-Kuen Jan, Maria Jones","doi":"10.1109/ICTAI.2011.13","DOIUrl":"https://doi.org/10.1109/ICTAI.2011.13","url":null,"abstract":"People with spinal cord injury (SCI) are at risk for pressure ulcers because of their poor motor function and consequent prolonged sitting in wheelchairs. The current clinical practice typically uses the wheelchair tilt and recline to attain specific seating angles (sitting postures) to reduce seating pressure in order to prevent pressure ulcers. The rationale is to allow the development of reactive hyperemia to re-perfuse the ischemic tissues. However, our study reveals that a particular tilt and recline setting may result in a significant increase of skin perfusion for one person with SCI, but may cause neutral or even negative effect on another person. Therefore, an individualized guidance on wheelchair tilt and recline usage is desirable in people with various levels of SCI. In this study, we intend to demonstrate the feasibility of using machine-learning techniques to classify and predict favorable wheelchair tilt and recline settings for individual wheelchair users with SCI. Specifically, we use artificial neural networks (ANNs) to classify whether a given tilt and recline setting would cause a positive, neutral, or negative skin perfusion response. The challenge, however, is that ANN is prone to over fitting, a situation in which ANN can perfectly classify the existing data while cannot correctly classify new (unseen) data. We investigate using the genetic algorithm (GA) to train ANN to reduce the chance of converging on local optima and improve the generalization capability of classifying unseen data. Our experimental results indicate that the GA-based ANN significantly improves the generalization ability and outperforms the traditional statistical approach and other commonly used classification techniques, such as BP-based ANN and support vector machine (SVM). To the best of our knowledge, there are no such intelligent systems available now. Our research fills in the gap in existing evidence.","PeriodicalId":332661,"journal":{"name":"2011 IEEE 23rd International Conference on Tools with Artificial Intelligence","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129637572","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Personalized systems are a response to the increasing number of resources on the Internet. In order to facilitate their design and creation, we aim at formalizing them. In this paper, we consider the relationship between a personalized application and its non-personalized counterpart. We argue that a personalized application is a formal extension of a non-personalized one, and we aim at characterizing the syntactic differences between the expressions of the personalized and non-personalized versions of the application. Situation calculus is our framework for formalizing applications. We introduce two scenarios of non-personalized applications, which we personalize to illustrate our approach.
{"title":"A Formal Approach to Personalization","authors":"G. Dubus, Fabrice Popineau, Yolaine Bourda","doi":"10.1109/ICTAI.2011.43","DOIUrl":"https://doi.org/10.1109/ICTAI.2011.43","url":null,"abstract":"Personalized systems are a response to the increasing number of resources on the Internet. In order to facilitate their design and creation, we aim at formalizing them. In this paper, we consider the relationship between a personalized application and its non-personalized counterpart. We argue that a personalized application is a formal extension of a non-personalized one. We aim at characterizing the syntactic differences between the expression of the personalized and non-personalized versions of the application. Situation calculus is our framework to formalize applications. We introduce two scenarios of non-personalized application that we personalize to illustrate our approach.","PeriodicalId":332661,"journal":{"name":"2011 IEEE 23rd International Conference on Tools with Artificial Intelligence","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128841814","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper describes methods for controlling the routing points of VLAN domains using binary particle swarm optimization (BPSO) and angle modulated particle swarm optimization (AMPSO). Virtual LAN (VLAN) is a technique for virtualizing the data link layer (L2) that can construct arbitrary logical networks on top of a physical network. However, VLAN often causes much redundant traffic due to inappropriate deployment of network-layer (L3) routing capabilities in VLAN networks. We propose two methods using BPSO and AMPSO, and show that they can adaptively select the routing points dynamically in accordance with the observed traffic patterns and thus reduce the redundant traffic. The convergence features are compared with those of the conventional method on the basis of a statistical method. We also show that the scalability of the algorithm using AMPSO is high, so we can expect it to be applicable to practical large VLAN environments.
{"title":"Adaptive Routing Point Control in Virtualized Local Area Networks Using Particle Swarm Optimizations","authors":"Kensuke Takahashi, Toshio Hirotsu, T. Sugawara","doi":"10.1109/ICTAI.2011.59","DOIUrl":"https://doi.org/10.1109/ICTAI.2011.59","url":null,"abstract":"This paper describes methods for controlling routing points of VLAN domains using binary particle swarm optimization (BPSO) and angle modulated particle swarm optimization (AMPSO). Virtual LAN (VLAN) is a technique for virtualizing data link layer (or L2) and can construct arbitrary logical networks on top of a physical network. However, VLAN often causes much redundant traffic due to inappropriate deployments of network-layer (L3) routing capabilities in VLAN networks. We propose two methods using BPSO and AMPSO, and show that they can adaptively select the routing points dynamically in accordance with the observed traffic patterns and thus reduce the redundant traffic. The convergence features are compared with those of the conventional method on the basis of a statistical method. Then we also show that the scalability of the algorithm using AMPOS is high and thus we can expect that it is applicable to practical large VLAN environments.","PeriodicalId":332661,"journal":{"name":"2011 IEEE 23rd International Conference on Tools with Artificial Intelligence","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115815909","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Trajectory data streams are huge amounts of data pertaining to the time and position of moving objects. They are continuously generated by different sources exploiting a wide variety of technologies (e.g., RFID tags, GPS, GSM networks). Mining such amounts of data is challenging, since the possibility to extract useful information from this peculiar kind of data is crucial in many application scenarios such as vehicle traffic management, hand-off in cellular networks, and supply chain management. Moreover, spatial data poses interesting challenges both for its proper definition and acquisition, thus making the mining process harder than for classical point data. In this paper, we address the problem of trajectory data outlier detection, which proved really challenging as we deal with data (trajectories) for which the order of elements is relevant. We propose a complete framework starting from the data preparation task, which allows us to make the mining step quite effective. Since the validation of data mining approaches has to be experimental, we performed several tests on real-world datasets that confirmed the efficiency and effectiveness of the proposed technique.
{"title":"Trajectory Outlier Detection Using an Analytical Approach","authors":"E. Masciari","doi":"10.1109/ICTAI.2011.62","DOIUrl":"https://doi.org/10.1109/ICTAI.2011.62","url":null,"abstract":"Trajectory data streams are huge amounts of data pertaining to time and position of moving objects. They are continuously generated by different sources exploiting a wide variety of technologies (e.g., RFID tags, GPS, GSM networks). Mining such amounts of data is challenging, since the possibility to extract useful information from this peculiar kind of data is crucial in many application scenarios such as vehicle traffic management, hand-off in cellular networks, supply chain management. Moreover, spatial data poses interesting challenges both for their proper definition and acquisition, thus making the mining process harder than for classical point data. In this paper, we address the problem of trajectory data outlier detection, that revealed really challenging as we deal with data (trajectories) for which the order of elements is relevant. We propose a complete framework starting from data preparation task that allows us to make the mining step quite effective. Since the validation of data mining approaches has to be experimental we performed several tests on real world datasets that confirmed the efficiency and effectiveness of the proposed technique.","PeriodicalId":332661,"journal":{"name":"2011 IEEE 23rd International Conference on Tools with Artificial Intelligence","volume":"329 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115975146","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}