Pub Date: 2014-04-01 | DOI: 10.4018/ijncr.2014040102
J. L. Guerrero, A. Berlanga, J. M. López
Diversity in evolutionary algorithms is a critical issue, strongly tied to the performance obtained during the search process and to convergence. A lack of the required diversity has traditionally been linked to problematic situations such as early stopping in the presence of local optima, usually faced when the number of individuals in the population is insufficient to cover the search space. The current proposal introduces a guided mutation operator to cope with these diversity issues, along with tracking mechanisms over the search space that feed the required information to this operator. The objective of the proposed mutation operator is to guarantee a certain degree of coverage of the search space before the algorithm stops, attempting to prevent the early convergence that a lack of population diversity may introduce. A dynamic mechanism determines, at execution time, the degree of application of the technique, adapting the number of cycles in which it is applied. The results have been tested over a dataset of ten standard single-objective functions with different characteristics regarding dimensionality, presence of multiple local optima and search space range, at three different dimensionality values: 30D, 300D and 1000D. Thirty independent runs were performed to capture the effect of the introduced operator and establish the statistical relevance of the measured results.
{"title":"A Guided Mutation Operator for Dynamic Diversity Enhancement in Evolutionary Strategies","authors":"J. L. Guerrero, A. Berlanga, J. M. López","doi":"10.4018/ijncr.2014040102","DOIUrl":"https://doi.org/10.4018/ijncr.2014040102","url":null,"abstract":"Diversity in evolutionary algorithms is a critical issue related to the performance obtained during the search process and strongly linked to convergence issues. The lack of the required diversity has been traditionally linked to problematic situations such as early stopping in the presence of local optima (usually faced when the number of individuals in the population is insufficient to deal with the search space). Current proposal introduces a guided mutation operator to cope with these diversity issues, introducing tracking mechanisms of the search space in order to feed the required information to this mutation operator. The objective of the proposed mutation operator is to guarantee a certain degree of coverage over the search space before the algorithm is stopped, attempting to prevent early convergence, which may be introduced by the lack of population diversity. A dynamic mechanism is included in order to determine, in execution time, the degree of application of the technique, adapting the number of cycles when the technique is applied. The results have been tested over a dataset of ten standard single objective functions with different characteristics regarding dimensionality, presence of multiple local optima, search space range and three different dimensionality values, 30D, 300D and 1000D. Thirty different runs have been performed in order to cover the effect of the introduced operator and the statistical relevance of the measured results","PeriodicalId":369881,"journal":{"name":"Int. J. Nat. Comput. Res.","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125936003","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
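The coverage-tracking idea behind such an operator can be sketched as follows (a minimal Python illustration; the class and function names, the per-dimension binning scheme and the mutation rate are our assumptions, not the authors' exact formulation): the tracker counts visits per region of each dimension, and the mutation resamples genes inside the least-visited region.

```python
import random

class CoverageTracker:
    """Records which regions of each dimension the population has visited."""
    def __init__(self, low, high, n_bins=10, dims=2):
        self.low, self.high, self.n_bins = low, high, n_bins
        self.counts = [[0] * n_bins for _ in range(dims)]

    def record(self, individual):
        for d, x in enumerate(individual):
            span = self.high - self.low
            b = min(int((x - self.low) / span * self.n_bins), self.n_bins - 1)
            self.counts[d][b] += 1

    def least_visited_bin(self, d):
        return min(range(self.n_bins), key=lambda b: self.counts[d][b])

def guided_mutation(individual, tracker, rate=0.2):
    """With probability `rate`, resample a gene inside the least-visited bin."""
    child = list(individual)
    width = (tracker.high - tracker.low) / tracker.n_bins
    for d in range(len(child)):
        if random.random() < rate:
            b = tracker.least_visited_bin(d)
            child[d] = tracker.low + (b + random.random()) * width
    return child
```

Feeding every evaluated individual through `record` each generation makes the operator continually pull offspring toward the regions least explored so far.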
Pub Date: 2014-04-01 | DOI: 10.4018/ijncr.2014040104
Fernando Silva, P. Urbano, A. Christensen
The authors propose and evaluate a novel approach to the online synthesis of neural controllers for autonomous robots, combining online evolution of weights and network topology with neuromodulated learning. They demonstrate the method through a series of simulation-based experiments in which an e-puck-like robot must perform a dynamic concurrent foraging task, where scattered food items periodically change their nutritive value or become poisonous. The authors demonstrate that the online evolutionary process, both with and without neuromodulation, is capable of generating controllers well adapted to the periodic task changes, and show that when neuromodulated learning is combined with evolution, neural controllers are synthesised faster than by evolution alone. An analysis of the evolved solutions reveals that neuromodulation allows for a more effective expression of a given topology's potential due to the active modification of internal dynamics: neuromodulated networks learn abstractions of the task and different modes of operation that are triggered by external stimuli.
{"title":"Online Evolution of Adaptive Robot Behaviour","authors":"Fernando Silva, P. Urbano, A. Christensen","doi":"10.4018/ijncr.2014040104","DOIUrl":"https://doi.org/10.4018/ijncr.2014040104","url":null,"abstract":"The authors propose and evaluate a novel approach to the online synthesis of neural controllers for autonomous robots. The authors combine online evolution of weights and network topology with neuromodulated learning. The authors demonstrate our method through a series of simulation-based experiments in which an e-puck-like robot must perform a dynamic concurrent foraging task. In this task, scattered food items periodically change their nutritive value or become poisonous. The authors demonstrate that the online evolutionary process, both with and without neuromodulation, is capable of generating controllers well adapted to the periodic task changes. The authors show that when neuromodulated learning is combined with evolution, neural controllers are synthesised faster than by evolution alone. An analysis of the evolved solutions reveals that neuromodulation allows for a more effective expression of a given topology's potential due to the active modification of internal dynamics. Neuromodulated networks learn abstractions of the task and different modes of operation that are triggered by external stimulus.","PeriodicalId":369881,"journal":{"name":"Int. J. Nat. Comput. Res.","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128101421","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
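The role neuromodulation plays in such controllers can be illustrated with a minimal sketch (an assumption-level illustration, not the authors' actual network): a modulatory signal gates a Hebbian weight update, so learning can be switched on, off, or even reversed depending on external stimuli.

```python
def neuromodulated_update(weight, pre, post, modulation, eta=0.1):
    """Hebbian update scaled by a modulatory signal m.

    m > 0 reinforces the pre/post correlation, m < 0 weakens it,
    and m == 0 freezes the synapse entirely.
    """
    return weight + eta * modulation * pre * post

w = 0.5
# Positive modulation: correlated activity strengthens the synapse.
w = neuromodulated_update(w, pre=1.0, post=1.0, modulation=1.0)   # -> 0.6
# Zero modulation: the same activity leaves the weight untouched.
w = neuromodulated_update(w, pre=1.0, post=1.0, modulation=0.0)   # still 0.6
```

This gating is what lets a fixed topology express several modes of operation: the modulatory signal decides when the internal dynamics are allowed to change.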
Estimation of Distribution Algorithms (EDAs) have proven themselves an efficient alternative to Genetic Algorithms when solving nearly decomposable optimization problems. In general, EDAs substitute genetic operators with probabilistic sampling, enabling better use of the information provided by the population and, consequently, a more efficient search. In this paper the authors exploit EDAs' probabilistic models from a different point of view: they argue that by looking for substructures in the probabilistic models it is possible to decompose a black-box optimization problem and solve it in a more straightforward way. Relying on the Building-Block hypothesis and the concept of near-decomposability, their decompositional approach is implemented as a two-step method: 1) the current population is modeled by a Bayesian network, which is then decomposed into substructures (communities) using a version of the Fast Newman algorithm; 2) since the identified communities can be seen as sub-problems, they are solved separately and used to compose a solution for the original problem. The experiments showed both strengths and limitations of the proposed method, but in some of the tested scenarios the authors' method outperformed the Bayesian Optimization Algorithm, requiring up to 78% fewer fitness evaluations and running 30 times faster.
{"title":"Decomposition of Black-Box Optimization Problems by Community Detection in Bayesian Networks","authors":"M. K. Crocomo, J. P. Martins, A. Delbem","doi":"10.4018/jncr.2012100101","DOIUrl":"https://doi.org/10.4018/jncr.2012100101","url":null,"abstract":"Estimation of Distribution Algorithms (EDAs) have proved themselves as an efficient alternative to Genetic Algorithms when solving nearly decomposable optimization problems. In general, EDAs substitute genetic operators by probabilistic sampling, enabling a better use of the information provided by the population and, consequently, a more efficient search. In this paper the authors exploit EDAs' probabilistic models from a different point-of-view, the authors argue that by looking for substructures in the probabilistic models it is possible to decompose a black-box optimization problem and solve it in a more straightforward way. Relying on the Building-Block hypothesis and the nearly-decomposability concept, their decompositional approach is implemented by a two-step method: 1) the current population is modeled by a Bayesian network, which is further decomposed into substructures (communities) using a version of the Fast Newman Algorithm. 2) Since the identified communities can be seen as sub-problems, they are solved separately and used to compose a solution for the original problem. The experiments showed strengths and limitations for the proposed method, but for some of the tested scenarios the authors’ method outperformed the Bayesian Optimization Algorithm by requiring up to 78% fewer fitness evaluations and being 30 times faster.","PeriodicalId":369881,"journal":{"name":"Int. J. Nat. Comput. Res.","volume":"76 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122894950","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
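The decompose-then-solve idea can be sketched under a strong simplification (our assumptions: connected components of the learned dependency graph stand in for Fast Newman communities, and exhaustive search stands in for the sub-solvers; all names are illustrative):

```python
from itertools import product

def components(n_vars, edges):
    """Connected components of an undirected variable-dependency graph."""
    parent = list(range(n_vars))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for a, b in edges:
        parent[find(a)] = find(b)
    groups = {}
    for v in range(n_vars):
        groups.setdefault(find(v), []).append(v)
    return list(groups.values())

def solve_decomposed(n_vars, edges, fitness):
    """Optimize each component independently, then compose a full solution.

    `fitness(comp, bits)` scores an assignment `bits` to the variables in
    `comp`; this only works when the problem really decomposes this way.
    """
    solution = [0] * n_vars
    for comp in components(n_vars, edges):
        best = max(product([0, 1], repeat=len(comp)),
                   key=lambda bits: fitness(comp, bits))
        for v, bit in zip(comp, best):
            solution[v] = bit
    return solution
```

Exhaustive search is exponential only in the size of each community, not in the full problem, which is the source of the savings the decomposition aims for.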
This paper presents a new procedure for nondominated sorting with constraint handling, to be used in a multiobjective evolutionary algorithm. The strategy uses a sorting algorithm and binary search to classify the solutions into the correct level of the Pareto front. In a problem with M objective functions and N solutions in the population, the original nondominated sorting algorithm used by NSGA-II always has a computational cost of O(MN²) in a naïve implementation, whereas the complexity of the new algorithm varies between a best case below this bound and a worst case matching it. An experiment was executed to compare the new algorithm with the original and with another improved version of Deb's algorithm. Results reveal that the new strategy is much better than the other versions when there are many levels in the Pareto front. It is also concluded that it is worthwhile to alternate between the new algorithm and the improved version of Deb's algorithm during the evolution of the evolutionary algorithm.
{"title":"An Improved Nondominated Sorting Algorithm","authors":"A. R. Cruz","doi":"10.4018/jncr.2012100102","DOIUrl":"https://doi.org/10.4018/jncr.2012100102","url":null,"abstract":"This paper presents a new procedure for the nondominated sorting with constraint handling to be used in a multiobjective evolutionary algorithm. The strategy uses a sorting algorithm and binary search to classify the solutions in the correct level of the Pareto front. In a problem with objective functions, using solutions in the population, the original nondominated sorting algorithm, used by NSGA-II, has always a computational cost of in a naA¯ve implementation. The complexity of the new algorithm can vary from in the best case and in the worst case. A experiment was executed in order to compare the new algorithm with the original and another improved version of the Deb’s algorithm. Results reveal that the new strategy is much better than other versions when there are many levels in Pareto front. It is also concluded that is interesting to alternate the new algorithm and the improved Deb’s version during the evolution of the evolutionary algorithm.","PeriodicalId":369881,"journal":{"name":"Int. J. Nat. Comput. Res.","volume":"74 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129673860","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
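For reference, the O(MN²) baseline being improved, the fast nondominated sort used by NSGA-II, can be written as follows (minimization assumed): each solution tracks whom it dominates and by how many solutions it is dominated, and fronts are peeled off one at a time.

```python
def dominates(p, q):
    """p dominates q: no worse in every objective, strictly better in one."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def nondominated_sort(points):
    """Return Pareto fronts as lists of indices, best front first."""
    n = len(points)
    dominated = [[] for _ in range(n)]  # indices each solution dominates
    counts = [0] * n                    # how many solutions dominate i
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if dominates(points[i], points[j]):
                dominated[i].append(j)
            elif dominates(points[j], points[i]):
                counts[i] += 1
    fronts = [[i for i in range(n) if counts[i] == 0]]
    while fronts[-1]:
        nxt = []
        for i in fronts[-1]:
            for j in dominated[i]:
                counts[j] -= 1
                if counts[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
    return fronts[:-1]  # drop the trailing empty front
```

The pairwise dominance check is the O(MN²) term; the paper's sort-and-binary-search strategy attacks exactly this step.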
Daniel Victor de Lucena, T. W. Lima, A. S. Soares, C. Coelho
This paper proposes a multiobjective formulation for variable selection in multivariate calibration problems, in order to improve the generalization ability of the calibration model. The authors apply the proposed formulation in the multiobjective genetic algorithm NSGA-II. The formulation consists of two conflicting objectives: minimize the prediction error and minimize the number of variables selected for multiple linear regression. These objectives conflict because, as the number of variables is reduced, the prediction error increases. As a case study, the authors use a wheat data set obtained by NIR spectrometry, with the objective of determining a variable subgroup carrying information about protein concentration. Results of traditional multivariate calibration techniques, namely partial least squares and the successive projections algorithm for multiple linear regression, are presented for comparison. The obtained results show that the proposed approach outperforms both a mono-objective evolutionary algorithm and the traditional multivariate calibration techniques.
{"title":"Multi-Objective Evolutionary Algorithm NSGA-II for Variables Selection in Multivariate Calibration Problems","authors":"Daniel Victor de Lucena, T. W. Lima, A. S. Soares, C. Coelho","doi":"10.4018/jncr.2012100103","DOIUrl":"https://doi.org/10.4018/jncr.2012100103","url":null,"abstract":"This paper proposes a multiobjective formulation for variable selection in multivariate calibration problems in order to improve the generalization ability of the calibration model. The authors applied this proposed formulation in the multiobjective genetic algorithm NSGA-II. The formulation consists in two conflicting objectives: minimize the prediction error and minimize the number of selected variables for multiple linear regression. These objectives are conflicting because, when the number of variables is reduced the prediction error increases. As study of case is used the wheat data set obtained by NIR spectrometry with the objective for determining a variable subgroup with information about protein concentration. The results of traditional techniques of multivariate calibration as the partial least square and successive projection algorithm for multiple linear regression are presented for comparisons. The obtained results showed that the proposed approach obtained better results when compared with a mono-objective evolutionary algorithm and with traditional techniques of multivariate calibration.","PeriodicalId":369881,"journal":{"name":"Int. J. Nat. Comput. Res.","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123907794","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A number of current applications, such as spam identification and intrusion detection, require algorithms able to extract a model from one-class data and classify unseen data as self or non-self in a novelty detection scenario. In this paper the authors focus on keystroke dynamics, which analyses the user's typing rhythm to improve the reliability of the user authentication process. However, several different features may be extracted from the typing data, making it difficult to define the feature vector. This problem is even more critical in a novelty detection scenario, in which data from the negative class is not available. Based on a review of keystroke dynamics, this work surveyed the most commonly used features and evaluated which ones are more significant for differentiating one user from another. To perform this evaluation, the authors tested the impact of each feature set on two benchmark databases, applying bio-inspired algorithms based on neural networks and artificial immune systems.
{"title":"Comparison of Feature Vectors in Keystroke Dynamics: A Novelty Detection Approach","authors":"P. Pisani, Ana Carolina Lorena","doi":"10.4018/jncr.2012100104","DOIUrl":"https://doi.org/10.4018/jncr.2012100104","url":null,"abstract":"A number of current applications require algorithms able to extract a model from one-class data and classify unseen data as self or non-self in a novelty detection scenario, such as spam identification and intrusion detection. In this paper the authors focus on keystroke dynamics, which analyses the user typing rhythm to improve the reliability of user authentication process. However, several different features may be extracted from the typing data, making it difficult to define the feature vector. This problem is even more critical in a novelty detection scenario, when data from the negative class is not available. Based on a keystroke dynamics review, this work evaluated the most used features and evaluated which ones are more significant to differentiate a user from another using keystroke dynamics. In order to perform this evaluation, the authors tested the impact on two benchmark databases applying bio-inspired algorithms based on neural networks and artificial immune systems.","PeriodicalId":369881,"journal":{"name":"Int. J. Nat. Comput. Res.","volume":"66 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114062684","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
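Two of the features most commonly reported in the keystroke dynamics literature, dwell time and flight time, can be extracted as follows (a minimal sketch; the event encoding is our assumption):

```python
def keystroke_features(events):
    """Extract timing features from a typing sample.

    events: list of (key, press_ms, release_ms) tuples, in typing order.
    Dwell time  = how long each key is held down.
    Flight time = gap between releasing one key and pressing the next.
    """
    dwell = [release - press for _, press, release in events]
    flight = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
    return dwell, flight
```

A per-user feature vector is then built from statistics of these sequences, which is exactly the design decision the paper evaluates.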
To reduce mutation testing costs, different strategies have been proposed to find a set of essential operators that generates a reduced number of mutants without decreasing the mutation score. However, operator selection is influenced by other factors, such as the number of test data, execution time, and the number of revealed faults. This is in fact a multiobjective problem, for which several different good solutions exist. To deal properly with this problem, a selection strategy based on multiobjective algorithms was previously proposed and investigated for unit testing. This work explores the use of that strategy in the integration testing phase. Three multiobjective algorithms are used and evaluated with real programs: one based on tabu search (MTabu), one based on a Genetic Algorithm (NSGA-II) and a third based on Ant Colony Optimization (PACO). The results are compared with traditional strategies and contrasted with the essential operators obtained at the unit testing level.
{"title":"Reducing Interface Mutation Costs with Multiobjective Optimization Algorithms","authors":"Tiago Nobre, S. Vergilio, A. Pozo","doi":"10.4018/jncr.2012070102","DOIUrl":"https://doi.org/10.4018/jncr.2012070102","url":null,"abstract":"To reduce mutation test costs, different strategies were proposed to find a set of essential operators that generates a reduced number of mutants without decreasing the mutation score. However, the operator selection is influenced by other factors, such as: number of test data, execution time, number of revealed faults, etc. In fact this is a multiobjective problem. For that, different good solutions exist. To properly deal with this problem, a selection strategy based on multiobjective algorithms was proposed and investigated for unit testing. This work explores the use of such strategy in the integration testing phase. Three multiobjective algorithms are used and evaluated with real programs: one algorithm based on tabu search (MTabu), one based on Genetic Algorithm (NSGA-II) and the third one based on Ant Colony Optimization (PACO). The results are compared with traditional strategies and contrasted with essential operators obtained in the unit testing level.","PeriodicalId":369881,"journal":{"name":"Int. J. Nat. Comput. Res.","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129851178","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
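The multiobjective selection step can be illustrated with a Pareto filter over per-operator statistics (operator names and values below are invented for illustration; every criterion is minimized, so mutation score is stored as a loss):

```python
def dominates(a, b):
    """a dominates b: no worse on every criterion, strictly better on one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_filter(operators):
    """Keep only nondominated operators.

    operators: {name: (num_mutants, 1 - mutation_score, exec_time)}.
    """
    return sorted(name for name, v in operators.items()
                  if not any(dominates(w, v)
                             for other, w in operators.items() if other != name))

ops = {
    "AOR": (120, 0.02, 3.1),  # many mutants, but very high score
    "ROR": (80, 0.05, 2.0),   # cheap, decent score
    "UOI": (200, 0.05, 4.5),  # worse than ROR on every criterion
}
```

Here `UOI` is dominated by `ROR` and is discarded, while `AOR` and `ROR` represent different trade-offs and both survive.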
Extreme Learning Machine (ELM) is a learning method for training single-hidden layer feedforward neural networks (SLFNs). The ELM approach increases learning speed by randomly generating the input weights and biases of the hidden nodes rather than tuning the network parameters iteratively, making it much faster than traditional gradient-based methods. However, ELM's random generation may lead to non-optimal performance. Particle Swarm Optimization (PSO) is a stochastic search through an n-dimensional problem space aiming at the minimization (or maximization) of an objective function. In this paper, two new hybrid approaches based on PSO are proposed to select the input weights and hidden biases for ELM. Experimental results show that the proposed methods achieve better generalization performance than traditional ELM on real benchmark datasets.
{"title":"Improved Evolutionary Extreme Learning Machines Based on Particle Swarm Optimization and Clustering Approaches","authors":"L. Pacífico, Teresa B Ludermir","doi":"10.4018/JNCR.2012070101","DOIUrl":"https://doi.org/10.4018/JNCR.2012070101","url":null,"abstract":"Extreme Learning Machine (ELM) is a new learning method for single-hidden layer feedforward neural network (SLFN) training. ELM approach increases the learning speed by means of randomly generating input weights and biases for hidden nodes rather than tuning network parameters, making this approach much faster than traditional gradient-based ones. However, ELM random generation may lead to non-optimal performance. Particle Swarm Optimization (PSO) technique was introduced as a stochastic search through an n-dimensional problem space aiming the minimization (or the maximization) of the objective function of the problem. In this paper, two new hybrid approaches are proposed based on PSO to select input weights and hidden biases for ELM. Experimental results show that the proposed methods are able to achieve better generalization performance than traditional ELM in real benchmark datasets.","PeriodicalId":369881,"journal":{"name":"Int. J. Nat. Comput. Res.","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121872125","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
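A minimal PSO loop of the kind used to tune ELM parameters can be sketched as follows (hyperparameters are illustrative; in the paper's setting the position vector would encode the input weights and hidden biases and the objective would be a validation error, while here the sphere function serves as a stand-in):

```python
import random

def pso(objective, dims, n_particles=20, iters=200,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Canonical PSO minimizing `objective` over R^dims."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dims)] for _ in range(n_particles)]
    vel = [[0.0] * dims for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # personal bests
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]     # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dims):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

best, best_val = pso(lambda x: sum(v * v for v in x), dims=3)
```

In the hybrid schemes, each converged position would be decoded into an ELM whose output weights are then computed analytically, so PSO only searches the randomly-initialized part of the network.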
Adriano Soares Koshiyama, Tatiana Escovedo, D. Dias, M. Vellasco, M. Pacheco
Combining forecasts is a common practice in time series analysis. The technique involves weighting the estimate of each model so as to minimize the error between the resulting output and the target. This work presents a novel methodology for combining forecasts using genetic programming, a metaheuristic that searches for a nonlinear combination and a selection of forecasters simultaneously. To evaluate the method, the authors performed three different tests against the linear forecast combination, comparing both in terms of RMSE and MAPE. The statistical analysis shows that the genetic programming combination outperforms the linear combination in two of the three tests.
{"title":"Combining Forecasts: A Genetic Programming Approach","authors":"Adriano Soares Koshiyama, Tatiana Escovedo, D. Dias, M. Vellasco, M. Pacheco","doi":"10.4018/jncr.2012070103","DOIUrl":"https://doi.org/10.4018/jncr.2012070103","url":null,"abstract":"Combining forecasts is a common practice in time series analysis. This technique involves weighing each estimate of different models in order to minimize the error between the resulting output and the target. This work presents a novel methodology, aiming to combine forecasts using genetic programming, a metaheuristic that searches for a nonlinear combination and selection of forecasters simultaneously. To present the method, the authors made three different tests comparing with the linear forecasting combination, evaluating both in terms of RMSE and MAPE. The statistical analysis shows that the genetic programming combination outperforms the linear combination in two of the three tests evaluated.","PeriodicalId":369881,"journal":{"name":"Int. J. Nat. Comput. Res.","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121001234","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
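The linear combination baseline that the GP-evolved nonlinear combination is compared against, together with the two error measures used, can be written as follows (a sketch; how the weights are fitted is left to the combination procedure):

```python
def linear_combination(forecasts, weights):
    """Weighted sum of per-model forecast series, point by point."""
    return [sum(w * f[t] for w, f in zip(weights, forecasts))
            for t in range(len(forecasts[0]))]

def rmse(pred, target):
    """Root mean squared error."""
    return (sum((p - t) ** 2 for p, t in zip(pred, target)) / len(target)) ** 0.5

def mape(pred, target):
    """Mean absolute percentage error (targets must be nonzero)."""
    return sum(abs((t - p) / t) for p, t in zip(pred, target)) / len(target) * 100
```

The GP approach replaces the fixed weighted sum with an evolved expression tree over the individual forecasts, which can both weight and drop forecasters.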
Classification is an important task in time series mining. It is often reported in the literature that nearest neighbor classifiers perform quite well in time series classification, especially if the distance measure properly deals with the invariances required by the domain. Complexity invariance was recently introduced, aiming to compensate for a bias towards classes with simple time series representatives in nearest neighbor classification. To this end, a complexity correcting factor based on the ratio of the more complex to the simpler series was proposed. The original formulation uses the length of the rectified time series to estimate its complexity. In this paper the authors investigate an alternative complexity estimate based on fractal dimension. Results show that this alternative is very competitive with the original proposal and has broader applicability, as it depends neither on the number of points in the series nor on a previous normalization. Furthermore, these results also verify, using a different formulation, the validity of complexity invariance in time series classification.
{"title":"A Complexity-Invariant Measure Based on Fractal Dimension for Time Series Classification","authors":"R. Prati, Gustavo E. A. P. A. Batista","doi":"10.4018/jncr.2012070104","DOIUrl":"https://doi.org/10.4018/jncr.2012070104","url":null,"abstract":"Classification is an important task in time series mining. It is often reported in the literature that nearest neighbor classifiers perform quite well in time series classification, especially if the distance measure properly deals with invariances required by the domain. Complexity invariance was recently introduced, aiming to compensate from a bias towards classes with simple time series representatives in nearest neighbor classification. To this end, a complexity correcting factor based on the ratio of the more complex to the simpler series was proposed. The original formulation uses the length of the rectified time series to estimate its complexity. In this paper the authors investigate an alternative complexity estimate, based on fractal dimension. Results show that this alternative is very competitive with the original proposal, and has a broader application as it does neither depend on the number of points in the series nor on a previous normalization. Furthermore, these results also verify, using a different formulation, the validity of complexity invariance in time series classification.","PeriodicalId":369881,"journal":{"name":"Int. J. Nat. Comput. Res.","volume":"75 5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134101415","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
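The complexity-invariant distance being extended can be sketched as follows (the original formulation, with the rectified-length complexity estimate; the paper's contribution would swap a fractal-dimension estimate into `complexity`):

```python
def complexity(series):
    """Original estimate: root of summed squared one-step differences."""
    return sum((series[i + 1] - series[i]) ** 2
               for i in range(len(series) - 1)) ** 0.5

def cid(a, b):
    """Euclidean distance scaled by the ratio of the two complexities."""
    ed = sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    ca, cb = complexity(a), complexity(b)
    return ed * (max(ca, cb) / min(ca, cb))

flat = [0.0, 0.1, 0.0, 0.1]    # low-complexity series
spiky = [0.0, 1.0, 0.0, 1.0]   # high-complexity series
```

The correcting factor inflates the distance between series of unequal complexity, countering the nearest-neighbor bias toward simple class representatives.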