"Central Dogma of Molecular Biology - New Paradigm in Evolutionary Computation" by C. Rotar. DOI: 10.1109/SYNASC.2014.46

The aim of this study is to develop a new evolutionary computation paradigm grounded in molecular biology. Standard genetic algorithms are heuristics inspired by a simplified model of natural evolution and genetics. Recent discoveries in molecular biology concerning the conventional central dogma make it necessary to update genetic algorithms, even though they have been applied successfully to a wide range of complex tasks. In this direction, research in Evolutionary Computation calls for a reconsideration of the concepts and theories underlying these popular optimization techniques. Since the emergence of new features is important in evolution, the DNA code itself must be able to progress. Evolutionary Computation, which is based on mutation and natural selection, can be reconsidered in terms of protein synthesis and reverse transcription. From a computational perspective, a biological phenomenon can be interpreted in various ways in order to obtain reliable computational techniques.
"Branch Differences and Lambert W" by D. J. Jeffrey, J. Jankowski. DOI: 10.1109/SYNASC.2014.16

The Lambert W function possesses branches labelled by an index k. The value of W therefore depends on both its argument z and its branch index. Given two branches, labelled n and m, the branch difference is the difference between the two branches when both are evaluated at the same argument z. It is shown that elementary inverse functions have trivial branch differences, whereas Lambert W has nontrivial ones: the inverse sine function has real-valued branch differences for real arguments, and the natural logarithm has purely imaginary branch differences, but the Lambert W function has both real-valued and complex-valued differences. Applications and representations of the branch differences of W are given.
"Investigation of Alternative Evolutionary Prototype Generation in Medical Classification" by C. Stoean, R. Stoean, Adrian Sandita. DOI: 10.1109/SYNASC.2014.77

The response of a computational system supporting medical diagnosis should be simultaneously accurate, comprehensible, flexible and prompt in order to qualify as a reliable second opinion. With these characteristics in mind, this paper examines the behaviour of two evolutionary algorithms that discover prototypes for each possible diagnosis outcome. The discovered centroids provide understandable thresholds of differentiation among the decision classes. The goal of the paper is to inspect alternative architectures for prototype representation that reach the centroids with the desired accuracy and in acceptable time.
"Simulation-Extrapolation Gaussian Processes for Input Noise Modeling" by B. Bócsi, Hunor Jakab, L. Csató. DOI: 10.1109/SYNASC.2014.33

Input noise is common whenever data comes from unreliable sensors or previous outputs are used as current inputs. Nevertheless, most regression algorithms do not model input noise and thus introduce bias into the regression. We present a method that corrects this bias by repeated regression estimations. In simulation extrapolation, we perturb the inputs with additional input noise and, by observing the effect of this addition on the result, estimate what the prediction would be without the input noise. We extend the approach to a non-parametric probabilistic regression method, inference with Gaussian processes. Experiments on both synthetic data and a robotics task, learning the transition dynamics of a dynamical system, show significant improvements in prediction accuracy.
"Using Cylindrical Algebraic Decomposition and Local Fourier Analysis to Study Numerical Methods: Two Examples" by Stefan Takacs. DOI: 10.1109/SYNASC.2014.14

Local Fourier analysis is a strong and well-established tool for analyzing the convergence of numerical methods for partial differential equations. Its key idea is to represent the occurring functions in terms of a Fourier series and to use this representation to study certain properties of the numerical method at hand, such as the convergence rate or an error estimate. In applying a local Fourier analysis, it is typically necessary to determine the supremum of a more or less complicated term with respect to all frequencies and, potentially, other variables. The problem of computing such a supremum can be rewritten as a quantifier elimination problem, which can be solved with cylindrical algebraic decomposition, a well-known tool from symbolic computation. The combination of local Fourier analysis and cylindrical algebraic decomposition yields machinery that can be applied to a wide class of problems. In the present paper, we discuss two examples. The first is computing the convergence rate of a multigrid method. The second shows that the machinery can also be used for something rather different: comparing approximation error estimates for different kinds of discretizations.
"Model-Driven Design of Cloud Applications with Quality-of-Service Guarantees: The MODAClouds Approach, MICAS Tutorial" by M. A. D. Silva, D. Ardagna, Nicolas Ferry, Juan F. Pérez. DOI: 10.1109/SYNASC.2014.8

Competition between cloud providers has led to an impressive set of cloud solutions offered to consumers. The ability to properly design and deploy multi-cloud applications (i.e., applications deployed on multiple clouds) makes it possible to exploit the peculiarities of each cloud solution and hence to optimize the performance and cost of the applications. However, this is hindered by the large heterogeneity of existing cloud offerings. In this work, we present the model-driven methodology and tools developed within the MODAClouds European project to support the design of multi-cloud applications. In particular, the proposed framework promotes a model-driven approach that helps reduce vendor lock-in, supports multi-cloud deployments, and provides solutions for estimating and optimizing the performance of multi-cloud applications at design time.
"Proof Generation from Delta-Decisions" by Sicun Gao, Soonho Kong, E. Clarke. DOI: 10.1109/SYNASC.2014.29

We show how to generate and validate logical proofs of unsatisfiability from delta-complete decision procedures that rely on error-prone numerical algorithms. Solving this problem is important for ensuring the correctness of such decision procedures; at the same time, it offers a new approach to automated theorem proving over the real numbers. We design a first-order calculus and transform the computational steps of constraint solving into logical proofs, which are then validated using proof-checking algorithms. As an application, we demonstrate how proofs generated by our solver can establish many nonlinear lemmas in the formal proof of the Kepler Conjecture.
"A Heuristic-Based Approach for Reducing the Power Consumption of Real-Time Embedded Systems" by V. Radulescu, S. Andrei, A. Cheng. DOI: 10.1109/SYNASC.2014.31

The current trend of designing power-efficient devices concerns not only Personal Computer (PC)-like systems but also real-time embedded systems. While much research has been done on minimizing the total energy of a system, adapting scheduling techniques for lower energy consumption has been less popular. Nevertheless, this can prove highly effective, as the Central Processing Unit (CPU) is usually responsible for the largest part of a system's energy consumption. This paper presents an approach to reducing the energy consumption of a real-time system. Starting from a given feasible schedule for a non-preemptive, single-instance, n-task set, power savings are achieved by reducing the CPU frequency whenever possible, without breaking task deadlines. The goal can be stated analytically as a multivariate optimization problem. Due to the complexity of the resulting problem, heuristic techniques offer good chances of finding the desired optimum. To the best of our knowledge, these methods have not previously been applied to the power-aware scheduling problem.
"High-Probability Mutation in Basic Genetic Algorithms" by Nicolae-Eugen Croitoru. DOI: 10.1109/SYNASC.2014.48

Customarily, Genetic Algorithms use low-probability mutation operators. In an effort to increase their performance, this paper presents a study of Genetic Algorithms with very high mutation rates (≈ 95%). A comparison is drawn with the low-probability (≈ 1%) mutation GA on two large classes of problems: numerical functions (well-known test functions such as Rosenbrock's and the Six-Hump Camel Back) and bit-block functions (e.g., Royal Road and trap functions). A large number of experimental runs combined with parameter variation provides statistical significance for the comparison. High-probability mutation is found to perform well on most tested functions, outperforming low-probability mutation on some of them. These results are then explained in terms of dynamic dual encoding and selection-pressure reduction, and placed in the context of the No Free Lunch theorem.
"Pedestrian Recognition by Using a Kernel-Based Multi-modality Approach" by A. Sirbu, A. Rogozan, L. Dioşan, A. Bensrhair. DOI: 10.1109/SYNASC.2014.42

Despite many years of research, pedestrian recognition remains a difficult but very important task. We present a multi-modality approach that combines features extracted from three types of images: intensity, depth and flow. For the feature extraction phase we use Kernel Descriptors, optimised independently on each image type, and for the learning phase we use Support Vector Machines. Numerical experiments performed on a benchmark dataset of labelled pedestrian and non-pedestrian images captured in outdoor urban environments indicate that the model built by combining features extracted with Kernel Descriptors from multi-modality images performs better than one using single-modality images.