Genetic Operators in Evolutionary Music Composition
Pub Date: 2018-09-01 | DOI: 10.1109/SYNASC.2018.00047
Csaba Sulyok
Genetic operators represent the alterations applied to entities within an evolutionary algorithm; they help create a new generation from an existing one, ensuring genetic diversity while preserving the emergent overall strengths of a population. In this paper, we investigate different approaches to the hyperparameter configuration of genetic operators within a linear genetic programming framework. We analyze the benefits of adaptively setting operator distributions and rates using hill climbing, drawing a comparison between the constant and adaptive methodologies. This research is part of our ongoing work on evolutionary music composition, where we cast the actions of a virtual composer as instructions on a Turing-complete virtual register machine. The created music is assessed by its statistical similarity to a given corpus. The fragility of our genotype under change demands fine-tuning of the genetic operators to aid convergence. Our results show that adaptive methods provide only a marginal improvement over constant settings, and only in select cases, such as globally altering operator hyperparameters without changing the distribution. In other cases, they prove detrimental to the final grades.
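A minimal sketch of the idea on a toy problem follows: operator rates are treated as hyperparameters and nudged by hill climbing whenever a perturbed setting yields a fitter trial generation. The bitstring fitness function, the operator set and the update rule are illustrative assumptions, not the paper's actual register-machine setup.

```python
# Sketch: adaptively tuning genetic-operator rates with hill climbing.
import random

TARGET = [1] * 32  # toy problem: maximize the number of 1-bits

def fitness(ind):
    return sum(b == t for b, t in zip(ind, TARGET))

def mutate(ind):
    child = ind[:]
    child[random.randrange(len(child))] ^= 1   # flip one bit
    return child

def crossover(a, b):
    cut = random.randrange(1, len(a))          # one-point crossover
    return a[:cut] + b[cut:]

def next_generation(pop, rates):
    """Breed a new population, choosing operators by the current rates."""
    elite = sorted(pop, key=fitness, reverse=True)[: len(pop) // 4]
    children = []
    while len(children) < len(pop):
        if random.random() < rates["crossover"]:
            children.append(crossover(random.choice(elite), random.choice(elite)))
        else:
            children.append(mutate(random.choice(elite)))
    return children

def hill_climb_rates(rates, pop, step=0.05):
    """One hill-climbing move: perturb the crossover rate and keep the
    perturbation if a trial generation scores better on average."""
    trial = dict(rates)
    trial["crossover"] = min(1.0, max(0.0, rates["crossover"] + random.choice([-step, step])))
    base = sum(map(fitness, next_generation(pop, rates))) / len(pop)
    cand = sum(map(fitness, next_generation(pop, trial))) / len(pop)
    return trial if cand > base else rates

pop = [[random.randint(0, 1) for _ in range(32)] for _ in range(40)]
rates = {"crossover": 0.5}
for _ in range(50):
    rates = hill_climb_rates(rates, pop)   # adaptive setting
    pop = next_generation(pop, rates)
print(max(map(fitness, pop)), rates)
```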
{"title":"Genetic Operators in Evolutionary Music Composition","authors":"Csaba Sulyok","doi":"10.1109/SYNASC.2018.00047","DOIUrl":"https://doi.org/10.1109/SYNASC.2018.00047","url":null,"abstract":"Genetic operators represent the alterations applied to entities within an evolutionary algorithm; they help create a new generation from an existing one, ensuring genetic diversity while also preserving the emergent overall strengths of a population. In this paper, we investigate different approaches to hyperparameter configuration of genetic operators within a linear genetic programming framework. We analyze the benefits of adaptively setting operator distributions and rates using hill climbing. A comparison is drawn between the constant and adaptive methodologies. This research is part of our ongoing work on evolutionary music composition, where we cast the actions of a virtual composer as instructions on a Turing-complete virtual register machine. The created music is assessed by statistical similarity to a given corpus. The frailty to change of our genotype dictates fine-tuning of the genetic operators to help convergence. Our results show that adaptive methods only provide a marginal improvement over constant settings and only in select cases, such as globally altering operator hyperparameters without changing the distribution. In other cases, they prove detrimental to the final grades.","PeriodicalId":273805,"journal":{"name":"2018 20th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (SYNASC)","volume":"98 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128986741","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Computing the Lowest-Order Element of a Multivariate Elimination Ideal by Using Remainder Sequences
Pub Date: 2018-09-01 | DOI: 10.1109/SYNASC.2018.00019
Tateaki Sasaki, D. Inaba
Given a set of m+1 multivariate polynomials, with m > 1, in main variables x_1,...,x_m and sub-variables u_1,...,u_n, we can usually eliminate x_1,...,x_m and obtain a polynomial in u_1,...,u_n only. There are basically two methods to perform this elimination: the resultant method and the Groebner basis method. The Groebner basis method gives the lowest-order element Ŝ(u) of the elimination ideal, where (u) = (u_1,...,u_n), but it is often very slow. The resultant method is quite fast, but the resulting polynomial R(u) often contains many more terms than Ŝ(u). In this paper, we present a simple method of computing Ŝ(u) by the repeated computation of PRSs (polynomial remainder sequences). The idea is to compute PRSs by changing their arguments systematically, obtaining polynomials R_1(u),...,R_k(u), k > 1, in the sub-variables only. Let S̄(u) be the GCD of R_1,...,R_k. Our main theorem asserts that S̄(u) is a multiple of Ŝ(u): S̄(u) = ẽ(u)Ŝ(u). We call ẽ(u) the extraneous factor; it often consists of a small number of terms. We present three conditions and one sub-method to remove ẽ(u) from S̄(u).
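The following sympy sketch illustrates the core idea on a toy system (our own example, not the authors' algorithm): eliminating the main variables along two different paths yields two multiples R_1, R_2 of the lowest-order element, and their GCD discards most of the order-dependent extraneous factor. sympy's built-in resultant stands in here for the paper's PRS computation.

```python
# Sketch: eliminate main variables along two paths, then take a GCD.
import sympy as sp

x1, x2, u = sp.symbols('x1 x2 u')
F = [x1**2 + x2 - u,      # toy system in main variables x1, x2
     x1 + x2**2 - 1,      # and one sub-variable u
     x1*x2 - u]

def eliminate(f, g, h, first, second):
    """Eliminate `first` from (f,g) and (f,h), then `second` from the two
    resultants; returns a polynomial in the sub-variable only."""
    r1 = sp.resultant(f, g, first)
    r2 = sp.resultant(f, h, first)
    return sp.expand(sp.resultant(r1, r2, second))

# Two elimination orders give two multiples R1, R2 of the ideal's
# lowest-order element; their GCD strips order-dependent junk factors.
R1 = eliminate(F[0], F[1], F[2], x1, x2)
R2 = eliminate(F[1], F[0], F[2], x2, x1)
print(sp.factor(sp.gcd(R1, R2)))
```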
{"title":"Computing the Lowest-Order Element of a Multivariate Elimination Ideal by Using Remainder Sequences","authors":"Tateaki Sasaki, D. Inaba","doi":"10.1109/SYNASC.2018.00019","DOIUrl":"https://doi.org/10.1109/SYNASC.2018.00019","url":null,"abstract":"Given a set of m+1 multivariate polynomials, with m > 1, in main variables x_1,...,x_m and sub-variables u_1,...,u_n, we can usually eliminate x_1,...,x_m and obtain a polynomial in u_1,...,u_n only. There are basically two methods to perform this elimination. One is the so-called resultant method and the other is the Groebner basis method. The Groebner basis method gives the lowest-order element haS(u) of the elimination ideal, where (u) = (u_1,...,u_n), but it is often very slow. The resultant method is quite fast, but the resulting polynomial R(u) often contains many more terms than haS(u). In this paper, we present a simple method of computing haS(u) by the repeated computation of PRSs (polynomial remainder sequences). The idea is to compute PRSs by changing their arguments systematically and obtain polynomials R_1(u),...,R_k(u), k > 1, in the sub-variables only. Let baS(u) be the GCD of R_1,...,R_k. Then, our main theorem asserts that baS(u) is a multiple of haS(u): baS(u) = tie(u)haS(u). We call tie(u) the extraneous factor and it often consists of a small number of terms. We present three conditions and one sub-method to remove tie(u) from baS(u).","PeriodicalId":273805,"journal":{"name":"2018 20th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (SYNASC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130553611","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Polylog-Time Hierarchy Captured by Restricted Second-Order Logic
Pub Date: 2018-06-19 | DOI: 10.1109/SYNASC.2018.00032
Flavio Ferrarotti, Senén González, K. Schewe, José Maria Turull Torres
Let SO^plog denote the restriction of second-order logic in which second-order quantification ranges over relations of size at most poly-logarithmic in the size of the structure. In this article we investigate which Turing machine complexity class is captured by Boolean queries over ordered relational structures expressible in this logic. To this end, we define a hierarchy of fragments Σ^plog_m and Π^plog_m given by formulae with alternating blocks of existential and universal second-order quantifiers in quantifier-prenex normal form. We first show that the existential fragment Σ^plog_1 captures npolylog, i.e. the class of Boolean queries that can be accepted by a non-deterministic Turing machine with random access to the input in time O((log n)^k) for some k ≥ 0. Using alternating Turing machines with random access to the input allows us to also characterize the fragments Σ^plog_m and Π^plog_m as those Boolean queries with at most m alternating blocks of second-order quantifiers that are accepted by an alternating Turing machine. Consequently, SO^plog captures the whole poly-logarithmic time hierarchy. We demonstrate the relevance of this logic and complexity class through several problems in database theory.
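As a concrete, if informal, illustration of the base of this hierarchy (our own toy example, not from the paper): a Boolean query answered by binary search touches only O(log n) positions of an ordered input, which is the flavour of random-access polylog-time computation the machines above perform.

```python
# Toy query "does the ordered relation contain value v?", answered with
# O(log n) random accesses to the input.
def contains(sorted_relation, v):
    lo, hi = 0, len(sorted_relation) - 1
    while lo <= hi:
        mid = (lo + hi) // 2          # one random access per iteration
        if sorted_relation[mid] == v:
            return True
        if sorted_relation[mid] < v:
            lo = mid + 1
        else:
            hi = mid - 1
    return False

print(contains([2, 3, 5, 7, 11, 13], 7))   # True
print(contains([2, 3, 5, 7, 11, 13], 8))   # False
```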
{"title":"The Polylog-Time Hierarchy Captured by Restricted Second-Order Logic","authors":"Flavio Ferrarotti, Senén González, K. Schewe, José Maria Turull Torres","doi":"10.1109/SYNASC.2018.00032","DOIUrl":"https://doi.org/10.1109/SYNASC.2018.00032","url":null,"abstract":"Let SO^plog denote the restriction of second-order logic, where second-order quantification ranges over relations of size at most poly-logarithmic in the size of the structure. In this article we investigate the problem, which Turing machine complexity class is captured by Boolean queries over ordered relational structures that can be expressed in this logic. For this we define a hierarchy of fragments Σ^plog_m (and Σ^plog_m) defined by formulae with alternating blocks of existential and universal second-order quantifiers in quantifier-prenex normal form. We first show that the existential fragment Σ^plog_1 captures npolylog, i.e. the class of Boolean queries that can be accepted by a non-deterministic Turing machine with random access to the input in time O((log n)^k) for some k ≥ 0. Using alternating Turing machines with random access input allows us to characterize also the fragments Σ^plog_m (and Σ^plog_m) as those Boolean queries with at most m alternating blocks of second-order quantifiers that are accepted by an alternating Turing machine. Consequently, SO^plog captures the whole poly-logarithmic time hierarchy. We demonstrate the relevance of this logic and complexity class by several problems in database theory.","PeriodicalId":273805,"journal":{"name":"2018 20th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (SYNASC)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127899748","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Extremal Graphs with Respect to the Modified First Zagreb Connection Index
Pub Date: 2018-06-18 | DOI: 10.1109/SYNASC.2018.00033
G. Ducoffe, Ruxandra Marinescu-Ghemeci, Camelia Obreja, Alexandru Popa, Rozica-Maria Tache
Topological indices (TIs) play an important role in studying the properties of molecules. A central problem in mathematical chemistry is finding extremal graphs with respect to a given TI. In this paper, we determine extremal graphs with respect to the modified first Zagreb connection index for trees in general, for trees with a given number of pendant vertices, for unicyclic graphs with or without a fixed girth, and for connected graphs, using methods of greater generality than the transformation techniques usually employed in this context. These graphs are relevant for chemical studies.
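A short sketch of the index itself may help; it assumes the usual definition ZC1*(G) = Σ_v deg(v)·τ(v), where the connection number τ(v) counts the vertices at distance exactly two from v, and uses networkx for illustration.

```python
# Sketch: the modified first Zagreb connection index under the assumed
# definition ZC1*(G) = sum over vertices of deg(v) * tau(v).
import networkx as nx

def modified_first_zagreb_connection_index(G):
    total = 0
    for v in G:
        dist = nx.single_source_shortest_path_length(G, v, cutoff=2)
        tau = sum(1 for d in dist.values() if d == 2)  # connection number of v
        total += G.degree(v) * tau
    return total

# Comparing a star and a path on six vertices hints at the kind of
# extremal questions over trees studied in the paper.
for T in (nx.star_graph(5), nx.path_graph(6)):
    print(T, modified_first_zagreb_connection_index(T))
```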
{"title":"Extremal Graphs with Respect to the Modified First Zagreb Connection Index","authors":"G. Ducoffe, Ruxandra Marinescu-Ghemeci, Camelia Obreja, Alexandru Popa, Rozica-Maria Tache","doi":"10.1109/SYNASC.2018.00033","DOIUrl":"https://doi.org/10.1109/SYNASC.2018.00033","url":null,"abstract":"Topological indices (TIs) play an important role in studying properties of molecules. A main problem in mathematical chemistry is finding extreme graphs with respect to a given TI. In this paper extremal graphs with respect to the modified first Zagreb connection index for trees in general and for trees with given number of pendants, for unicyclic graphs with or without a fixed girth and connected graphs are determined, using methods with higher degree of generality with respect to the transformation techniques usually used in such context. These graphs are relevant for chemical studies.","PeriodicalId":273805,"journal":{"name":"2018 20th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (SYNASC)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130348254","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cheap Non-Standard Analysis and Computability: Some Applications
Pub Date: 2018-04-25 | DOI: 10.1109/SYNASC.2018.00034
Olivier Bournez, S. Ouazzani
Non-standard analysis is an area of mathematics dealing with notions of infinitesimal and infinitely large numbers, in which many statements from classical analysis can be expressed very naturally. Cheap non-standard analysis, introduced by Terence Tao in 2012, is based on the idea that considering that a property holds eventually is sufficient to capture the essence of many such statements. Cheap non-standard analysis provides constructivity, but at some (acceptable) price. Computable analysis is a very natural tool for discussing computation over the reals and, more generally, constructivity in mathematics. In a recent article, we considered computability in cheap non-standard analysis and proved that many concepts from computable analysis, as well as several concepts from computability theory, can be presented very elegantly in this alternative framework. In the current article we discuss several applications of this framework, providing alternative proofs of several statements from computable analysis: the intermediate value theorem, the computability of zeros and of maximum points, and a theorem of Rice.
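As one concrete instance of the constructive content discussed here, the textbook bisection argument below computes a zero of a sign-changing continuous function to any requested precision. This is our own illustration: in exact computable analysis the sign test needs refinement, since the sign of a computable real is not decidable in general, which is precisely the kind of subtlety the paper's framework addresses.

```python
# Our illustration of the constructive intermediate value theorem:
# for f continuous with f(a) < 0 < f(b), bisection approximates a zero.
def ivt_zero(f, a, b, eps=1e-12):
    """Halve the sign-changing interval [a, b] until it is shorter than eps."""
    while b - a > eps:
        mid = (a + b) / 2
        if f(mid) < 0:
            a = mid      # a zero lies in [mid, b]
        else:
            b = mid      # a zero lies in [a, mid]
    return (a + b) / 2

# sqrt(2) recovered as the zero of x^2 - 2 on [1, 2]
print(ivt_zero(lambda x: x * x - 2, 1.0, 2.0))
```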
{"title":"Cheap Non-Standard Analysis and Computability: Some Applications","authors":"Olivier Bournez, S. Ouazzani","doi":"10.1109/SYNASC.2018.00034","DOIUrl":"https://doi.org/10.1109/SYNASC.2018.00034","url":null,"abstract":"Non standard Analysis is an area of Mathematics dealing with notions of infinitesimal and infinitely large numbers, in which many statements from classical Analysis can be expressed very naturally. Cheap non-standard analysis introduced by Terence Tao in 2012 is based on the idea that considering that a property holds eventually is sufficient to give the essence of many of its statements. Cheap non-standard analysis provides constructivity but at some (acceptable) price. Computable Analysis is a very natural tool for discussing computations over the reals, and more general constructivity in Mathematics. In a recent article, we considered computability in cheap non-standard analysis. We proved that many concepts from computable analysis as well as several concepts from computability can be very elegantly and alternatively presented in this framework. We discuss in the current article several applications of this framework: We provide alternative proofs based on this approach of several statements from computable analysis. This includes intermediate value theorem, and computability of zeros, of maximum points and of a theorem from Rice.","PeriodicalId":273805,"journal":{"name":"2018 20th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (SYNASC)","volume":"13 5","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120913999","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimizing the Trade-Off between Single-Stage and Two-Stage Deep Object Detectors using Image Difficulty Prediction
Pub Date: 2018-03-23 | DOI: 10.1109/SYNASC.2018.00041
Petru Soviany, Radu Tudor Ionescu
There are two main types of state-of-the-art object detectors. On one hand, we have two-stage detectors, such as Faster R-CNN (Region-based Convolutional Neural Networks) or Mask R-CNN, that (i) use a Region Proposal Network to generate regions of interest in the first stage and (ii) send the region proposals down the pipeline for object classification and bounding-box regression. Such models reach the highest accuracy rates, but are typically slower. On the other hand, we have single-stage detectors, such as YOLO (You Only Look Once) and SSD (Single Shot MultiBox Detector), that treat object detection as a simple regression problem, taking an input image and learning the class probabilities and bounding-box coordinates directly. Such models reach lower accuracy rates, but are much faster than two-stage object detectors. In this paper, we propose to use an image difficulty predictor to achieve an optimal trade-off between accuracy and speed in object detection. The image difficulty predictor is applied to the test images to split them into easy versus hard images. Once separated, the easy images are sent to the faster single-stage detector, while the hard images are sent to the more accurate two-stage detector. Our experiments on PASCAL VOC 2007 show that using image difficulty compares favorably to a random split of the images. Our method is also flexible, in that it allows choosing a desired threshold for splitting the images into easy versus hard.
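A minimal sketch of the proposed routing follows; difficulty_score, ssd_detect and faster_rcnn_detect are hypothetical stand-ins for the trained difficulty predictor and the two detectors used in the paper.

```python
# Sketch: route easy images to a fast single-stage detector and hard
# images to an accurate two-stage detector, based on predicted difficulty.
import random

def difficulty_score(img):        # hypothetical predictor: replace with a
    return random.random()        # trained regressor over image features

def ssd_detect(img):              # stand-in for a fast single-stage detector
    return f"single-stage boxes for {img}"

def faster_rcnn_detect(img):      # stand-in for an accurate two-stage detector
    return f"two-stage boxes for {img}"

def detect_split(images, threshold=0.5):
    """The threshold sets the accuracy/speed trade-off: lower values send
    more images down the accurate (slow) path."""
    out = {}
    for img in images:
        easy = difficulty_score(img) < threshold
        out[img] = ssd_detect(img) if easy else faster_rcnn_detect(img)
    return out

print(detect_split(["img_001.jpg", "img_002.jpg"], threshold=0.5))
```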
{"title":"Optimizing the Trade-Off between Single-Stage and Two-Stage Deep Object Detectors using Image Difficulty Prediction","authors":"Petru Soviany, Radu Tudor Ionescu","doi":"10.1109/SYNASC.2018.00041","DOIUrl":"https://doi.org/10.1109/SYNASC.2018.00041","url":null,"abstract":"There are mainly two types of state-of-the-art object detectors. On one hand, we have two-stage detectors, such as Faster R-CNN (Region-based Convolutional Neural Networks) or Mask R-CNN, that (i) use a Region Proposal Network to generate regions of interests in the first stage and (ii) send the region proposals down the pipeline for object classification and bounding-box regression. Such models reach the highest accuracy rates, but are typically slower. On the other hand, we have single-stage detectors, such as YOLO (You Only Look Once) and SSD (Singe Shot MultiBox Detector), that treat object detection as a simple regression problem by taking an input image and learning the class probabilities and bounding box coordinates. Such models reach lower accuracy rates, but are much faster than two-stage object detectors. In this paper, we propose to use an image difficulty predictor to achieve an optimal trade-off between accuracy and speed in object detection. The image difficulty predictor is applied on the test images to split them into easy versus hard images. Once separated, the easy images are sent to the faster single-stage detector, while the hard images are sent to the more accurate two-stage detector. Our experiments on PASCAL VOC 2007 show that using image difficulty compares favorably to a random split of the images. Our method is flexible, in that it allows to choose a desired threshold for splitting the images into easy versus hard.","PeriodicalId":273805,"journal":{"name":"2018 20th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (SYNASC)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125484084","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}