R. Becker, Michael Sagraloff, Vikram Sharma, Juan Xu, C. Yap
Let F(z) be an arbitrary complex polynomial. We introduce the "local root clustering problem": to compute a set of natural ε-clusters of roots of F(z) in some box region B0 in the complex plane. This may be viewed as an extension of the classical root isolation problem. Our contribution is two-fold: we provide an efficient certified subdivision algorithm for this problem, and we provide a bit-complexity analysis based on the local geometry of the root clusters. Our computational model assumes that arbitrarily good approximations of the coefficients of F(z) are provided by means of an oracle, at the cost of reading the coefficients. Our algorithmic techniques come from a companion paper [3] and are based on the Pellet test and Graeffe and Newton iterations; they are independent of Schönhage's splitting circle method. Our algorithm is relatively simple and promises to be efficient in practice.
"Complexity Analysis of Root Clustering for a Complex Polynomial". Proceedings of the ACM on International Symposium on Symbolic and Algebraic Computation, 2016-07-20. DOI: 10.1145/2930889.2930939.
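The Pellet test used by the algorithm above admits a compact statement: if one coefficient term dominates the sum of all the others on a circle, the polynomial has exactly that many roots inside it. The following is a minimal sketch of the classical test at the origin (function name and example polynomial are illustrative; the paper's algorithm combines such tests with Taylor shifts, Graeffe iterations and subdivision):

```python
def pellet_holds(a, k, r=1.0):
    """Classical Pellet test at the origin: if |a_k| r^k strictly dominates
    the sum of all other |a_i| r^i, then sum(a_i z^i) has exactly k roots
    (counted with multiplicity) in the open disk |z| < r."""
    lhs = abs(a[k]) * r ** k
    rhs = sum(abs(c) * r ** i for i, c in enumerate(a) if i != k)
    return lhs > rhs

# F(z) = (z - 0.1)(z - 0.2)(z - 5) = z^3 - 5.3 z^2 + 1.52 z - 0.1:
# two of its roots lie in the unit disk, and the test certifies k = 2 there.
F = [-0.1, 1.52, -5.3, 1.0]
print([k for k in range(4) if pellet_holds(F, k, 1.0)])  # → [2]
```

When no k passes, the test is inconclusive and a subdivision algorithm splits the region further.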
In his paper at STOC 2015, Ankur Moitra gave an in-depth analysis of how oversampling improves the conditioning of the Prony systems arising in sparse interpolation and signal recovery from numeric data. Moitra assumes that the number of samples taken exceeds the actual sparsity of the polynomial/signal. We give an algorithm that can be used to compute the sparsity and estimate the minimal number of samples needed in numerical sparse interpolation. The early termination strategy of polynomial interpolation has been incorporated in the algorithm: by oversampling at a small number of extra sample points, we can diagnose that the sparsity has not yet been reached. Our algorithm still has to guess the number ζ of oversamples, and we show by example that if ζ is guessed too small, premature termination can occur. Our criterion is numerically more accurate than that of Kaltofen, Lee and Yang (Proc. SNC 2011, ACM [12]), but not as efficiently computable. For heuristic justification one has available the multivariate early termination theorem of Kaltofen and Lee (JSC vol. 36(3-4), 2003 [11]) for exact arithmetic, and the numeric Schwartz-Zippel lemma of Kaltofen, Yang and Zhi (Proc. SNC 2007, ACM [13]). A main contribution here is a modified proof of the theorem of Kaltofen and Lee that permits starting the sequence at the point (1,...,1), for scalar fields of characteristic ≠ 2 (in characteristic 2, counter-examples are given).
"Numerical Sparsity Determination and Early Termination", by Z. Hao, E. Kaltofen and L. Zhi. Proceedings of the ACM on International Symposium on Symbolic and Algebraic Computation, 2016-07-20. DOI: 10.1145/2930889.2930924.
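The link between sparsity and oversampling in the abstract above rests on a classical Prony/Hankel observation: for a t-sparse polynomial f, the Hankel matrices built from the values f(ω^0), f(ω^1), ... have rank exactly t once they are large enough. A small numerical sketch of this rank-based sparsity detection (not the paper's criterion; the choice of ω and the test polynomial are illustrative):

```python
import numpy as np

def hankel_rank_sparsity(samples, max_t):
    """Estimate sparsity as the numerical rank of a max_t x max_t Hankel
    matrix of the sample sequence a_i = f(w^i)."""
    H = np.array([[samples[i + j] for j in range(max_t)]
                  for i in range(max_t)])
    return np.linalg.matrix_rank(H, tol=1e-8)

# f(x) = 3 x^5 + 2 x^2 is 2-sparse; sample it at powers of w = 0.9 + 0.1j.
w = 0.9 + 0.1j
f = lambda x: 3 * x ** 5 + 2 * x ** 2
samples = [f(w ** i) for i in range(8)]
print(hankel_rank_sparsity(samples, 4))  # → 2
```

Oversampling beyond the detected rank, as in the early termination strategy, guards against the rank plateauing by accident rather than because the true sparsity was reached.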
We study discrete logarithms in the setting of group actions. Suppose that G is a group that acts on a set S. When r and s are elements of S, a solution g to rg = s can be thought of as a kind of logarithm. In this paper, we study the case where G = Sn, and develop analogs to the Shanks baby-step / giant-step procedure for ordinary discrete logarithms. Specifically, we compute two subsets A and B of Sn, such that every permutation in Sn can be written as a product ab of elements from A and B. Our deterministic procedure is close to optimal, in the sense that A and B can be computed efficiently and |A| and |B| are not too far from sqrt(n!) in size. We also analyze randomized "collision" algorithms for the same problem.
"Baby-Step Giant-Step Algorithms for the Symmetric Group", by E. Bach and Bryce Sandlund. Proceedings of the ACM on International Symposium on Symbolic and Algebraic Computation, 2016-07-20. DOI: 10.1145/2930889.2930930.
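For contrast with the symmetric-group analog studied above, the classical Shanks baby-step/giant-step procedure for discrete logarithms in a cyclic group can be sketched as follows (a standard textbook routine, not the paper's algorithm): precompute a table of "baby steps" g^j, then walk "giant steps" of size m until a collision with the table is found.

```python
from math import isqrt

def bsgs(g, h, p):
    """Solve g^x = h (mod p) for prime p by baby-step/giant-step."""
    m = isqrt(p) + 1
    baby = {pow(g, j, p): j for j in range(m)}   # baby steps: g^j -> j
    step = pow(g, -m, p)                         # giant step: g^(-m) mod p
    y = h
    for i in range(m):
        if y in baby:                            # h * g^(-im) == g^j
            return i * m + baby[y]
        y = (y * step) % p
    return None                                  # no solution

print(bsgs(5, 3, 23))  # 5^x ≡ 3 (mod 23) → 16
```

The paper's A/B decomposition of S_n plays the role of the baby-step and giant-step tables, with |A| and |B| near sqrt(n!) mirroring the sqrt(p) balance here.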
An algorithm for computing comprehensive Gröbner systems (CGS) in rings of linear partial differential operators is introduced, and its applications to b-functions are considered. The resulting algorithm, designed for broad use in computing comprehensive Gröbner systems, can compute all the roots of b-functions and the relevant holonomic D-modules. Furthermore, with our implementation, effective methods are illustrated for computing holonomic D-modules associated with hypersurface singularities. The proposed algorithm is shown to be highly versatile.
"Comprehensive Gröbner Systems in Rings of Differential Operators, Holonomic D-modules and B-functions", by Katsusuke Nabeshima, Katsuyoshi Ohara and S. Tajima. Proceedings of the ACM on International Symposium on Symbolic and Algebraic Computation, 2016-07-20. DOI: 10.1145/2930889.2930918.
An infinite ±1-sequence is called Apwenian if its Hankel determinant of order n, divided by 2^(n-1), is an odd number for every positive integer n. In 1998, Allouche, Peyrière, Wen and Wen discovered and proved that the Thue--Morse sequence is Apwenian, by direct determinant manipulations. Recently, Bugeaud and Han re-proved this result by means of an appropriate combinatorial method. By significantly improving that combinatorial method, we find several new Apwenian sequences with computer assistance. This research has applications in number theory, in particular to determining the irrationality exponents of some transcendental numbers.
"Computer Assisted Proof for Apwenian Sequences", by Hao Fu and Guo-Niu Han. Proceedings of the ACM on International Symposium on Symbolic and Algebraic Computation, 2016-07-20. DOI: 10.1145/2930889.2930891.
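The Apwenian property of the Thue--Morse sequence can be checked directly for small orders. A sketch using exact integer determinants (fraction-free Bareiss elimination), with the ±1 Thue--Morse sequence given by t_n = (-1)^s(n), where s(n) is the binary digit sum of n:

```python
def det_bareiss(M):
    """Exact determinant of an integer matrix via fraction-free
    Bareiss elimination (all intermediate divisions are exact)."""
    M = [row[:] for row in M]
    n, sign, prev = len(M), 1, 1
    for k in range(n - 1):
        if M[k][k] == 0:                      # find a nonzero pivot
            for i in range(k + 1, n):
                if M[i][k] != 0:
                    M[k], M[i] = M[i], M[k]
                    sign = -sign
                    break
            else:
                return 0
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                M[i][j] = (M[i][j] * M[k][k] - M[i][k] * M[k][j]) // prev
        prev = M[k][k]
    return sign * M[-1][-1]

t = lambda n: (-1) ** bin(n).count("1")       # Thue-Morse over {+1, -1}
for n in range(1, 8):
    H = [[t(i + j) for j in range(n)] for i in range(n)]
    d = det_bareiss(H)
    assert d % 2 ** (n - 1) == 0 and (d // 2 ** (n - 1)) % 2 == 1
print("Thue-Morse is Apwenian up to order 7")
```

This only verifies finitely many orders, of course; the point of the paper is a combinatorial proof for all n, found with computer assistance.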
We introduce and study a multiple gcd algorithm that is a natural extension of the usual Euclid algorithm, and coincides with it for two entries; it performs Euclidean divisions, between the largest entry and the second largest entry, and then re-orderings. This is the discrete version of a multidimensional continued fraction algorithm due to Brun. We perform the average-case analysis of this algorithm, and prove that the mean number of steps is linear with respect to the size of the entry. The method relies on dynamical analysis, and is based on the study of the underlying Brun dynamical system. The dominant constant of the analysis is related to the entropy of the system. We also compare this algorithm to another extension of the Euclid algorithm, proposed by Knuth, and already analyzed by the authors.
"Analysis of the Brun Gcd Algorithm", by V. Berthé, Loïck Lhote and B. Vallée. Proceedings of the ACM on International Symposium on Symbolic and Algebraic Computation, 2016-07-20. DOI: 10.1145/2930889.2930899.
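The multiple gcd algorithm described above is straightforward to state in code: perform a Euclidean division of the largest entry by the second largest, re-order, and repeat until only one nonzero entry remains. A minimal sketch for positive integers:

```python
def brun_gcd(entries):
    """Multiple gcd in the Brun style: Euclidean division of the largest
    entry by the second largest, followed by re-ordering."""
    v = sorted(entries, reverse=True)
    while v[1] != 0:        # stop once only one nonzero entry remains
        v[0] %= v[1]        # divide largest entry by second largest
        v.sort(reverse=True)
    return v[0]

print(brun_gcd([30, 18, 12]))   # → 6
print(brun_gcd([35, 21]))       # two entries: plain Euclid → 7
```

With exactly two entries the loop is the usual Euclid algorithm, matching the coincidence stated in the abstract.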
Can post-Schönhage-Strassen multiplication algorithms be competitive in practice for large input sizes? So far, the GMP library still outperforms all implementations of the recent, asymptotically more efficient algorithms for integer multiplication by Fürer, by De, Kurur, Saha and Saptharishi, and by ourselves. In this paper, we show how central ideas of our recent asymptotically fast algorithms turn out to be of practical interest for the multiplication of polynomials over finite fields of characteristic two. Our Mathemagix implementation is based on the automatic generation of assembly codelets. It outperforms existing implementations for large degrees, especially for polynomial matrix multiplication over finite fields.
"Fast Polynomial Multiplication over F_2^60", by David Harvey, J. van der Hoeven and Grégoire Lecerf. Proceedings of the ACM on International Symposium on Symbolic and Algebraic Computation, 2016-07-20. DOI: 10.1145/2930889.2930920.
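The basic object in the paper above, multiplication in F_2[x], can be emulated with plain integer arithmetic by encoding a polynomial as a bit string (bit i = coefficient of x^i): multiplication then becomes XOR-accumulation of shifts, since addition in F_2 is XOR. The fast implementations discussed in the paper rely on SIMD carry-less multiply instructions, which this schoolbook sketch only imitates:

```python
def f2_mul(a, b):
    """Multiply two polynomials over F_2, each encoded as a Python int
    with bit i holding the coefficient of x^i (carry-less product)."""
    r = 0
    while b:
        if b & 1:
            r ^= a      # add (XOR) the current shift of a
        a <<= 1
        b >>= 1
    return r

# (x + 1)^2 = x^2 + 1 over F_2: the cross terms 2x cancel.
print(bin(f2_mul(0b11, 0b11)))  # → 0b101
```

The absence of carries is exactly what makes characteristic two attractive for bit-level and vectorized implementations.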
Given several n-dimensional sequences, we first present an algorithm for computing a Gröbner basis of their module of linear recurrence relations. A P-recursive sequence (u_i)_{i ∈ N^n} satisfies linear recurrence relations with coefficients that are polynomial in i, as defined by Stanley in 1980. Calling the aforementioned algorithm directly on the tuple of sequences ((i^j u_i)_{i ∈ N^n})_j to retrieve the relations yields redundant relations. Since the module of relations of a P-recursive sequence also has the extra structure of a 0-dimensional right ideal of an Ore algebra, we design a more efficient algorithm that takes advantage of this structure when computing the relations. Finally, we show how to incorporate Gröbner basis computations in an Ore algebra K⟨t_1,...,t_n, x_1,...,x_n⟩, with commutators x_k x_l - x_l x_k = t_k t_l - t_l t_k = t_k x_l - x_l t_k = 0 for k ≠ l and t_k x_k - x_k t_k = x_k, into the algorithm designed for P-recursive sequences. This allows us to compute more quickly the elements of the Gröbner basis which lie in the ideal spanned by the first relations, as in the 2D/3D-space walk examples.
"Guessing Linear Recurrence Relations of Sequence Tuples and P-recursive Sequences with Linear Algebra", by Jérémy Berthomieu and J. Faugère. Proceedings of the ACM on International Symposium on Symbolic and Algebraic Computation, 2016-07-20. DOI: 10.1145/2930889.2930926.
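In one dimension, guessing a linear recurrence with constant coefficients reduces to solving a Hankel linear system in the sequence values; this is the linear-algebra viewpoint that the paper generalizes to several dimensions and to polynomial coefficients. A toy sketch with numpy (illustrative only, not the paper's algorithm):

```python
import numpy as np

def guess_recurrence(seq, order):
    """Fit c_0, ..., c_{order-1} in the ansatz
    seq[i + order] = sum_j c_j * seq[i + j], by least squares."""
    A = np.array([[seq[i + j] for j in range(order)]
                  for i in range(len(seq) - order)], dtype=float)
    b = np.array(seq[order:], dtype=float)
    c, *_ = np.linalg.lstsq(A, b, rcond=None)
    return c

fib = [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
print(guess_recurrence(fib, 2))  # ≈ [1. 1.], i.e. u_{n+2} = u_{n+1} + u_n
```

Guessing too low an order leaves the system inconsistent, while extra samples confirm the guess, the same redundancy-versus-certainty trade-off that motivates the structured algorithm above.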
I. Emiris, Angelos Mantzaflaris, Elias P. Tsigaridas
We bound the Boolean complexity of computing isolating hyperboxes for all complex roots of systems of bilinear polynomials. The resultant of such systems admits a family of determinantal Sylvester-type formulas, which we make explicit by means of homological complexes. The computation of the determinant of the resultant matrix is a bottleneck for the overall complexity. We exploit the quasi-Toeplitz structure to reduce the problem to efficient matrix-vector products, corresponding to multivariate polynomial multiplication. For zero-dimensional systems, we arrive at a primitive element and a rational univariate representation of the roots. The overall bit complexity of our probabilistic algorithm is O_B(n^4 D^4 + n^2 D^4 τ), where n is the number of variables, D equals the bilinear Bézout bound, and τ is the maximum coefficient bitsize. Finally, a careful infinitesimal symbolic perturbation of the system allows us to treat degenerate and positive-dimensional systems, thus making our algorithms and complexity analysis applicable to the general case.
"On the Bit Complexity of Solving Bilinear Polynomial Systems". Proceedings of the ACM on International Symposium on Symbolic and Algebraic Computation, 2016-07-20. DOI: 10.1145/2930889.2930919.
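The reduction mentioned in the abstract, from (quasi-)Toeplitz matrix-vector products to polynomial multiplication, rests on the identity y_i = sum_j t_{i-j} x_j: a Toeplitz product is a slice of a convolution. A small numpy sketch of the plain (dense, univariate) Toeplitz case:

```python
import numpy as np

def toeplitz_matvec(col, row, x):
    """Multiply the Toeplitz matrix with first column `col` and first row
    `row` (row[0] == col[0]) by x, via one convolution, i.e. one
    polynomial multiplication."""
    n = len(x)
    # diagonal values t_k for k = -(n-1), ..., 0, ..., n-1
    t = np.concatenate([np.asarray(row)[::-1], np.asarray(col)[1:]])
    return np.convolve(t, x)[n - 1:2 * n - 1]

col = np.array([1.0, 2.0, 3.0])          # T[i, 0]
row = np.array([1.0, 4.0, 5.0])          # T[0, j]
T = np.array([[1.0, 4.0, 5.0],
              [2.0, 1.0, 4.0],
              [3.0, 2.0, 1.0]])
x = np.array([1.0, 0.0, 2.0])
print(toeplitz_matvec(col, row, x), T @ x)   # identical results
```

Replacing the O(n^2) convolution by an FFT-based product gives the softly linear matrix-vector cost that the complexity bound above exploits.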
M. Bender, J. Faugère, Ludovic Perret, Elias P. Tsigaridas
Symmetric tensor decomposition is a major problem that arises in areas such as signal processing, statistics, data analysis and computational neuroscience. It is equivalent to writing a homogeneous polynomial in n variables of degree D as a sum of D-th powers of linear forms, using the minimal number of summands. This minimal number is called the rank of the polynomial/tensor. We consider the decomposition of binary forms, which corresponds to the decomposition of symmetric tensors of dimension 2 and order D. This problem has its roots in invariant theory, where the decompositions are known as canonical forms. As part of that theory, various algorithms were proposed for binary forms. In recent years, those algorithms were extended to the general symmetric tensor decomposition problem. We present a new randomized algorithm that enhances the previous approaches with results from structured linear algebra and techniques from linear recurrent sequences. It achieves a softly linear arithmetic complexity bound; to the best of our knowledge, the previously known algorithms have quadratic complexity bounds. We compute a symbolic minimal decomposition in O(M(D) log(D)) arithmetic operations, where M(D) is the complexity of multiplying two polynomials of degree D. We approximate the terms of the decomposition with an error of 2^(-ε) in O(D log^2(D) (log^2(D) + log(ε))) arithmetic operations. To bound the size of the representation of the coefficients involved in the decomposition, we bound the algebraic degree of the problem by min(rank, D-rank+1). When the input polynomial has integer coefficients, our algorithm performs, up to poly-logarithmic factors, O_B(D ℓ + D^4 + D^3 τ) bit operations, where τ is the maximum bitsize of the coefficients and 2^(-ℓ) is the relative error of the terms in the decomposition.
"A Superfast Randomized Algorithm to Decompose Binary Forms". Proceedings of the ACM on International Symposium on Symbolic and Algebraic Computation, 2016-07-20. DOI: 10.1145/2930889.2930896.
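The rank of a binary form, the quantity bounded above by min(rank, D-rank+1), can be read off generically from a catalecticant (Hankel) matrix of its coefficients, which is the structured-linear-algebra object behind such decompositions. A small numerical illustration (this detects the generic rank only; it does not produce the decomposition itself):

```python
import numpy as np

def catalecticant_rank(c):
    """Generic rank of the binary form sum_i binom(D, i) c_i x^(D-i) y^i,
    read from the most nearly square Hankel matrix of c_0, ..., c_D."""
    D = len(c) - 1
    rows = D // 2 + 1
    H = np.array([[c[i + j] for j in range(D - rows + 2)]
                  for i in range(rows)], dtype=float)
    return np.linalg.matrix_rank(H)

# (x + y)^3 + (x - 2y)^3 has rank 2; its scaled coefficients c_i are
# [2, -1, 5, -7] since the form equals 2x^3 - 3x^2 y + 15x y^2 - 7y^3.
print(catalecticant_rank([2.0, -1.0, 5.0, -7.0]))  # → 2
# (x + y)^3 alone has rank 1: c = [1, 1, 1, 1].
print(catalecticant_rank([1.0, 1.0, 1.0, 1.0]))    # → 1
```

The kernel of this Hankel matrix encodes a linear recurrence among the c_i, which is how techniques from linear recurrent sequences enter the superfast algorithm.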