In this poster we present the results of [10]. We consider the problem of finding the common roots of a set of polynomial functions defining a zero-dimensional ideal I in a ring R of polynomials over C. We propose a general algebraic framework to find the solutions and to compute the structure of the quotient ring R/I from the cokernel of a resultant map. This leads to what we call Truncated Normal Forms (TNFs). Algorithms for generic dense and sparse systems follow from the classical resultant constructions. In the presented framework, the concept of a border basis is generalized by relaxing the conditions on the set of basis elements. This allows algorithms to adapt the choice of basis in order to enhance numerical stability, and we present such an algorithm. Numerical experiments show that the methods make it possible to compute all zeros of challenging systems (high degree, with a large number of solutions) in small dimensions with high accuracy.
{"title":"Truncated normal forms for solving polynomial systems","authors":"Simon Telen, B. Mourrain, M. Barel","doi":"10.1145/3313880.3313888","DOIUrl":"https://doi.org/10.1145/3313880.3313888","url":null,"abstract":"In this poster we present the results of [10]. We consider the problem of finding the common roots of a set of polynomial functions defining a zero-dimensional ideal I in a ring R of polynomials over C. We propose a general algebraic framework to find the solutions and to compute the structure of the quotient ring R/I from the cokernel of a resultant map. This leads to what we call Truncated Normal Forms (TNFs). Algorithms for generic dense and sparse systems follow from the classical resultant constructions. In the presented framework, the concept of a border basis is generalized by relaxing the conditions on the set of basis elements. This allows for algorithms to adapt the choice of basis in order to enhance the numerical stability. We present such an algorithm. The numerical experiments show that the methods allow to compute all zeros of challenging systems (high degree, with a large number of solutions) in small dimensions with high accuracy.","PeriodicalId":7093,"journal":{"name":"ACM Commun. Comput. Algebra","volume":"23 1","pages":"78-81"},"PeriodicalIF":0.0,"publicationDate":"2019-02-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88286255","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sparse interpolation from at least 2n uniformly spaced interpolation points t_j can be traced back to the exponential fitting method [MATH HERE] of de Prony from the 18th century [5]. Almost 200 years later, this basic problem was also reformulated as a generalized eigenvalue problem [8]. We generalize (1) to sparse interpolation problems of the form [MATH HERE] and some multivariate formulations thereof, from corresponding regular interpolation point patterns. Concurrently, we introduce the wavelet-inspired paradigm of dilation and translation for the analysis (2) of these complex-valued structured univariate or multivariate samples. The new method is the result of investigating how to solve ambiguity problems in exponential analysis, such as aliasing, which arises from too coarsely sampled data, or collisions, which may occur when handling projected data.
{"title":"A scale and shift paradigm for sparse interpolation in one and more dimensions","authors":"A. Cuyt, Wen-shin Lee","doi":"10.1145/3313880.3313887","DOIUrl":"https://doi.org/10.1145/3313880.3313887","url":null,"abstract":"Sparse interpolation from at least 2n uniformly spaced interpolation points tj can be traced back to the exponential fitting method\u0000 [MATH HERE]\u0000 of de Prony from the 18-th century [5]. Almost 200 years later this basic problem is also reformulated as a generalized eigenvalue problem [8]. We generalize (1) to sparse interpolation problems of the form\u0000 [MATH HERE]\u0000 and some multivariate formulations thereof, from corresponding regular interpolation point patterns. Concurrently we introduce the wavelet inspired paradigm of dilation and translation for the analysis (2) of these complex-valued structured univariate or multivariate samples. The new method is the result of a search on how to solve ambiguity problems in exponential analysis, such as aliasing which arises from too coarsely sampled data, or collisions which may occur when handling projected data.","PeriodicalId":7093,"journal":{"name":"ACM Commun. Comput. Algebra","volume":"19 1","pages":"75-77"},"PeriodicalIF":0.0,"publicationDate":"2019-02-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87250461","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The SymbolicData Project on testing and benchmarking Computer Algebra software grew out of the Special Session on Benchmarking at the 1998 ISSAC conference. Over 20 years we collected research data and meta information, developed a test framework along the "cross-cutting concerns" of modern software engineering, and experimented with semantic technologies as a building block of a modern distributed socio-technical research infrastructure in the area of Computer Algebra. This paper presents a comprehensive survey of the most important motivations, concepts, steps, efforts and practical achievements of the SymbolicData Project in contributing to the formation of such a research infrastructure.
{"title":"20 Years SymbolicData","authors":"Hans-Gert Gräbe","doi":"10.1145/3313880.3313881","DOIUrl":"https://doi.org/10.1145/3313880.3313881","url":null,"abstract":"The SymbolicData Project on testing an benchmarking Computer Algebra software grew up from the Special Session on Benchmarking at the 1998 ISSAC conference. During 20 years we collected reserach data and meta information, developed a test framework along the \"cross cutting concerns\" of modern software engineering and experimented with semantic technologies as a building block of a modern distributed socio-technical research infrastructure in the area of Computer Algebra. This paper presents a comprehensive survey of the most important motivations, concepts, steps, efforts and practical achievements of the SymbolicData Project to contribute to the formation of such a research infrastructure.","PeriodicalId":7093,"journal":{"name":"ACM Commun. Comput. Algebra","volume":"11 3 1","pages":"45-54"},"PeriodicalIF":0.0,"publicationDate":"2019-02-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81118845","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
J. Caravantes, M. Fioravanti, L. González-Vega, G. Díaz-Toca
A new determinantal representation of the implicit equation for offsets to non-degenerate conics and quadrics is introduced, which is especially well suited for intersection purposes.
{"title":"Offsets to conics and quadrics: a new determinantal representation for their implicit equation","authors":"J. Caravantes, M. Fioravanti, L. González-Vega, G. Díaz-Toca","doi":"10.1145/3313880.3313890","DOIUrl":"https://doi.org/10.1145/3313880.3313890","url":null,"abstract":"A new determinantal presentation of the implicit equation for offsets to non degenerate conics and quadrics is introduced which is specially well suited for intersection purposes.","PeriodicalId":7093,"journal":{"name":"ACM Commun. Comput. Algebra","volume":"65 1","pages":"85-88"},"PeriodicalIF":0.0,"publicationDate":"2019-02-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77401618","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Let K be a field of characteristic zero, x an independent variable, E the shift operator with respect to x, i.e., Ef(x) = f(x + 1) for an arbitrary f(x). Recall that a nonzero expression F(x) is called a hypergeometric term over K if there exists a rational function r(x) ∈ K(x) such that F(x + 1)/F(x) = r(x). Usually r(x) is called the rational certificate of F(x). The problem of indefinite hypergeometric summation (anti-differencing) is: given a hypergeometric term F(x), find a hypergeometric term G(x) which satisfies the first-order linear difference equation (E − 1)G(x) = F(x). (1) If found, write Σ_x F(x) = G(x) + c, where c is an arbitrary constant.
{"title":"Accelerating indefinite hypergeometric summation algorithms","authors":"E. Zima","doi":"10.1145/3313880.3313893","DOIUrl":"https://doi.org/10.1145/3313880.3313893","url":null,"abstract":"Let K be a field of characteristic zero, <i>x</i> an independent variable, <i>E</i> the shift operator with respect to <i>x,</i> i.e., <i>Ef</i>(<i>x</i>) = <i>f</i>(<i>x</i> + 1) for an arbitrary <i>f</i>(<i>x</i>). Recall that a nonzero expression <i>F</i>(<i>x</i>) is called a hypergeometric term over K if there exists a rational function <i>r</i>(<i>x</i>) ∈ K(<i>x</i>) such that <i>F</i>(<i>x</i> + 1)/<i>F</i>(<i>x</i>) = <i>r</i>(<i>x</i>). Usually <i>r</i>(<i>x</i>) is called the rational <i>certificate</i> of <i>F</i>(<i>x</i>). The problem of indefinite hypergeometric summation (anti-differencing) is: given a hypergeometric term <i>F</i>(<i>x</i>), find a hypergeometric term <i>G</i>(<i>x</i>) which satisfies the first order linear difference equation\u0000 (<i>E</i> − 1)<i>G</i>(<i>x</i>) = <i>F</i>(<i>x</i>). (1)\u0000 If found, write Σ<i><sub>x</sub></i> <i>F</i>(<i>x</i>) = <i>G</i>(<i>x</i>) + <i>c</i>, where <i>c</i> is an arbitrary constant.","PeriodicalId":7093,"journal":{"name":"ACM Commun. Comput. Algebra","volume":"108 1","pages":"96-99"},"PeriodicalIF":0.0,"publicationDate":"2019-02-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79217614","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
When a pairing e : G_1 x G_2 → G_T on an elliptic curve E defined over F_q is exploited in a cryptographic protocol, there is often the need to hash binary strings into G_1 and G_2. Traditionally, if E admits a twist Ẽ of order d, then G_1 = E(F_q) ⋂ E[r], where r is a prime integer, and G_2 = Ẽ(F_{q^{k/d}}) ⋂ Ẽ[r], where k is the embedding degree of E w.r.t. r. The standard approach for hashing a binary string into G_1 and G_2 is to map it to general points P ∈ E(F_q) and P′ ∈ Ẽ(F_{q^{k/d}}), and then multiply them by the cofactors c = #E(F_q)/r and c′ = #Ẽ(F_{q^{k/d}})/r respectively. Usually, the multiplication by c′ is computationally expensive. In order to speed up this computation, two different methods (by Scott et al. and by Fuentes et al.) have been proposed. In this poster we consider these two methods for BLS pairing-friendly curves with k ∈ {12, 24, 30, 42, 48}, providing efficiency comparisons. For k = 42, 48, the Fuentes et al. method requires an expensive one-off precomputation which was infeasible with the computational power at our disposal. In these cases, we theoretically obtain hashing maps that follow the idea of Fuentes et al.
{"title":"Hashing to G2 on BLS pairing-friendly curves","authors":"Alessandro Budroni, Federico Pintore","doi":"10.1145/3313880.3313884","DOIUrl":"https://doi.org/10.1145/3313880.3313884","url":null,"abstract":"When a pairing <i>e</i> : G<sub>1</sub> x G<sub>2</sub> → G<sub>T</sub>, on an elliptic curve <i>E</i> defined over F<sub>q</sub>, is exploited in a cryptographic protocol, there is often the need to hash binary strings into G<sub>1</sub> and G<sub>2</sub>. Traditionally, if <i>E</i> admits a twist Ẽ of order <i>d,</i> then G<sub>1</sub> = <i>E</i>(F<sub><i>q</i></sub>)⋂<i>E</i>[<i>r</i>], where <i>r</i> is a prime integer, and G<sub>2</sub> = Ẽ(F<i><sub>q</sub><sup>k/d</sup></i>)⋂<i>Ẽ</i>[<i>r</i>], where <i>k</i> is the embedding degree of <i>E</i> w.r.t. r. The standard approach for hashing a binary string into G<sub>1</sub> and G<sub>2</sub> is to map it to general points <i>P∈E</i>(<i>F<sub>q</sub></i>) and <i>P′ ∈ Ẽ</i>(F<i><sub>q</sub><sup>k/d</sup></i>), and then multiply them by the cofactors <i>c</i> = <i>#E</i>(F<i><sub>q</sub></i>)/<i>r</i> and <i>c</i>′ = <i>#Ẽ</i>(F<i><sub>q</sub><sup>k/d</sup></i>)/<i>r</i> respectively. Usually, the multiplication by c′ is computationally expensive. In order to speed up such a computation, two different methods (by Scott <i>et al.</i> and by Fuentes <i>et al.</i>) have been proposed. In this poster we consider these two methods for BLS pairing-friendly curves having <i>k</i> ∈ {12, 24, 30, 42,48}, providing efficiency comparisons. When <i>k</i> = 42,48, the Fuentes <i>et al.</i> method requires an expensive one-off pre-computation which was infeasible for the computational power at our disposal. In these cases, we theoretically obtain hashing maps that follow Fuentes <i>et al.</i> idea.","PeriodicalId":7093,"journal":{"name":"ACM Commun. Comput. Algebra","volume":"22 1","pages":"63-66"},"PeriodicalIF":0.0,"publicationDate":"2019-02-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80050498","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We establish a connection between the hypergeometric solutions of a first-order linear recurrence system and the determinant of the system matrix. This enables us to find hypergeometric solutions for systems in a way similar to the scalar case. Our result works both in the single basic and in the multibasic case.
{"title":"Towards a direct method for finding hypergeometric solutions of linear first order recurrence systems","authors":"J. Middeke, Carsten Schneider","doi":"10.1145/3313880.3313891","DOIUrl":"https://doi.org/10.1145/3313880.3313891","url":null,"abstract":"We establish a connection between the hypergeometric solutions of a first order linear recurrence systems and the determinant of the system matrix. This enables us to find hypergeometric solutions for systems in a way similar to the scalar case. Our result works in the in the single basic and in the multibasic case.","PeriodicalId":7093,"journal":{"name":"ACM Commun. Comput. Algebra","volume":"33 1","pages":"89-91"},"PeriodicalIF":0.0,"publicationDate":"2019-02-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89077644","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bresinsky defined a class of monomial curves in A^4 with the property that the minimal number of generators (the first Betti number) of the defining ideal is unbounded above. We prove that the same unboundedness holds for all the Betti numbers and construct an explicit minimal free resolution for this class. We also propose a general construction of such curves in arbitrary embedding dimension.
{"title":"Unboundedness of Betti numbers of curves","authors":"R. Mehta, Joydip Saha, I. Sengupta","doi":"10.1145/3313880.3313895","DOIUrl":"https://doi.org/10.1145/3313880.3313895","url":null,"abstract":"Bresinsky defined a class of monomial curves in A4 with the property that the minimal number of generators or the first Betti number of the defining ideal is unbounded above. We prove that the same behaviour of unboundedness is true for all the Betti numbers and construct an explicit minimal free resolution for this class. We also propose a general construction of such curves in arbitrary embedding dimension.","PeriodicalId":7093,"journal":{"name":"ACM Commun. Comput. Algebra","volume":"108 3 1","pages":"104-107"},"PeriodicalIF":0.0,"publicationDate":"2019-02-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79418214","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Combining methods from satisfiability checking with methods from symbolic computation promises to solve challenging problems in various areas of theory and application. We look at the basically equivalent problem of proving statements directly in a non-clausal setting, when additional information on the underlying domain is available in the form of specific properties and algorithms. We demonstrate on a concrete example several heuristic techniques for the automation of natural-style proving of statements from elementary analysis. The purpose of this work in progress is to generate proofs similar to those produced by humans, by combining automated reasoning methods with techniques from computer algebra. Our techniques include: the S-decomposition method for formulae with alternating quantifiers, quantifier elimination by cylindrical algebraic decomposition, analysis of the behaviour of terms at zero, bounding the ε-bounds, rewriting of expressions involving absolute value, algebraic manipulations, and identification of equal terms under unknown functions. These techniques are being implemented in the Theorema system and are able to construct automatically natural-style proofs for numerous examples, including convergence of sequences, limits and continuity of functions, uniform continuity, and others.
{"title":"Techniques for natural-style proofs in elementary analysis","authors":"T. Jebelean","doi":"10.1145/3313880.3313892","DOIUrl":"https://doi.org/10.1145/3313880.3313892","url":null,"abstract":"Combining methods from satisfiability checking with methods from symbolic computation promises to solve challenging problems in various areas of theory and application. We look at the basically equivalent problem of proving statements directly in a non-clausal setting, when additional information on the underlying domain is available in form of specific properties and algorithms. We demonstrate on a concrete example several heuristic techniques for the automation of natural-style proving of statements from elementary analysis. The purpose of this work in progress is to generate proofs similar to those produced by humans, by combining automated reasoning methods with techniques from computer algebra. Our techniques include: the S-decomposition method for formulae with alternating quantifiers, quantifier elimination by cylindrical algebraic decomposition, analysis of terms behaviour in zero, bounding the ∈-bounds, rewriting of expressions involving absolute value, algebraic manipulations, and identification of equal terms under unknown functions. These techniques are being implemented in the Theorema system and are able to construct automatically natural-style proofs for numerous examples including: convergence of sequences, limits and continuity of functions, uniform continuity, and other.","PeriodicalId":7093,"journal":{"name":"ACM Commun. Comput. Algebra","volume":"18 1","pages":"92-95"},"PeriodicalIF":0.0,"publicationDate":"2019-02-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73343614","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We introduce homogenized funtf (finite unit norm tight frame) varieties and study the degrees of their coordinate projections. These varieties compactify the affine funtf variety differently from the projectivizations studied in [12]; however, each is the Zariski closure of the set of finite unit norm tight frames. Our motivation comes from studying the algebraic frame completion problem.
{"title":"Homogenized funtf varieties and algebraic frame completion","authors":"Cameron Farnsworth, J. Rodriguez","doi":"10.1145/3313880.3313896","DOIUrl":"https://doi.org/10.1145/3313880.3313896","url":null,"abstract":"We introduce homogenized funtf (finite tight unit norm frames) varieties and study the degrees of their coordinate projections. These varieties compactify the affine funtf variety differently from the projectivizations studied in [12]. However, each are the closures (Zariski) of the set of finite tight unit norm frames. Our motivation comes from studying the algebraic frame completion problem.","PeriodicalId":7093,"journal":{"name":"ACM Commun. Comput. Algebra","volume":"1932 1","pages":"108-111"},"PeriodicalIF":0.0,"publicationDate":"2019-02-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91168867","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}