For standard algorithms that verify positive definiteness of a matrix A ∈ Mn(R) based on Sylvester’s criterion, the computationally pessimistic case is the one in which A is positive definite. We present an algorithm performing the same task for A ∈ Mn(Z), for which the case of a positive definite A is the optimistic one. The algorithm relies on performing certain edge transformations, called inflations, on the signed graph (bigraph) Δ = Δ(A) associated with A. We provide a few variants of the algorithm, including randomized Las Vegas variants (with a precisely described maximal number of steps). The algorithms work very well in practice, in many cases faster than the standard tests. Moreover, our results provide an interesting example of an application of symbolic computing methods originally developed for different purposes, with considerable potential for further generalizations to matrix problems.
Andrzej Mróz, "Effective Nondeterministic Positive Definiteness Test for Unidiagonal Integral Matrices," in 2016 18th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (SYNASC), Sept. 2016. doi:10.1109/SYNASC.2016.023
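For contrast with the inflation approach, the standard Sylvester-criterion test the abstract refers to can be sketched as follows. This is a generic illustration in Python (the function names are ours, not the paper's), using exact rational arithmetic so the check remains correct for integer matrices:

```python
from fractions import Fraction

def leading_minor(A, k):
    # Determinant of the leading k-by-k submatrix via Gaussian elimination
    M = [[Fraction(A[i][j]) for j in range(k)] for i in range(k)]
    det = Fraction(1)
    for c in range(k):
        # Find a nonzero pivot in column c
        p = next((r for r in range(c, k) if M[r][c] != 0), None)
        if p is None:
            return Fraction(0)
        if p != c:
            M[c], M[p] = M[p], M[c]
            det = -det                      # row swap flips the sign
        det *= M[c][c]
        for r in range(c + 1, k):
            f = M[r][c] / M[c][c]
            for j in range(c, k):
                M[r][j] -= f * M[c][j]
    return det

def is_positive_definite(A):
    # Sylvester: A is positive definite iff every leading principal minor is > 0.
    # When A *is* positive definite, all n minors must be computed before the
    # test can answer -- the "pessimistic" case the abstract refers to.
    return all(leading_minor(A, k) > 0 for k in range(1, len(A) + 1))
```

Note that a single non-positive minor lets the test exit early, so indefinite matrices are the cheap case here, which is exactly the asymmetry the paper's algorithm inverts.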
This paper proposes the use of software agents to represent decision makers and simulate their actions. The agents operate in a previously devised multi-agent system whose goal is to automate the logistics brokering of freight transport. The focus of this paper is an initial design and implementation of the negotiation processes. To this end, the Iterated Contract Net negotiation protocol has been adapted for our particular purpose. The experiments we have conducted illustrate how agents with different personalities would behave in this scenario.
Lucian Luncean, A. Mocanu, A. Becheru, "Automated Negotiation Framework for the Transport Logistics Service," in 2016 18th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (SYNASC), Sept. 2016. doi:10.1109/SYNASC.2016.066
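A minimal, purely illustrative sketch of an iterated contract-net-style loop with personality-driven bidding (all class names, parameters, and the concession formula are hypothetical, not the authors' implementation):

```python
class CarrierAgent:
    """Toy carrier with a 'personality': how aggressively it discounts its margin."""
    def __init__(self, name, base_cost, aggressiveness):
        self.name = name
        self.base_cost = base_cost
        self.aggressiveness = aggressiveness  # 0 = holds its margin, 1 = bids at cost

    def propose(self, round_no):
        # Concede a little more each round, scaled by personality.
        margin = 0.3 * (1 - self.aggressiveness) / (1 + round_no)
        return self.base_cost * (1 + margin)

def iterated_contract_net(agents, reserve_price, max_rounds=5):
    """Broker issues a call for proposals each round and awards the job to the
    cheapest bid that meets the reserve price; otherwise it iterates."""
    for r in range(max_rounds):
        bids = {a.name: a.propose(r) for a in agents}
        best = min(bids, key=bids.get)
        if bids[best] <= reserve_price:
            return best, bids[best]
    return None, None
```

In the real protocol the call for proposals, proposals, and accept/reject messages are explicit FIPA-style messages; the loop above only mirrors the control flow.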
The collection and aggregation of monitoring data from distributed applications is an extremely important topic. The scale of these applications, such as those designed for Big Data, makes the performance of the services responsible for parsing and aggregating logs a key issue. Logstash is a well-known open source framework for centralizing and parsing both structured and unstructured monitoring data. As with many parsing applications, throttling is a common issue, caused by the incoming data exceeding Logstash's processing ability. The conventional approach to improving performance usually entails increasing the number of workers as well as the buffer size. However, it is unknown whether these approaches can still tackle the issue when scaling to thousands of nodes. In this paper, by profiling the Java Virtual Machine, we optimize garbage collection in order to fine-tune a Logstash instance in the DICE monitoring platform and increase its throughput. A Logstash shipper simulation tool, capable of simulating thousands of monitored nodes, was developed to transfer simulated data to the Logstash instance. The obtained results show that, with our suggestions for minimizing the garbage collection impact, the Logstash throughput increases considerably.
Dong Nguyen Doan, Gabriel Iuhasz, "Tuning Logstash Garbage Collection for High Throughput in a Monitoring Platform," in 2016 18th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (SYNASC), Sept. 2016. doi:10.1109/SYNASC.2016.063
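The kind of JVM-level tuning studied here is typically applied through Logstash's `LS_JAVA_OPTS` environment variable or its `jvm.options` file; the flags below are a generic illustration of that style of configuration (fixed heap sizing plus the G1 collector), not the paper's measured settings:

```shell
# Illustrative only: setting -Xms equal to -Xmx avoids heap-resizing pauses,
# and G1 is selected with a soft pause-time target (JDK 8-era flags).
export LS_JAVA_OPTS="-Xms4g -Xmx4g -XX:+UseG1GC -XX:MaxGCPauseMillis=200"
bin/logstash -f pipeline.conf
```

Whether such settings actually raise throughput depends on the workload, which is why the paper arrives at its values by profiling rather than by rule of thumb.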
In this paper, we explore how numerical calculations can be accelerated by implementing several numerical methods for fractional-order systems using parallel computing techniques. We investigate the feasibility of parallel computing algorithms and their efficiency in reducing the computational costs over a large time interval. In particular, we present the case of the Adams-Bashforth-Moulton predictor-corrector method and measure the speedup of two parallel approaches using GPU and HPC cluster implementations.
A. Baban, C. Bonchis, A. Fikl, F. Rosu, "Parallel Simulations for Fractional-Order Systems," in 2016 18th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (SYNASC), Sept. 2016. doi:10.1109/SYNASC.2016.033
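As background, the integer-order predictor-corrector (PECE) idea underlying the fractional Adams-Bashforth-Moulton scheme can be sketched in a few lines; the fractional variant replaces each step with a weighted sum over the entire solution history, and that history sum is the expensive, parallelizable part. A minimal sketch (ours, not the paper's code):

```python
def abm_pece(f, t0, y0, h, n):
    """One-step predictor-corrector (PECE): forward-Euler predictor followed
    by a trapezoidal Adams-Moulton corrector, for y' = f(t, y).
    The fractional-order scheme the paper parallelizes generalizes this:
    each step becomes a convolution over *all* previous f-values."""
    t, y = t0, y0
    for _ in range(n):
        fp = f(t, y)                               # Evaluate
        y_pred = y + h * fp                        # Predict
        y = y + h / 2 * (fp + f(t + h, y_pred))    # Correct + Evaluate
        t += h
    return y
```

For y' = y on [0, 1] with h = 10^-3 this reproduces e to several decimal places; in the fractional setting the per-step cost grows with the step index, which is why the paper's GPU and cluster implementations pay off over large time intervals.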
This paper is an extended abstract of an invited talk of the same name, given at SYNASC 2016. It describes a sort of case study of how ideas from computational logic (specifically Satisfiability Modulo Theory solving) provide new algorithms in symbolic computing. In particular, it describes how ideas from the NLSAT solver led to a new kind of Cylindrical Algebraic Decomposition.
Christopher W. Brown, "Bridging Two Communities to Solve Real Problems," in 2016 18th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (SYNASC), Sept. 2016. doi:10.1109/SYNASC.2016.015
Learning and backjumping are essential features in search-based decision procedures for Quantified Boolean Formulas (QBF). To obtain a better understanding of such procedures, we present a formal framework that allows one to reason simultaneously on prenex conjunctive and disjunctive normal forms. It captures both satisfying and falsifying search states in a symmetric way. This symmetry simplifies the framework and offers potential for further variants.
Katalin Fazekas, M. Seidl, Armin Biere, "A Duality-Aware Calculus for Quantified Boolean Formulas," in 2016 18th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (SYNASC), Sept. 2016. doi:10.1109/SYNASC.2016.038
Handwritten digit recognition is a subproblem of the well-known optical character recognition topic. In this work, we propose a new feature extraction method for offline handwritten digit recognition. The method combines basic image processing techniques such as rotations and edge filtering in order to extract digit characteristics. As classifiers, we use k-NN (k Nearest Neighbors) and Support Vector Machines (SVM). The methods are tested on MNIST (Modified National Institute of Standards and Technology), a commonly employed database of handwritten digit images, on which the classification rate is over 99%.
A. Ignat, Bogdan Aciobanitei, "Handwritten Digit Recognition Using Rotations," in 2016 18th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (SYNASC), Sept. 2016. doi:10.1109/SYNASC.2016.054
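A toy sketch of the rotation-plus-edge-filtering idea paired with a k-NN classifier, in pure Python on list-of-lists images (the paper's actual feature pipeline, filters, and parameters differ; everything here is illustrative):

```python
from collections import Counter
import math

def rotate90(img):
    # 90-degree counterclockwise-style rotation of a 2D list-of-lists image
    return [list(row) for row in zip(*img[::-1])]

def edge_profile(img):
    # Crude horizontal-gradient "edge" feature: per-row sum of |I[i][j+1] - I[i][j]|
    return [sum(abs(row[j + 1] - row[j]) for j in range(len(row) - 1)) for row in img]

def features(img):
    # Concatenate edge profiles of the image under 0/90/180/270-degree rotations
    feats, cur = [], img
    for _ in range(4):
        feats.extend(edge_profile(cur))
        cur = rotate90(cur)
    return feats

def knn_predict(train, query, k=3):
    # train: list of (feature_vector, label); majority vote among k nearest
    nearest = sorted(train, key=lambda fl: math.dist(fl[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]
```

Rotating before filtering makes a single directional filter respond to edges in several orientations, which is the intuition the abstract's combination of rotations and edge filtering relies on.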
The complexity of the virus detection problem has received much attention from researchers in recent years. Using a grammar-based formalism for polymorphic viruses, and based on the recognition problem for a fixed grammar, we provide two examples of polymorphic engines, corresponding to bounded-length viruses, whose reliable detection problem is NP-complete and PSPACE-complete, respectively. Thus, by giving an example of a fixed context-sensitive grammar whose recognition problem is PSPACE-complete, we show that the detection problem for bounded-length polymorphic viruses is not NP-complete as previously believed.
Catalin-Valeriu Lita, "On Complexity of the Detection Problem for Bounded Length Polymorphic Viruses," in 2016 18th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (SYNASC), Sept. 2016. doi:10.1109/SYNASC.2016.064
We propose a new algorithm for multiplying dense polynomials with integer coefficients in a parallel fashion, targeting multi-core processor architectures. Complexity estimates and experimental comparisons demonstrate the advantages of this new approach.
Changbo Chen, S. Covanov, Farnam Mansouri, M. M. Maza, Ning Xie, Yuzhen Xie, "Parallel Integer Polynomial Multiplication," in 2016 18th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (SYNASC), Sept. 2016. doi:10.1109/SYNASC.2016.024
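The authors' algorithm is considerably more sophisticated, but the basic opportunity for parallelism, namely that the output coefficients of a dense product are independent of each other, can be sketched as follows (a naive quadratic baseline, not the paper's method):

```python
from concurrent.futures import ThreadPoolExecutor

def _coeff(a, b, k):
    # k-th coefficient of the product: sum of a[i] * b[k-i] over valid indices
    lo, hi = max(0, k - len(b) + 1), min(k, len(a) - 1)
    return sum(a[i] * b[k - i] for i in range(lo, hi + 1))

def parallel_poly_mul(a, b, workers=4):
    """Multiply dense integer polynomials given as coefficient lists
    (lowest degree first) by computing each output coefficient independently.
    Threads here only illustrate the work decomposition; real multi-core
    speedups require native code and asymptotically faster algorithms."""
    n = len(a) + len(b) - 1
    with ThreadPoolExecutor(max_workers=workers) as ex:
        return list(ex.map(lambda k: _coeff(a, b, k), range(n)))
```

For example, (1 + 2x + 3x^2)(4 + 5x) yields the coefficient list [4, 13, 22, 15].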
The social learning process of birds and fishes inspired the development of the heuristic Particle Swarm Optimization (PSO) search algorithm. The advancement of Graphics Processing Units (GPUs) and the Compute Unified Device Architecture (CUDA) platform plays a significant role in reducing the computational time in search algorithm development. This paper presents an efficient implementation of Standard Particle Swarm Optimization (SPSO) on a GPU based on the CUDA architecture, using coalesced memory access. The algorithm is evaluated on a suite of well-known benchmark optimization functions. The experiments are performed on an NVIDIA GeForce GTX 980 GPU and a single core of a 3.20 GHz Intel Core i5 4570 CPU, and the test results demonstrate that the GPU algorithm runs up to about 46 times faster than the corresponding CPU algorithm. Therefore, the proposed algorithm can be used to reduce the time required to solve optimization problems.
M. M. Hussain, H. Hattori, N. Fujimoto, "A CUDA Implementation of the Standard Particle Swarm Optimization," in 2016 18th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (SYNASC), Sept. 2016. doi:10.1109/SYNASC.2016.043
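A minimal sequential reference implementation of the PSO update equations helps locate what the CUDA version parallelizes: the per-particle velocity, position, and fitness updates in the inner loop are independent and map naturally onto GPU threads. All parameter values below are conventional defaults, not the paper's settings:

```python
import random

def pso(f, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0),
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal sequential Standard PSO minimizing f over a box.
    w is the inertia weight; c1/c2 weight the pull toward each particle's
    personal best and the swarm's global best, respectively."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):      # this loop is the GPU-parallel part
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

The coalesced memory access the paper emphasizes comes from laying out `pos` and `vel` so that consecutive GPU threads read consecutive addresses, a concern that simply does not arise in this CPU sketch.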