Effective Nondeterministic Positive Definiteness Test for Unidiagonal Integral Matrices
Andrzej Mróz. SYNASC 2016, DOI: 10.1109/SYNASC.2016.023

For standard algorithms that verify positive definiteness of a matrix A ∈ Mn(R) via Sylvester's criterion, the computationally pessimistic case is the one in which A is positive definite. We present an algorithm performing the same task for A ∈ Mn(Z), for which the positive definite case is the optimistic one. The algorithm relies on performing certain edge transformations, called inflations, on the signed graph (bigraph) Δ = Δ(A) associated with A. We provide a few variants of the algorithm, including Las Vegas-type randomized ones (with a precisely described maximal number of steps). The algorithms work very well in practice, in many cases faster than the standard tests. Moreover, our results provide an interesting example of applying symbolic computing methods originally developed for different purposes, with considerable potential for further generalizations in matrix problems.
Automated Negotiation Framework for the Transport Logistics Service
Lucian Luncean, A. Mocanu, A. Becheru. SYNASC 2016, DOI: 10.1109/SYNASC.2016.066

This paper proposes the use of software agents to represent decision makers and simulate their actions. The agents operate in a previously devised multi-agent system whose goal is to automate the logistics brokering of freight transport. The focus of this paper is an initial design and implementation of the negotiation processes. To this end, the Iterated Contract Net negotiation protocol has been adapted for our particular purpose. The experiments we have conducted illustrate how agents with different personalities would behave in this scenario.
Tuning Logstash Garbage Collection for High Throughput in a Monitoring Platform
Dong Nguyen Doan, Gabriel Iuhasz. SYNASC 2016, DOI: 10.1109/SYNASC.2016.063

The collection and aggregation of monitoring data from distributed applications is an extremely important topic. The scale of these applications, such as those designed for Big Data, makes the performance of the services responsible for parsing and aggregating logs a key issue. Logstash is a well-known open-source framework for centralizing and parsing both structured and unstructured monitoring data. As with many parsing applications, throttling is a common issue when incoming data exceed Logstash's processing capacity. The conventional approach to improving performance is to increase the number of workers and the buffer size, but it is unclear whether this suffices when scaling to thousands of nodes. In this paper, by profiling the Java Virtual Machine, we tune garbage collection in a Logstash instance of the DICE monitoring platform to increase its throughput. We developed a Logstash shipper simulation tool, capable of simulating thousands of monitored nodes, to feed data to the Logstash instance. The results show that minimizing the garbage collection impact increases Logstash throughput considerably.
Parallel Simulations for Fractional-Order Systems
A. Baban, C. Bonchis, A. Fikl, F. Rosu. SYNASC 2016, DOI: 10.1109/SYNASC.2016.033

In this paper, we explore how numerical calculations can be accelerated by implementing several numerical methods for fractional-order systems using parallel computing techniques. We investigate the feasibility of parallel computing algorithms and their efficiency in reducing the computational cost over a large time interval. In particular, we present the case of the Adams-Bashforth-Moulton predictor-corrector method and measure the speedup of two parallel approaches, a GPU implementation and an HPC cluster implementation.
Bridging Two Communities to Solve Real Problems
Christopher W. Brown. SYNASC 2016, DOI: 10.1109/SYNASC.2016.015

This paper is an extended abstract of an invited talk of the same name, given at SYNASC 2016. It describes a case study of how ideas from computational logic (specifically Satisfiability Modulo Theories solving) provide new algorithms in symbolic computing. In particular, it describes how ideas from the NLSAT solver led to a new kind of Cylindrical Algebraic Decomposition.
A Duality-Aware Calculus for Quantified Boolean Formulas
Katalin Fazekas, M. Seidl, Armin Biere. SYNASC 2016, DOI: 10.1109/SYNASC.2016.038

Learning and backjumping are essential features in search-based decision procedures for Quantified Boolean Formulas (QBF). To obtain a better understanding of such procedures, we present a formal framework that allows one to reason simultaneously on prenex conjunctive and disjunctive normal forms. It captures both satisfying and falsifying search states in a symmetric way. This symmetry simplifies the framework and offers potential for further variants.
Parallel Integer Polynomial Multiplication
Changbo Chen, S. Covanov, Farnam Mansouri, M. M. Maza, Ning Xie, Yuzhen Xie. SYNASC 2016, DOI: 10.1109/SYNASC.2016.024

We propose a new algorithm for multiplying dense polynomials with integer coefficients in a parallel fashion, targeting multi-core processor architectures. Complexity estimates and experimental comparisons demonstrate the advantages of this new approach.
Handwritten Digit Recognition Using Rotations
A. Ignat, Bogdan Aciobanitei. SYNASC 2016, DOI: 10.1109/SYNASC.2016.054

Handwritten digit recognition is a subproblem of the well-known optical character recognition topic. In this work, we propose a new feature extraction method for offline handwritten digit recognition. The method combines basic image processing techniques, such as rotations and edge filtering, to extract digit characteristics. As classifiers, we use k-NN (k-Nearest Neighbors) and Support Vector Machines (SVM). The methods are tested on MNIST (Mixed National Institute of Standards and Technology), a commonly employed database of handwritten digit images, on which the classification rate exceeds 99%.
On Complexity of the Detection Problem for Bounded Length Polymorphic Viruses
Catalin-Valeriu Lita. SYNASC 2016, DOI: 10.1109/SYNASC.2016.064

The complexity of the virus detection problem has received much attention from researchers over the past years. Using a grammar-based formalism for polymorphic viruses, and building on the recognition problem for a fixed grammar, we provide two examples of polymorphic engines, corresponding to bounded-length viruses, whose reliable detection problems are NP-complete and PSPACE-complete, respectively. Thus, by giving an example of a fixed context-sensitive grammar whose recognition problem is PSPACE-complete, we show that the detection problem for bounded-length polymorphic viruses is not NP-complete as previously believed.
Gesture Recognition on Kinect Time Series Data Using Dynamic Time Warping and Hidden Markov Models
A. Călin. SYNASC 2016, DOI: 10.1109/SYNASC.2016.049

In this paper we analyse the variation in gesture recognition accuracy of several time series classifiers, based on input provided by two different sensors: Kinect for Xbox 360 (Kinect 1) and its improved, newer version, Kinect for Xbox One (Kinect 2). This work builds upon a previous study analysing classifiers' performance on pose recognition, considering multiple factors such as the machine learning methods applied, the sensors used for data collection, data interpretation, and sample size. For the classification of time series gestures, we analyse similar factors by constructing several one-hand gesture databases that are used to train and test the Dynamic Time Warping (DTW) and Hidden Markov Model (HMM) algorithms. We observed no significant difference in classification accuracy between the two sensors on time series data, although Kinect 2 performs better in pose recognition. Overall, DTW obtained the best accuracy on Kinect 1 time series data for datasets with fewer samples per class (about 15), with accuracy decreasing drastically as the number of samples per class grows (dropping from 97.8% to 66.6%). For HMM, however, the accuracy is similar or higher (between 90.7% and 94.9%) for databases with more samples per class (up to 90 entries) than for those with fewer, which makes it preferable in a dynamic system.