In this paper we analyse the variation in gesture recognition accuracy of several time series classifiers, based on input provided by two different sensors: Kinect for XBox 360 (Kinect 1) and its improved, newer version, Kinect for XBox One (Kinect 2). This work builds upon a previous study analysing classifiers' performance on pose recognition, considering multiple factors such as the machine learning methods applied, the sensors used for data collection, and the data interpretation and sample size. For the classification of time series gestures, we analyse similar factors by constructing several one-hand gesture databases that are used to train and test the Dynamic Time Warping (DTW) and Hidden Markov Models (HMM) algorithms. We observed no significant difference in classification accuracy between the results obtained with the two sensors on time series data, although Kinect 2 performs better in pose recognition. Overall, DTW obtained the best accuracy for Kinect 1 time series data on datasets with fewer samples per class (about 15), with accuracy decreasing drastically as the number of samples per class increases (dropping from 97.8% to 66.6%). For HMM, however, the accuracy is similar or higher (between 90.7% and 94.9%) for databases with more samples per class (up to 90 entries) than for those with fewer, which makes it preferable for use in a dynamic system.
{"title":"Gesture Recognition on Kinect Time Series Data Using Dynamic Time Warping and Hidden Markov Models","authors":"A. Călin","doi":"10.1109/SYNASC.2016.049","DOIUrl":"https://doi.org/10.1109/SYNASC.2016.049","url":null,"abstract":"In this paper we analyse the variation of the gesture recognition accuracy of several time series classifiers, based on input provided by two different sensors: Kinect for XBox 360 (Kinect 1) and its improved, newer version, Kinect for XBox One (Kinect 2). This work builds upon a previous study analysing classifiers' performance on pose recognition, considering multiple factors, such as the machine learning methods applied, the sensors used for data collection, as well as data interpretation and sample size. As for the classification of time series gestures, we analyse similar factors, by constructing several one-hand gesture databases that are used to train and test the Dynamic Time Warping (DTW) and Hidden Markov Models (HMM) algorithms. We observed no significant difference in classification accuracy between the results obtained with the two sensors on time series data, although Kinect 2 performs better in pose recognition. Overall, DTW obtained the best accuracy for Kinect 1 time series data, on datasets with fewer samples per class (about 15), the accuracy decreasing drastically with the increase of the number of samples for each class (from 97.8% drops to 66.6%). However, for HMM the accuracy is similar or higher (between 90.7% and 94.9%) for databases with more samples per class (up to 90 entries) than for those with fewer, which makes it preferable to use in a dynamic system.","PeriodicalId":268635,"journal":{"name":"2016 18th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (SYNASC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130632165","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents a Hybrid Clonal Selection based method for generating healthy meals starting from a given user request, a diet recommendation, and a set of food offers. The proposed method is based on a hybrid model, which consists of one core component and two hybridization components. The core component uses the CLONALG algorithm. One of the hybridization components is based on flower pollination, whereas the other uses tabu search and reinforcement learning. The flower pollination component modifies the generated clones, while the tabu search and reinforcement learning component aims to improve the search capabilities of the core component by means of long-term and short-term memory structures. We integrated our method into an experimental prototype and evaluated it on different older adult profiles.
{"title":"Hybrid Immune Based Method for Generating Healthy Meals for Older Adults","authors":"V. Chifu, I. Salomie, Laura Petrisor, E. Chifu, Dorin Moldovan","doi":"10.1109/SYNASC.2016.047","DOIUrl":"https://doi.org/10.1109/SYNASC.2016.047","url":null,"abstract":"This paper presents a Hybrid Clonal Selection based method for generating healthy meals as starting from a given user request, a diet recommendation, and a set of food offers. The method proposed is based on a hybrid model, which consists of one core component and two hybridization components. The core component uses the CLONAG algorithm. One of the hybridization components is based on flower pollination, whereas the other utilizes tabu search and reinforcement learning. The flower pollination component is used for modifying the generated clones, while the tabu search and reinforcement learning component aims to improve the search capabilities of the core component by means of long-term and short-term memory structures. We integrated our method into an experimental prototype and we evaluated it on different older adult profiles.","PeriodicalId":268635,"journal":{"name":"2016 18th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (SYNASC)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131938350","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Irrelevance, a notion first put forward by this author jointly with A. Sgarro, is a convenient tool for speeding up computations in the arithmetic of interactive fuzzy numbers. In this paper we investigate what happens when the fuzzy quantities under consideration are incomplete, or sub-normal, that is, when a fuzzy quantity is allowed to be "cut" at a height h less than 1. We motivate why we deem it important to extend fuzzy arithmetic to fuzzy quantities which may be incomplete, and we show that irrelevance remains a convenient tool. Interactivity is described by suitable monotone joins, which generalize t-norms.
{"title":"Irrelevance in Incomplete Fuzzy Arithmetic","authors":"Laura Franzoi","doi":"10.1109/SYNASC.2016.052","DOIUrl":"https://doi.org/10.1109/SYNASC.2016.052","url":null,"abstract":"Irrelevance, a notion which was first put forward by this author jointly with A. Sgarro, is a convenient tool to speed up computations in the arithmetic of interactive fuzzy numbers. In this paper we are trying to understand what happens if the fuzzy quantities one is considering are incomplete, or sub-normal, that is if one allows that a fuzzy quantity is \"cut\" at a height h which is less than 1. We motivate the reasons why we deem it important to extend fuzzy arithmetic to fuzzy quantities which may be incomplete, and we show that irrelevance keeps proving a convenient tool. Interactivity is described by suitable monotone joins, which generalize t-norms.","PeriodicalId":268635,"journal":{"name":"2016 18th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (SYNASC)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125359624","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper investigates a novel update rule for multi-state Cellular Automata (CA) in the context of greyscale image segmentation. The update rule is parameterized and takes into account the features of neighbouring cells compared to the features of the current cell. We use the resulting CA to segment several real-world images. During this process we also study the influence of the rule parameters and neighbourhood scheme using different evaluation measures.
{"title":"Parameterized Cellular Automata in Image Segmentation","authors":"A. Andreica, L. Dioşan, I. Voiculescu","doi":"10.1109/SYNASC.2016.040","DOIUrl":"https://doi.org/10.1109/SYNASC.2016.040","url":null,"abstract":"This paper investigates a novel update rule formulti–state Cellular Automata (CA) in the context of greyscaleimage segmentation. The update rule is parameterized and takesinto account the features of neighbouring cells compared to thefeatures of the current cell. We use the resulting CA to segmentseveral real–world images. During this process we also studythe influence of the rule parameters and neighbourhood schemeusing different evaluation measures.","PeriodicalId":268635,"journal":{"name":"2016 18th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (SYNASC)","volume":"66 1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123187919","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper proposes a parallel hybrid heuristic aimed at reducing the bandwidth of sparse matrices. Based mainly on the geometry of the matrix, the proposed method uses a greedy selection of the rows/columns to be interchanged, depending on the nonzero extremities and other parameters of the matrix. Experimental results obtained on an IBM Blue Gene/P supercomputer show that the proposed parallel heuristic leads to better results, with respect to running time, speedup, efficiency and solution quality, than serial variants and other previously reported results.
{"title":"A Parallel Heuristic for Bandwidth Reduction Based on Matrix Geometry","authors":"Liviu Octavian Mafteiu-Scai, Calin Alexandru Cornigeanu","doi":"10.1109/SYNASC.2016.071","DOIUrl":"https://doi.org/10.1109/SYNASC.2016.071","url":null,"abstract":"This paper proposes a parallel hybrid heuristic aiming the reduction of the bandwidth of sparse matrices. Mainly based on the geometry of the matrix, the proposed method uses a greedy selection of rows/columns to be interchanged, depending on the nonzero extremities and other parameters of the matrix. Experimental results obtained on an IBM Blue Gene/P supercomputer illustrate the fact that the proposed parallel heuristic leads to better results, with respect to time efficiency, speedup, efficiency and quality of solution, in comparison with serial variants and of course in comparison with other reported results.","PeriodicalId":268635,"journal":{"name":"2016 18th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (SYNASC)","volume":"84 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125428028","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Numerical reproducibility failures arise in parallel computation because of the non-associativity of floating-point summation. Optimizations on massively parallel systems dynamically modify the floating-point operation order; hence, numerical results may change from one run to another. We propose to ensure reproducibility by extending, as far as possible, the IEEE-754 correct rounding property to larger operation sequences. Our RARE-BLAS (Reproducible, Accurately Rounded and Efficient BLAS) benefits from recent accurate and efficient summation algorithms. Solutions for level 1 (asum, dot and nrm2) and level 2 (gemv) routines are provided. We compare their performance to the Intel MKL library and to other existing reproducible algorithms. For both shared and distributed memory parallel systems, we observe an extra cost of 2× in the worst-case scenario, which is acceptable for a wide range of applications. For the Intel Xeon Phi accelerator a larger extra cost (4× to 6×) is observed, which is still useful at least for debugging and validation.
{"title":"Parallel Experiments with RARE-BLAS","authors":"Chemseddine Chohra, P. Langlois, David Parello","doi":"10.1109/SYNASC.2016.032","DOIUrl":"https://doi.org/10.1109/SYNASC.2016.032","url":null,"abstract":"Numerical reproducibility failures rise in parallel computation because of the non-associativity of floating-point summation. Optimizations on massively parallel systems dynamically modify the floating-point operation order. Hence, numerical results may change from one run to another. We propose to ensure reproducibility by extending as far as possible the IEEE-754 correct rounding property to larger operation sequences. Our RARE-BLAS (Reproducible, Accurately Rounded and Efficient BLAS) benefits from recent accurate and efficient summation algorithms. Solutions for level 1 (asum, dot and nrm2) and level 2 (gemv) routines are provided. We compare their performance to the Intel MKL library and to other existing reproducible algorithms. For both shared and distributed memory parallel systems, we exhibit an extra-cost of 2× in the worst case scenario, which is satisfying for a wide range of applications. For Intel Xeon Phi accelerator a larger extra-cost (4× to 6×) is observed, which is still helpful at least for debugging and validation.","PeriodicalId":268635,"journal":{"name":"2016 18th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (SYNASC)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124868902","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents a hybrid test generation approach for extended finite state machines, combining genetic algorithms with local search techniques. Many test generation methods (both functional and structural) use genetic algorithms. Genetic algorithms may take a long time to converge to a global optimum and, for very large neighbourhoods, they can be inefficient or unsuccessful. In this paper we use hybrid genetic algorithms to generate test data for chosen paths of extended finite state machines. Local search is applied to improve the best individual in each generation of the genetic algorithm.
{"title":"A Hybrid Test Generation Approach Based on Extended Finite State Machines","authors":"Ana Turlea, F. Ipate, R. Lefticaru","doi":"10.1109/SYNASC.2016.037","DOIUrl":"https://doi.org/10.1109/SYNASC.2016.037","url":null,"abstract":"This paper presents a hybrid test generation approach from extended finite state machines combining genetic algorithms with local search techniques. Many test generation methods (both functional and structural testing methods) use genetic algorithms. Genetic algorithms may take a long time to converge to a global optimum and for a huge neighborhood they can be inefficient or unsuccessful. In this paper we use hybrid genetic algorithms to generate test data for some chosen paths for extended finite state machines. Local search is applied to improve the best individual for each generation of the genetic algorithm.","PeriodicalId":268635,"journal":{"name":"2016 18th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (SYNASC)","volume":"70 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127308606","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The extended Hensel construction (EHC) is a direct extension of the generalized Hensel construction (GHC), targeting sparse multivariate polynomials for which the GHC breaks down. The EHC consists of two Hensel constructions, which we call separation of the "maximal" and the "minimal" Hensel factors (see the text). As for the minimal Hensel factor separation, we recently enhanced the old algorithm considerably by using the Groebner basis of the two initial factors and syzygies for the elements of that basis. In this paper, we first improve the old algorithm for the maximal Hensel factors. We then further enhance the Groebner basis computation in our recent algorithm; this enhancement is based on a theoretical analysis of the Groebner bases. Simple experiments show that the improved procedure for the minimal Hensel factors is much faster than the recent one.
{"title":"Various Enhancements for Extended Hensel Construction of Sparse Multivariate Polynomials","authors":"Tateaki Sasaki, D. Inaba","doi":"10.1109/SYNASC.2016.025","DOIUrl":"https://doi.org/10.1109/SYNASC.2016.025","url":null,"abstract":"The extended Hensel construction (EHC) is a direct extension of the generalized Hensel construction (GHC), and it targets sparse multivariate polynomials for which the GHC breaks down. The EHC consists of two Hensel constructions which we call separation of \"maximal\" and \"minimal\" Hensel factors (see the text). As for the minimal Hensel factor separation, very recently, we enhanced the old algorithm largely by using Groebner basis of two initial factors and syzygies for the elements of the basis. In this paper, we first improve the old algorithm for maximal Hensel factors. We then enhance further the Groebner basis computation in our recent algorithm. The latter is based on a theoretical analysis of the Groebner bases. Simple experiments show that the improved part for the minimal Hensel factors is much faster than the recent one.","PeriodicalId":268635,"journal":{"name":"2016 18th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (SYNASC)","volume":"52 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126764753","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hybrid sets are generalizations of sets and multisets in which the multiplicities of elements may be arbitrary integers. This construction was proposed by Whitney in 1933 in terms of characteristic functions. Hybrid sets have been used by combinatorialists to give combinatorial interpretations for several generalizations of binomial coefficients and Stirling numbers, and by computer scientists to design fast algorithms for symbolic domain decompositions. We present in this paper some combinatorial results on subsets and partitions of hybrid sets.
{"title":"Combinatorics of Hybrid Sets","authors":"Shaoshi Chen, S. Watt","doi":"10.1109/SYNASC.2016.022","DOIUrl":"https://doi.org/10.1109/SYNASC.2016.022","url":null,"abstract":"Hybrid sets are generalizations of sets and multisets, in which the multiplicities of elements can take any integers. This construction was proposed by Whitney in 1933 in terms of characteristic functions. Hybrid sets have been used by combinatorists to give combinatorial interpretationsfor several generalizations of binomial coefficients and Stirling numbers and by computer scientists to design fast algorithms for symbolic domain decompositions. We present in this paper some combinatorial results on subsets and partitions of hybrid sets.","PeriodicalId":268635,"journal":{"name":"2016 18th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (SYNASC)","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125978524","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
As with many other areas of study, mathematical knowledge has been produced for centuries and will continue to be produced for centuries to come. The records have taken many forms, from manuscripts, to printed journals, and now digital media. Unlike many other fields, however, much of mathematical knowledge has a high degree of precision and objectivity that both gives it permanent utility and makes it susceptible to mechanized treatment. We outline a path toward assembling the world’s mathematical knowledge. While initially in the form of a comprehensive digital library of page images, we expect evolution toward a knowledge base supporting sophisticated queries and automated reasoning. It is the aim of the nascent International Mathematical Knowledge Trust to provide a framework and to foster a community to make progress in this direction. We can foresee that such a knowledge base will enhance the capacity of individual mathematicians, accelerate discovery and allow new kinds of collaboration.
{"title":"How to Build a Global Digital Mathematics Library","authors":"S. Watt","doi":"10.1109/SYNASC.2016.019","DOIUrl":"https://doi.org/10.1109/SYNASC.2016.019","url":null,"abstract":"As with many other areas of study, mathematical knowledge has been produced for centuries and will continue to be produced for centuries to come. The records have taken many forms, from manuscripts, to printed journals, and now digital media. Unlike many other fields, however, much of mathematical knowledge has a high degree of precision and objectivity that both gives it permanent utility and makes it susceptible to mechanized treatment. We outline a path toward assembling the world’s mathematical knowledge. While initially in the form of a comprehensive digital library of page images, we expect evolution toward a knowledge base supporting sophisticated queries and automated reasoning. It is the aim of the nascent International Mathematical Knowledge Trust to provide a framework and to foster a community to make progress in this direction. We can foresee that such a knowledge base will enhance the capacity of individual mathematicians, accelerate discovery and allow new kinds of collaboration.","PeriodicalId":268635,"journal":{"name":"2016 18th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (SYNASC)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132277897","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}