Dynamic Clustering of Gene Expression Data Using a Fuzzy Approach. A. Sirbu, G. Czibula, Maria-Iuliana Bocicor. SYNASC 2014. DOI: 10.1109/SYNASC.2014.37
The amount of gene expression data gathered in the last decade has increased exponentially due to modern technologies like microarrays and next-generation sequencing, which allow measuring the expression levels of thousands of genes simultaneously. Clustering is a data mining technique often used for analysing this kind of data, as it can discover patterns in genes that are important for understanding functional genomics. To study biological processes, which are dynamic by nature, researchers must analyse the data gradually, as the processes evolve. There are two ways to achieve this: re-cluster from scratch every time new gene expression levels become available, or adapt the previously obtained partition using a dynamic clustering algorithm, which is more efficient. In this paper we propose a fuzzy approach for dynamic clustering of gene expression data and demonstrate its effectiveness through a set of experimental evaluations performed on a real-life data set.
Implementing Powerlists with Bulk Synchronous Parallel ML. F. Loulergue, Virginia Niculescu, J. Tesson. SYNASC 2014. DOI: 10.1109/SYNASC.2014.51
Tools and methods that simplify the development of parallel software while also assuring a high level of performance and robustness are necessary. Powerlists and their variants are data structures that can be successfully used in a simple, provably correct, functional description of parallel programs that are divide-and-conquer in nature. The paper presents how programs defined on powerlists can be implemented in the functional language OCaml together with calls to the parallel functional programming library Bulk Synchronous Parallel ML (BSML). BSML functions follow the requirements of the BSP model, so its advantages carry over to OCaml parallel code. In order to write powerlist programs in BSML we provide a data type for powerlists and a set of skeletons (higher-order functions implemented in parallel) to manipulate them. Examples are given and concrete experiments on their execution are reported.
Efficient Computation of Simplicial Homology through Acyclic Matching. Ulderico Fugacci, F. Iuricich, L. Floriani. SYNASC 2014. DOI: 10.1109/SYNASC.2014.84
We consider the problem of efficiently computing homology with Z coefficients, as well as homology generators, for simplicial complexes of arbitrary dimension. We analyze, compare and discuss the equivalence of different methods based on combining reductions, coreductions and discrete Morse theory. We show that the combination of these methods produces theoretically sound approaches that are mutually equivalent. One of these methods has been implemented for simplicial complexes by using a compact data structure for representing the complex and a compact encoding of the discrete Morse gradient. We present experimental results and discuss further developments.
A Lowest Level Rule Push-Relabel Algorithm for Submodular Flows and Matroid Optimization. E. F. Olariu, Cristian Frasinaru. SYNASC 2014. DOI: 10.1109/SYNASC.2014.21
We present a new strategy for the combinatorial push-relabel algorithm used in submodular flows and matroid optimization. In the case of matroid optimization, in contrast with other known algorithms, our strategy needs no lexicographic order of the elements. Combined with a reduction of the number of active bases, the resulting procedure gives a time complexity of O(n^6). Moreover, our rule offers additional interesting properties of the treated elements and suggests adapting the rule to the submodular flow algorithm. The same strategy applied to submodular flows gives an O(n^5) time complexity procedure, which matches the best known complexity, given by a procedure based on the highest level rule. This method opens the way to a simpler algorithm for finding a feasible submodular flow, described in the second part of the paper. Our method for submodular flows is based on a lowest level rule combined with a BFS-like traversal. The lowest level rule does not work alone, because new (ψ- or g-) larger nodes on lower levels can appear during the treatment of the current node. Therefore, it is reinforced with a BFS traversal: the new larger nodes are added to a queue, which is restarted with a lowest-level larger node whenever it becomes empty. The O(n^5) time complexity is the same as the best known. Our strategy yields a forest structure on the treated nodes, in which the basic operations (pushes and liftings) can be easily counted, and for this reason it has better potential for future improvements.
Multispace, Dynamic, Fixed-Radius, All Nearest Neighbours Problem. B. Papis, A. Pacut. SYNASC 2014. DOI: 10.1109/SYNASC.2014.40
We present a solution to a specific version of one of the most fundamental computer science problems: the nearest neighbour (NN) problem. The proposed new variant is the multispace, dynamic, fixed-radius, all nearest neighbours problem, in which the NN data structure handles queries that concern different subsets of the input dimensions. In other words, solutions to this problem allow searching for closest points in terms of different features. This is an important issue in the context of practical applications of incremental state abstraction techniques for high-dimensional Markov Decision Processes (MDPs). The proposed solution is a set of simple, one-dimensional structures that can handle range queries for an arbitrary subset of input dimensions under the Chebyshev distance. We also provide a version for other metrics, and a simplified version of the algorithm that yields approximate results but runs faster. The proposed approximation is deterministic in a way that ensures that the most important parts of the result (in the context of the considered state abstraction task) are returned with no accuracy loss. The presented experimental study demonstrates improvements over some state-of-the-art algorithms on uniformly random and MDP-generated data.
A Practical Guide for Detecting the Java Script-Based Malware Using Hidden Markov Models and Linear Classifiers. Doina Cosovan, Razvan Benchea, Dragos Gavrilut. SYNASC 2014. DOI: 10.1109/SYNASC.2014.39
The World Wide Web has evolved so rapidly that it is no longer considered a luxury but a necessity. That is why the most popular infection vectors currently used by cyber criminals are either web pages or commonly used documents (such as PDF files). In both cases, the malicious actions performed are written in JavaScript. Because of this, JavaScript has become the preferred language for spreading malware. In order to stop malicious content from executing, detection of its infection vector is crucial. In this paper we propose various methods for detecting JavaScript-based attack vectors. To achieve this goal we first need to counter the metamorphism techniques usually used in malicious JavaScript code, which are by no means trivial: garbage instruction insertion, variable renaming, equivalent instruction substitution, function permutation, instruction reordering, and so on. Our approach to metamorphism starts with splitting the JavaScript content into components and filtering out the insignificant ones. We then use a data set consisting of over one million JavaScript files to test several machine learning algorithms, such as Hidden Markov Models, linear classifiers and hybrid approaches, for malware detection. Finally, we analyze these detection methods from a practical point of view, emphasizing the need for a very low false positive rate and the ability to be trained on large data sets.
An Imperialistic Strategy Approach to Continuous Global Optimization Problem. George Anescu. SYNASC 2014. DOI: 10.1109/SYNASC.2014.79
The paper introduces the principles of a new global optimization strategy, the Imperialistic Strategy (IS), applied to the Continuous Global Optimization Problem (CGOP). Inspired by existing multi-population strategies, such as the Island Model (IM) approaches to parallel Evolutionary Algorithms (EA) and the Imperialistic Competitive Algorithm (ICA), the proposed IS method is considered an optimization strategy because it can integrate other well-known optimization methods, which in this context are regarded as sub-methods (although in other contexts they are prominent global optimization methods in their own right). Four optimization methods were implemented and tested in the role of sub-methods: a Genetic Algorithm (GA) with floating-point representation, Differential Evolution (DE), Quantum Particle Swarm Optimization (QPSO) and Artificial Bee Colony (ABC). The optimization performance of the proposed methods was compared on a test bed of 9 known multimodal optimization problems using an appropriate testing methodology. The increased success rates of the IS multi-population variants, compared to the success rates of the sub-methods run separately, combined with the increased computing efficiency that can be expected from parallel and distributed implementations, demonstrate that IS is a promising approach to CGOP.
Enhancing Dental Radiographic Images in Spline-Type Spaces. D. Onchis, S. Gotia. SYNASC 2014. DOI: 10.1109/SYNASC.2014.80
The aim of this paper is to propose a method to enhance the acquisition resolution of dental radiographic images, in order to facilitate clinical examination and interpretation. The algorithm is based on the approximation properties of spline-type spaces with multiple generators. These spaces are obtained by applying a discrete group of translation operators to a finite set of smooth functions, forming a Riesz basis for its closed linear span within the Hilbert space L^2(R^2). For computational efficiency, a parallel version of the algorithm is also proposed. The experiments show that the algorithm makes it possible to increase the resolution of dental radiographic images to sub-pixel levels.
An Ontology Selection and Ranking System Based on the Analytic Hierarchy Process. Adrian Groza, Irina Dragoste, Iulia Sincai, Ioana Jimborean, Vasile Moraru. SYNASC 2014. DOI: 10.1109/SYNASC.2014.47
Selecting the desired ontology from a collection of available ones is essential for ontology reuse. We address the problem of evaluating, ranking and selecting ontologies according to user preferences. We exploit the Analytic Hierarchy Process (AHP) to solve this multiple-criteria decision problem and to model the preferences of the users. We use AHP to analyze the available ontologies from different perspectives and at different abstraction levels. The decision is based on concrete end-node measurements and their relative importance at the higher levels. To support the selection decision, we developed an ontology representation, reasoning and management system. The system applies different metrics to the ontologies in order to feed the Analytic Hierarchy Process with facts. The running scenario applies our method to the task of reusing ontologies from the tourism domain.
CSPs and Connectedness: P/NP Dichotomy for Idempotent, Right Quasigroups. Robert W. McGrail, James M. Belk, Solomon Garber, J. Wood, Benjamin Fish. SYNASC 2014. DOI: 10.1109/SYNASC.2014.56
In the 1990s, Jeavons showed that every finite algebra corresponds to a class of constraint satisfaction problems. Vardi later conjectured that idempotent algebras exhibit P/NP dichotomy: every algebra in this class that is not NP-complete must be tractable. Here we discuss how tractability corresponds to connectivity in Cayley graphs. In particular, we show that dichotomy in finite idempotent, right quasigroups follows from a very strong notion of connectivity. Moreover, P/NP membership is first-order axiomatizable in involutory quandles.