Using HMM in Strategic Games
M. Benevides, Isaque M. S. Lima, R. Nader, P. Rougemont
EPTCS 144, pp. 73-84. DOI: 10.4204/EPTCS.144.6

In this paper we describe an approach to solving strategic games in which players may adopt different types over the course of the game. Our goal is to infer which type the opponent has adopted at each moment, so as to improve the player's odds. To achieve this, we combine Markov games with hidden Markov models. We discuss a hypothetical example of a tennis game whose solution can be applied to any game with similar characteristics.
Algorithm and proof as Ω-invariance and transfer: A new model of computation in nonstandard analysis
Sam Sanders
EPTCS 143, pp. 97-109. DOI: 10.4204/EPTCS.143.9

We propose a new model of computation based on nonstandard analysis. Intuitively, the role of "algorithm" is played by a new notion of finite procedure, called Ω-invariance, which stems from nonstandard analysis and is inspired by physics. The role of "proof" is taken up by the Transfer Principle of nonstandard analysis. We obtain a number of results in Constructive Reverse Mathematics that illustrate the tight correspondence with Errett Bishop's Constructive Analysis.
Computing discrete logarithm by interval-valued paradigm
B. Nagy, S. Vályi
EPTCS 143, pp. 76-86. DOI: 10.4204/EPTCS.143.7

Interval-valued computing is a relatively new computing paradigm whose data structure consists of finitely many interval segments over the unit interval. Thanks to its massive parallelism, hard problems such as the satisfiability of quantified Boolean formulae and integer factorization can be solved efficiently within it. The discrete logarithm problem plays an important role in practice: several cryptographic methods rest on its computational hardness. In this paper we show that the discrete logarithm problem is solvable by interval-valued computation in a polynomial number of steps (within this paradigm).
Ray tracing - computing the incomputable?
Ed Blakey
EPTCS 143, pp. 32-40. DOI: 10.4204/EPTCS.143.3

We recall from previous work a model-independent framework of computational complexity theory. Notably for the present paper, the framework allows formalization of the issues of precision that arise when one considers physical, error-prone (especially analogue rather than digital) computational systems. As a case study we take the ray-tracing problem, a Turing-machine-incomputable problem that can, in apparent violation of the Church-Turing thesis, nonetheless be said to be solved by certain optical computers. Applying the complexity framework, we formalize the intuition that the purported super-Turing power of these computers in fact vanishes once precision is properly taken into account.
Causal Dynamics of Discrete Surfaces
P. Arrighi, S. Martiel, Zizhu Wang
EPTCS 144, pp. 30-40. DOI: 10.4204/EPTCS.144.3

We formalize the intuitive idea of a labelled discrete surface which evolves in time, subject to two natural constraints: the evolution does not propagate information too fast; and it acts everywhere the same.
Quantum Turing automata
M. Bartha
EPTCS 143, pp. 17-31. DOI: 10.4204/EPTCS.143.2

A denotational semantics of quantum Turing machines with quantum control is defined in the dagger compact closed category of finite-dimensional Hilbert spaces. Using the Moore-Penrose generalized inverse, a new additive trace is introduced on the restriction of this category to isometries; this trace is carried over to directed quantum Turing machines as monoidal automata. The Joyal-Street-Verity Int construction is then used to extend this structure to a reversible bidirectional one.
General dynamic recovery for compensating CSP
Abeer S. Al-Humaimeedy, M. Fernández
EPTCS 143, pp. 3-16. DOI: 10.4204/EPTCS.143.1

Compensation is a technique to roll back a system to a consistent state in case of failure. Recovery mechanisms for compensating calculi specify the order in which compensation sequences are executed; dynamic recovery means that this order is determined at runtime. In this paper, we define an extension of Compensating CSP, called DEcCSP, with general dynamic recovery. We provide a formal operational semantics for the calculus and illustrate its expressive power with a case study. In contrast with previous versions of Compensating CSP, DEcCSP provides mechanisms to replace or discard compensations at runtime. Additionally, we bring back to DEcCSP standard CSP operators that are not available in other compensating CSP calculi, and introduce channel communication.
Towards a GPU-based implementation of interaction nets
Eugen Jiresch
EPTCS 143, pp. 41-53. DOI: 10.4204/EPTCS.143.4

We present ingpu, a GPU-based evaluator for interaction nets that heavily exploits their potential for parallel evaluation. We discuss advantages and challenges of the ongoing implementation of ingpu and compare its performance to existing interaction net evaluators.
Effective dimension in some general metric spaces
E. Mayordomo
EPTCS 143, pp. 67-75. DOI: 10.4204/EPTCS.143.6

We introduce the concept of effective dimension for a wide class of metric spaces that are not required to have a computable measure. Effective dimension was defined by Lutz (Lutz 2003) for Cantor space and has also been extended to Euclidean space. Lutz's effectivization uses the concepts of gale and supergale; our extension of Hausdorff dimension to other metric spaces is likewise based on a supergale characterization of dimension, which in practice avoids an extra quantifier present in the classical Hausdorff-measure-based definition and therefore allows effectivization for small time bounds. We then present the concept of constructive dimension and its characterization in terms of Kolmogorov complexity, for which we extend Kolmogorov complexity to arbitrary metric spaces by defining the Kolmogorov complexity of a point at a given precision. Further research directions are indicated.
Proof-graphs for Minimal Implicational Logic
Marcela Quispe-Cruz, E. Haeusler, L. Gordeev
EPTCS 144, pp. 16-29. DOI: 10.4204/EPTCS.144.2

It is well known that the size of propositional classical proofs can be huge. Proof-theoretical studies have discovered exponential gaps between normal or cut-free proofs and their non-normal counterparts. The aim of this work is to study how to reduce the weight of propositional deductions. We present the formalism of proof-graphs for purely implicational logic: graphs of a specific shape intended to capture the logical structure of a deduction. The advantage of this formalism is that formulas can be shared in the reduced proof. In the present paper we give a precise definition of proof-graphs for minimal implicational logic, together with a normalization procedure for these proof-graphs. In contrast to standard tree-like formalisms, our normalization does not increase the number of nodes when applied to the corresponding minimal proof-graph representations.