In standard epistemic logic, knowing that p amounts to knowing that p is true; the framework says nothing about understanding p or knowing its meaning. In this paper, we present a conservative extension of Public Announcement Logic (PAL) in which agents have knowledge or belief about both the truth values and the meanings of propositions. We give a complete axiomatization of PAL with Boolean Definitions and discuss various examples. An agent may understand a proposition without knowing its truth value, or the other way round. Moreover, multiple agents can agree on something without agreeing on its meaning, and vice versa.
{"title":"How to Agree without Understanding Each Other: Public Announcement Logic with Boolean Definitions","authors":"Malvin Gattinger, Yanjing Wang","doi":"10.4204/EPTCS.297.14","DOIUrl":"https://doi.org/10.4204/EPTCS.297.14","url":null,"abstract":"In standard epistemic logic, knowing that p is the same as knowing that p is true, but it does not say anything about understanding p or knowing its meaning. In this paper, we present a conservative extension of Public Announcement Logic (PAL) in which agents have knowledge or belief about both the truth values and the meanings of propositions. We give a complete axiomatization of PAL with Boolean Definitions and discuss various examples. An agent may understand a proposition without knowing its truth value or the other way round. Moreover, multiple agents can agree on something without agreeing on its meaning and vice versa.","PeriodicalId":118894,"journal":{"name":"Theoretical Aspects of Rationality and Knowledge","volume":"110 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131672858","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Whether in normal-form games, fair allocations, or voter preferences in voting systems, a common pattern of reasoning appears: from a particular profile, an agent or a group of agents may have an incentive to shift to a new one. This induces a natural graph structure, which we call the improvement graph, on the strategy space of these systems. We suggest that monadic fixed-point logic with counting, an extension of monadic first-order logic on graphs with fixed-point and counting quantifiers, is a natural specification language over improvement graphs, and thus for a class of properties that can be interpreted across these domains. The logic has an efficient model checking algorithm (in the size of the improvement graph).
{"title":"Reasoning about Social Choice and Games in Monadic Fixed-Point Logic","authors":"Ramit Das, R. Ramanujam, Sunil Simon","doi":"10.4204/EPTCS.297.8","DOIUrl":"https://doi.org/10.4204/EPTCS.297.8","url":null,"abstract":"Whether it be in normal form games, or in fair allocations, or in voter preferences in voting systems, a certain pattern of reasoning is common. From a particular profile, an agent or a group of agents may have an incentive to shift to a new one. This induces a natural graph structure that we call the improvement graph on the strategy space of these systems. We suggest that the monadic fixed-point logic with counting, an extension of monadic first-order logic on graphs with fixed-point and counting quantifiers, is a natural specification language on improvement graphs, and thus for a class of properties that can be interpreted across these domains. The logic has an efficient model checking algorithm (in the size of the improvement graph).","PeriodicalId":118894,"journal":{"name":"Theoretical Aspects of Rationality and Knowledge","volume":"118 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125636173","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The early literature on epistemic logic in philosophy focused on reasoning about the knowledge or belief of a single agent, especially on controversies about "introspection axioms" such as the 4 and 5 axioms. By contrast, the later literature on epistemic logic in computer science and game theory has focused on multi-agent epistemic reasoning, with the single-agent 4 and 5 axioms largely taken for granted. In the relevant multi-agent scenarios, it is often important to reason about what agent A believes about what agent B believes about what agent A believes; but it is rarely important to reason just about what agent A believes about what agent A believes. This raises the question of the extent to which single-agent introspection axioms actually matter for multi-agent epistemic reasoning. In this paper, we formalize and answer this question. To formalize the question, we first define a set of multi-agent formulas that we call agent-alternating formulas, including formulas like Box_a Box_b Box_a p but not formulas like Box_a Box_a p. We then prove, for the case of belief, that if one starts with multi-agent K or KD, then adding both the 4 and 5 axioms (or adding the B axiom) does not allow the derivation of any new agent-alternating formulas -- in this sense, introspection axioms do not matter. By contrast, we show that such conservativity results fail for knowledge and multi-agent KT, though they hold with respect to a smaller class of agent-nonrepeating formulas.
{"title":"When Do Introspection Axioms Matter for Multi-Agent Epistemic Reasoning?","authors":"Yifeng Ding, W. Holliday, Cedegao Zhang","doi":"10.4204/EPTCS.297.9","DOIUrl":"https://doi.org/10.4204/EPTCS.297.9","url":null,"abstract":"The early literature on epistemic logic in philosophy focused on reasoning about the knowledge or belief of a single agent, especially on controversies about \"introspection axioms\" such as the 4 and 5 axioms. By contrast, the later literature on epistemic logic in computer science and game theory has focused on multi-agent epistemic reasoning, with the single-agent 4 and 5 axioms largely taken for granted. In the relevant multi-agent scenarios, it is often important to reason about what agent A believes about what agent B believes about what agent A believes; but it is rarely important to reason just about what agent A believes about what agent A believes. This raises the question of the extent to which single-agent introspection axioms actually matter for multi-agent epistemic reasoning. In this paper, we formalize and answer this question. To formalize the question, we first define a set of multi-agent formulas that we call agent-alternating formulas, including formulas like Box_a Box_b Box_a p but not formulas like Box_a Box_a p. We then prove, for the case of belief, that if one starts with multi-agent K or KD, then adding both the 4 and 5 axioms (or adding the B axiom) does not allow the derivation of any new agent-alternating formulas -- in this sense, introspection axioms do not matter. By contrast, we show that such conservativity results fail for knowledge and multi-agent KT, though they hold with respect to a smaller class of agent-nonrepeating formulas.","PeriodicalId":118894,"journal":{"name":"Theoretical Aspects of Rationality and Knowledge","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133711826","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present a proof of Arrow's theorem from social choice theory that uses a fixpoint argument. Specifically, we use Banach's result on the existence of a fixpoint of a contractive map defined on a complete metric space. Conceptually, our approach shows that dictatorships can be seen as fixpoints of a certain process.
{"title":"Arrow's Theorem Through a Fixpoint Argument","authors":"F. Feys, H. Hansen","doi":"10.4204/EPTCS.297.12","DOIUrl":"https://doi.org/10.4204/EPTCS.297.12","url":null,"abstract":"We present a proof of Arrow's theorem from social choice theory that uses a fixpoint argument. Specifically, we use Banach's result on the existence of a fixpoint of a contractive map defined on a complete metric space. Conceptually, our approach shows that dictatorships can be seen as fixpoints of a certain process.","PeriodicalId":118894,"journal":{"name":"Theoretical Aspects of Rationality and Knowledge","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121150366","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Knowledge can be represented compactly in multiple ways, from a set of propositional formulas, to a Kripke model, to a database. In this paper we study the aggregation of information coming from multiple sources, each source submitting a database modelled as a first-order relational structure. In the presence of integrity constraints, we identify classes of aggregators that respect these constraints in the aggregated database, provided they are satisfied in all individual databases. We also characterise languages for first-order queries on which the answer to a query on the aggregated database coincides with the aggregation of the answers to the query obtained on each individual database. This contribution is meant as a first step towards applying techniques from social choice theory to knowledge representation in databases.
{"title":"Social Choice Methods for Database Aggregation","authors":"F. Belardinelli, Umberto Grandi","doi":"10.4204/EPTCS.297.4","DOIUrl":"https://doi.org/10.4204/EPTCS.297.4","url":null,"abstract":"Knowledge can be represented compactly in multiple ways, from a set of propositional formulas, to a Kripke model, to a database. In this paper we study the aggregation of information coming from multiple sources, each source submitting a database modelled as a first-order relational structure. In the presence of integrity constraints, we identify classes of aggregators that respect them in the aggregated database, provided these are satisfied in all individual databases. We also characterise languages for first-order queries on which the answer to a query on the aggregated database coincides with the aggregation of the answers to the query obtained on each individual database. This contribution is meant to be a first step on the application of techniques from social choice theory to knowledge representation in databases.","PeriodicalId":118894,"journal":{"name":"Theoretical Aspects of Rationality and Knowledge","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125653765","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In an earlier experiment, participants played a perfect information game against a computer, which was programmed to deviate often from its backward induction strategy right at the beginning of the game. Participants knew that in each game, the computer was nevertheless optimizing against some belief about the participant's future strategy. In the aggregate, it appeared that participants applied forward induction. However, cardinal effects seemed to play a role as well: a number of participants might have been trying to maximize expected utility. In order to find out how people really reason in such a game, we designed centipede-like turn-taking games with new payoff structures in order to make such cardinal effects less likely. We ran a new experiment with 50 participants, based on marble drop visualizations of these revised payoff structures. After participants played 48 test games, we asked a number of questions to gauge the participants' reasoning about their own and the opponent's strategy at all decision nodes of a sample game. We also checked how the verbalized strategies fit to the actual choices they made at all their decision points in the 48 test games. Even though in the aggregate, participants in the new experiment still tend to slightly favor the forward induction choice at their first decision node, their verbalized strategies most often depend on their own attitudes towards risk and those they assign to the computer opponent, sometimes in addition to considerations about cooperativeness and competitiveness.
{"title":"What Drives People's Choices in Turn-Taking Games, if not Game-Theoretic Rationality?","authors":"Sujata Ghosh, A. Heifetz, R. Verbrugge, H. D. Weerd","doi":"10.4204/EPTCS.251.19","DOIUrl":"https://doi.org/10.4204/EPTCS.251.19","url":null,"abstract":"In an earlier experiment, participants played a perfect information game against a computer, which was programmed to deviate often from its backward induction strategy right at the beginning of the game. Participants knew that in each game, the computer was nevertheless optimizing against some belief about the participant's future strategy. In the aggregate, it appeared that participants applied forward induction. However, cardinal effects seemed to play a role as well: a number of participants might have been trying to maximize expected utility. In order to find out how people really reason in such a game, we designed centipede-like turn-taking games with new payoff structures in order to make such cardinal effects less likely. We ran a new experiment with 50 participants, based on marble drop visualizations of these revised payoff structures. After participants played 48 test games, we asked a number of questions to gauge the participants' reasoning about their own and the opponent's strategy at all decision nodes of a sample game. We also checked how the verbalized strategies fit to the actual choices they made at all their decision points in the 48 test games. Even though in the aggregate, participants in the new experiment still tend to slightly favor the forward induction choice at their first decision node, their verbalized strategies most often depend on their own attitudes towards risk and those they assign to the computer opponent, sometimes in addition to considerations about cooperativeness and competitiveness.","PeriodicalId":118894,"journal":{"name":"Theoretical Aspects of Rationality and Knowledge","volume":"263 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114565067","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
At the heart of Bitcoin is a blockchain protocol, a protocol for achieving consensus on a public ledger that records bitcoin transactions. To the extent that a blockchain protocol is used for applications such as contract signing and making certain transactions (such as house sales) public, we need to understand what guarantees the protocol gives us in terms of agents' knowledge. Here, we provide a complete characterization of agents' knowledge when running a blockchain protocol, using a variant of common knowledge that takes into account the facts that agents can enter and leave the system, that it is not known which agents are in fact following the protocol (some agents may want to deviate if they can gain by doing so), and that the guarantees provided by blockchain protocols are probabilistic. We then consider some scenarios involving contracts and show that this level of knowledge suffices for some of them, but not for others.
{"title":"A Knowledge-Based Analysis of the Blockchain Protocol","authors":"Joseph Y. Halpern, R. Pass","doi":"10.4204/EPTCS.251.22","DOIUrl":"https://doi.org/10.4204/EPTCS.251.22","url":null,"abstract":"At the heart of the Bitcoin is a blockchain protocol, a protocol for achieving consensus on a public ledger that records bitcoin transactions. To the extent that a blockchain protocol is used for applications such as contract signing and making certain transactions (such as house sales) public, we need to understand what guarantees the protocol gives us in terms of agents' knowledge. Here, we provide a complete characterization of agent's knowledge when running a blockchain protocol using a variant of common knowledge that takes into account the fact that agents can enter and leave the system, it is not known which agents are in fact following the protocol (some agents may want to deviate if they can gain by doing so), and the fact that the guarantees provided by blockchain protocols are probabilistic. We then consider some scenarios involving contracts and show that this level of knowledge suffices for some scenarios, but not others.","PeriodicalId":118894,"journal":{"name":"Theoretical Aspects of Rationality and Knowledge","volume":"410 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132066822","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We prove that every Condorcet-consistent voting rule can be manipulated by a voter who completely reverses their preference ranking, assuming that there are at least 4 alternatives. This corrects an error and improves a result of [Sanver, M. R. and Zwicker, W. S. (2009). One-way monotonicity as a form of strategy-proofness. Int J Game Theory 38(4), 553-574.] For the case of precisely 4 alternatives, we exactly characterise the number of voters for which this impossibility result can be proven. We also show analogues of our result for irresolute voting rules. We then leverage our result to state a strong form of the Gibbard-Satterthwaite Theorem.
{"title":"Condorcet's Principle and the Preference Reversal Paradox","authors":"Dominik Peters","doi":"10.4204/EPTCS.251.34","DOIUrl":"https://doi.org/10.4204/EPTCS.251.34","url":null,"abstract":"We prove that every Condorcet-consistent voting rule can be manipulated by a voter who completely reverses their preference ranking, assuming that there are at least 4 alternatives. This corrects an error and improves a result of [Sanver, M. R. and Zwicker, W. S. (2009). One-way monotonicity as a form of strategy-proofness. Int J Game Theory 38(4), 553-574.] For the case of precisely 4 alternatives, we exactly characterise the number of voters for which this impossibility result can be proven. We also show analogues of our result for irresolute voting rules. We then leverage our result to state a strong form of the Gibbard-Satterthwaite Theorem.","PeriodicalId":118894,"journal":{"name":"Theoretical Aspects of Rationality and Knowledge","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130243488","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Arbitrary Arrow Update Logic with Common Knowledge (AAULC) is a dynamic epistemic logic with (i) an arrow update operator, which represents a particular type of information change and (ii) an arbitrary arrow update operator, which quantifies over arrow updates. By encoding the execution of a Turing machine in AAULC, we show that neither the valid formulas nor the satisfiable formulas of AAULC are recursively enumerable. In particular, it follows that AAULC does not have a recursive axiomatization.
{"title":"Arbitrary Arrow Update Logic with Common Knowledge is neither RE nor co-RE","authors":"Louwe B. Kuijer","doi":"10.4204/EPTCS.251.27","DOIUrl":"https://doi.org/10.4204/EPTCS.251.27","url":null,"abstract":"Arbitrary Arrow Update Logic with Common Knowledge (AAULC) is a dynamic epistemic logic with (i) an arrow update operator, which represents a particular type of information change and (ii) an arbitrary arrow update operator, which quantifies over arrow updates. \u0000By encoding the execution of a Turing machine in AAULC, we show that neither the valid formulas nor the satisfiable formulas of AAULC are recursively enumerable. In particular, it follows that AAULC does not have a recursive axiomatization.","PeriodicalId":118894,"journal":{"name":"Theoretical Aspects of Rationality and Knowledge","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115334110","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present the logical induction criterion for computable algorithms that assign probabilities to every logical statement in a given formal language and refine those probabilities over time. The criterion is motivated by a series of stock trading analogies. Roughly speaking, each logical sentence phi is associated with a stock that is worth $1 per share if phi is true and nothing otherwise, and we interpret the belief-state of a logically uncertain reasoner as a set of market prices, where P_N(phi) = 50% means that on day N, shares of phi may be bought or sold from the reasoner for 50 cents. A market is then called a logical inductor if (very roughly) there is no polynomial-time computable trading strategy with finite risk tolerance that earns unbounded profits in that market over time. We then describe how this single criterion implies a number of desirable properties of bounded reasoners; for example, logical inductors outpace their underlying deductive process, perform universal empirical induction given enough time to think, and place strong trust in their own reasoning process.
{"title":"A Formal Approach to the Problem of Logical Non-Omniscience","authors":"Scott Garrabrant, Tsvi Benson-Tilsen, Andrew Critch, N. Soares, Jessica Taylor","doi":"10.4204/EPTCS.251.16","DOIUrl":"https://doi.org/10.4204/EPTCS.251.16","url":null,"abstract":"We present the logical induction criterion for computable algorithms that assign probabilities to every logical statement in a given formal language, and refine those probabilities over time. The criterion is motivated by a series of stock trading analogies. Roughly speaking, each logical sentence phi is associated with a stock that is worth $1 per share if phi is true and nothing otherwise, and we interpret the belief-state of a logically uncertain reasoner as a set of market prices, where pt_N(phi)=50% means that on day N, shares of phi may be bought or sold from the reasoner for 50%. A market is then called a logical inductor if (very roughly) there is no polynomial-time computable trading strategy with finite risk tolerance that earns unbounded profits in that market over time. We then describe how this single criterion implies a number of desirable properties of bounded reasoners; for example, logical inductors outpace their underlying deductive process, perform universal empirical induction given enough time to think, and place strong trust in their own reasoning process.","PeriodicalId":118894,"journal":{"name":"Theoretical Aspects of Rationality and Knowledge","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125292554","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}