{"title":"Fragmentation and logical omniscience","authors":"A. Elga, A. Rayo","doi":"10.1111/NOUS.12381","DOIUrl":null,"url":null,"abstract":"It would be good to have a Bayesian decision theory that assesses our decisions and thinking according to everyday standards of rationality— standards that do not require logical omniscience (Garber 1983, Hacking 1967). To that end we develop a “fragmented” decision theory in which a single state of mind is represented by a family of credence functions, each associated with a distinct choice condition (Lewis 1982, Stalnaker 1984). The theory imposes a local coherence assumption guaranteeing that as an agent’s attention shifts, successive batches of “obvious” logical information become available to her. A rule of expected utility maximization can then be applied to the decision of what to attend to next during a train of thought. On the resulting theory, rationality requires ordinary agents to be logically competent and to often engage in trains of thought that increase the unification of their states of mind. But rationality does not require ordinary agents to be logically omniscient. ∗Forthcoming in Noûs. Both authors contributed equally to this work. Thanks to Diego Arana Segura, Sara Aronowitz, Alejandro Pérez Carballo, Ross Cameron, David Chalmers, Jonathan Cohen, Keith DeRose, Sinan Dogramaci, Cian Dorr, Kenny Easwaran, Hartry Field, Branden Fitelson, Peter Fritz, Jeremy Goodman, Daniel Hoek, Frank Jackson, Shivaram Lingamneni, Christopher Meacham, Patrick Miller, Molly O’Rourke-Friel, Michael Rescorla, Ted Sider, Mattias Skipper, Robert Stalnaker, Jason Stanley, Bruno Whittle, Robbie Williams, an anonymous Noûs referee; participants in the Corridor reading group (on three occasions), a graduate seminar session at Rutgers University, a Fall 2011 joint MIT/Princeton graduate seminar, and a Spring 2016 MIT/Princeton/Rutgers graduate seminar taught jointly with Andy Egan; audiences at several APA division meetings (2017 Eastern and Pacific, 2021 Eastern) the 2008 Arizona Ontology Conference, Brown University, Catholic University of Peru, CUNY, National Autonomous University of Mexico, Ohio State University, Syracuse, University, University of Bologna, UC Berkeley, UC Riverside, UC Santa Cruz, University of Connecticut at Storrs, University of Graz, University of Leeds, University of Paris (IHPST), University of Oslo (on two occasions), University of Texas at Austin, Yale University, MIT, and Rutgers University. The initial direction of this paper was enormously influenced by conversations with Andy Egan. Elga gratefully acknowledges support from a 2014-15 Deutsche Bank Membership at the Princeton Institute for Advanced Study. 1 Standard decision theory is incomplete Professor Moriarty has given John Watson a difficult logic problem and credibly threatened to explode a bomb unless Watson gives the correct answer by noon. Watson has never thought about that problem before, and even experienced logicians take hours to solve it. It is seconds before noon. Watson is then informed that Moriarity has accidentally left the answer to the problem on an easily accessible note. Watson’s options are to look at the note (which requires a tiny bit of extra effort) or to give an answer of his choice without looking at the note. Is it rationally permissible for Watson to look at the note? The answer is elementary: it is rationally permissible. 
Someone might object that only a logically omniscient agent could be fully rational, and therefore that Watson is required to be certain of the correct answer to the logic puzzle (and to give that answer). Even so, we hope the objector would agree that there is a sense in which, given Watson’s limited cognitive abilities, it is rational, reasonable, or smart for Watson look at the note.1,2 Unfortunately, standard Bayesian decision theory (as it is usually applied) fails to deliver any sense in which it is rationally permissible for Watson to look at the note. For it represents the degrees of belief of an agent as a probability function satisfying the standard probability axioms. And on the usual way of applying these axioms to a case like Watson’s, they entail that Watson assigns probability 1 to every logical truth, including the solution to Moriarity’s logic problem.3 But if Watson is certain of the solution from the 1The Watson case is structurally similar to the “bet my house” case from Christensen (2007, 8–9). For arguments that seek to differentiate between “ordinary standards of rationality” (according to which logical omniscience is not required) and “ideal standards” (according to which it is), see Smithies (2015). 2Compare: even an objective Bayesian who counts some prior probability functions as irrational might have use for a decision theory that says what decisions are rational, given a particular (perhaps irrational) prior. 3For important early discussions of how the assumption that logical truths gets probability 1 makes trouble for decision theory, see Savage (1967, 308) and Hacking (1967). For","PeriodicalId":48158,"journal":{"name":"NOUS","volume":"1 1","pages":""},"PeriodicalIF":1.8000,"publicationDate":"2021-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1111/NOUS.12381","citationCount":"8","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"NOUS","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1111/NOUS.12381","RegionNum":1,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"0","JCRName":"PHILOSOPHY","Score":null,"Total":0}
Abstract
It would be good to have a Bayesian decision theory that assesses our decisions and thinking according to everyday standards of rationality, standards that do not require logical omniscience (Garber 1983, Hacking 1967). To that end we develop a “fragmented” decision theory in which a single state of mind is represented by a family of credence functions, each associated with a distinct choice condition (Lewis 1982, Stalnaker 1984). The theory imposes a local coherence assumption guaranteeing that as an agent’s attention shifts, successive batches of “obvious” logical information become available to her. A rule of expected utility maximization can then be applied to the decision of what to attend to next during a train of thought. On the resulting theory, rationality requires ordinary agents to be logically competent and to often engage in trains of thought that increase the unification of their states of mind. But rationality does not require ordinary agents to be logically omniscient.

∗ Forthcoming in Noûs. Both authors contributed equally to this work. Thanks to Diego Arana Segura, Sara Aronowitz, Alejandro Pérez Carballo, Ross Cameron, David Chalmers, Jonathan Cohen, Keith DeRose, Sinan Dogramaci, Cian Dorr, Kenny Easwaran, Hartry Field, Branden Fitelson, Peter Fritz, Jeremy Goodman, Daniel Hoek, Frank Jackson, Shivaram Lingamneni, Christopher Meacham, Patrick Miller, Molly O’Rourke-Friel, Michael Rescorla, Ted Sider, Mattias Skipper, Robert Stalnaker, Jason Stanley, Bruno Whittle, Robbie Williams, and an anonymous Noûs referee; participants in the Corridor reading group (on three occasions), a graduate seminar session at Rutgers University, a Fall 2011 joint MIT/Princeton graduate seminar, and a Spring 2016 MIT/Princeton/Rutgers graduate seminar taught jointly with Andy Egan; and audiences at several APA division meetings (2017 Eastern and Pacific, 2021 Eastern), the 2008 Arizona Ontology Conference, Brown University, Catholic University of Peru, CUNY, National Autonomous University of Mexico, Ohio State University, Syracuse University, University of Bologna, UC Berkeley, UC Riverside, UC Santa Cruz, University of Connecticut at Storrs, University of Graz, University of Leeds, University of Paris (IHPST), University of Oslo (on two occasions), University of Texas at Austin, Yale University, MIT, and Rutgers University. The initial direction of this paper was enormously influenced by conversations with Andy Egan. Elga gratefully acknowledges support from a 2014-15 Deutsche Bank Membership at the Princeton Institute for Advanced Study.

1 Standard decision theory is incomplete

Professor Moriarty has given John Watson a difficult logic problem and credibly threatened to explode a bomb unless Watson gives the correct answer by noon. Watson has never thought about that problem before, and even experienced logicians take hours to solve it. It is seconds before noon. Watson is then informed that Moriarty has accidentally left the answer to the problem on an easily accessible note. Watson’s options are to look at the note (which requires a tiny bit of extra effort) or to give an answer of his choice without looking at the note. Is it rationally permissible for Watson to look at the note? The answer is elementary: it is rationally permissible.

Someone might object that only a logically omniscient agent could be fully rational, and therefore that Watson is required to be certain of the correct answer to the logic puzzle (and to give that answer). Even so, we hope the objector would agree that there is a sense in which, given Watson’s limited cognitive abilities, it is rational, reasonable, or smart for Watson to look at the note.[1][2] Unfortunately, standard Bayesian decision theory (as it is usually applied) fails to deliver any sense in which it is rationally permissible for Watson to look at the note. For it represents the degrees of belief of an agent as a probability function satisfying the standard probability axioms. And on the usual way of applying these axioms to a case like Watson’s, they entail that Watson assigns probability 1 to every logical truth, including the solution to Moriarty’s logic problem.[3] But if Watson is certain of the solution from the …

[1] The Watson case is structurally similar to the “bet my house” case from Christensen (2007, 8–9). For arguments that seek to differentiate between “ordinary standards of rationality” (according to which logical omniscience is not required) and “ideal standards” (according to which it is), see Smithies (2015).

[2] Compare: even an objective Bayesian who counts some prior probability functions as irrational might have use for a decision theory that says what decisions are rational, given a particular (perhaps irrational) prior.

[3] For important early discussions of how the assumption that logical truths get probability 1 makes trouble for decision theory, see Savage (1967, 308) and Hacking (1967). For …
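The dialectic here lends itself to a small worked example. The sketch below is our own illustration, not the paper’s formal theory: the utilities, the effort cost, the 0.5 guess credence, and the dictionary encoding of fragments are hypothetical numbers and names invented for the example. It shows how an omniscience-forced credence of 1 in the solution makes answering from memory come out better than looking, and how a fragment-relative credence restores the everyday verdict.

```python
# A minimal sketch (an illustration, not the paper's formalism) of the
# Watson case. All quantities are hypothetical: survival is worth 1,
# death 0, and looking at the note costs a small effort EFFORT_COST.

EFFORT_COST = 0.01

def eu_look() -> float:
    # The note settles the answer, so looking guarantees survival,
    # minus the small effort cost of retrieving the note.
    return 1.0 - EFFORT_COST

def eu_guess(p_correct: float) -> float:
    # Answering from memory pays off only if the answer is right.
    return p_correct * 1.0

# Standard Bayesianism: logical omniscience forces credence 1 in the
# solution, so answering from memory (EU = 1.0) beats looking
# (EU = 0.99), and the theory cannot recommend looking at the note.
assert eu_guess(p_correct=1.0) > eu_look()

# A fragmented state of mind: credences indexed by choice condition.
# Under the condition Watson actually faces (seconds before the
# deadline, problem unsolved), his credence that an unaided answer
# is correct is low, say 0.5. (Condition names are made up here.)
fragments = {
    "deadline_problem_unsolved": {"unaided_answer_correct": 0.5},
    "after_reading_note":        {"unaided_answer_correct": 1.0},
}

# Relative to the active fragment, looking (EU = 0.99) beats
# guessing (EU = 0.5), recovering the everyday verdict.
active = fragments["deadline_problem_unsolved"]
assert eu_look() > eu_guess(active["unaided_answer_correct"])
```

The design point the example gestures at is the one stated in the abstract: expected utility is evaluated against the credence function associated with the agent’s current choice condition, rather than against a single global probability function that already assigns probability 1 to every logical truth.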