Expected value, to a point: Moral decision‐making under background uncertainty
Christian Tarsney. Noûs, published 2025-02-26. DOI: 10.1111/nous.12544

Expected value maximization gives plausible guidance for moral decision‐making under uncertainty in many situations. But it has unappetizing implications in 'Pascalian' situations involving tiny probabilities of extreme outcomes. This paper shows, first, that under realistic levels of 'background uncertainty' about sources of value independent of one's present choice, a widely accepted and apparently innocuous principle—stochastic dominance—requires that prospects be ranked by the expected value of their consequences in most ordinary choice situations. But second, this implication does not hold when differences in expected value are driven by tiny probabilities of extreme outcomes. Stochastic dominance therefore lets us draw a surprisingly principled line between 'ordinary' and 'Pascalian' situations, providing a powerful justification for de facto expected value maximization in the former context while permitting deviations in the latter. Drawing this distinction is incompatible with an in‐principle commitment to maximizing expected value, but does not require too much departure from decision‐theoretic orthodoxy: it is compatible, for instance, with the view that moral agents must maximize the expectation of a utility function that is an increasing function of moral value.
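The core phenomenon can be illustrated numerically. The sketch below is my own toy construction, not the paper's: prospect B has a higher expected value than prospect A but does not stochastically dominate it on its own; once both are combined with wide, heavy-tailed background uncertainty (here a Cauchy distribution, chosen because thin-tailed backgrounds can break the result in the far tails), the higher-EV prospect comes to dominate.

```python
import math

def cauchy_cdf(t, scale):
    # CDF of a heavy-tailed (Cauchy) background-value distribution
    return 0.5 + math.atan(t / scale) / math.pi

# Prospect A: +1 for sure.  Prospect B: +0 or +3, each with probability 0.5.
# B has the higher expected value (1.5 > 1) but, taken alone, does not
# first-order stochastically dominate A (B can yield 0 < 1).

def cdf_A(t, F):
    return F(t - 1)

def cdf_B(t, F):
    return 0.5 * F(t) + 0.5 * F(t - 3)

F = lambda t: cauchy_cdf(t, scale=10.0)  # wide background uncertainty

# B-plus-background dominates A-plus-background iff its CDF is everywhere
# no greater than A's.  Check on a fine grid over [-200, 200].
grid = [x / 10 for x in range(-2000, 2001)]
dominates = all(cdf_B(t, F) <= cdf_A(t, F) + 1e-12 for t in grid)
print(dominates)
```

With the background removed (replace `F` with a point-mass CDF), the same check fails, since B's worst outcome falls below A's sure payoff.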
Sleeping Beauty and the demands of non‐ideal rationality
Wolfgang Schwarz. Noûs, published 2025-02-11. DOI: 10.1111/nous.12545

If an agent can't live up to the demands of ideal rationality, fallback norms come into play that take into account the agent's limitations. A familiar human limitation is our tendency to lose information. How should we compensate for this tendency? The Sleeping Beauty problem allows us to isolate this question, without the confounding influence of other human limitations. If the coin lands tails, Beauty can't preserve whatever information she has received on Monday: she is bound to violate the norms of ideal diachronic rationality. The considerations that support these norms, however, can still be used. I investigate how Beauty should update her beliefs so as to maximize the expected accuracy of her new beliefs. The investigation draws attention to important but neglected questions about the connection between rational belief and evidential support, about the status of ideal and non‐ideal norms, about the dependence of epistemic norms on descriptive facts, and about the precise formulation of expected accuracy measures. It also sheds light on the puzzle of higher‐order evidence.
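Why "the precise formulation of expected accuracy measures" matters can be seen in a toy Brier-score calculation (my own sketch, not the paper's formalism): the credence in Heads that maximizes expected accuracy depends on whether inaccuracy is aggregated per awakening or per world.

```python
# Beauty assigns credence c to Heads at each awakening.  Heads world: one
# awakening (truth value 1); Tails world: two awakenings (truth value 0).
# Brier inaccuracy of credence c at an awakening with truth v is (c - v)**2.

def per_awakening(c):
    # every awakening counts once: 1 Heads awakening, 2 Tails awakenings
    return 0.5 * (c - 1) ** 2 + 0.5 * 2 * (c - 0) ** 2

def per_world(c):
    # every world counts once; the two Tails awakenings are averaged
    return 0.5 * (c - 1) ** 2 + 0.5 * (c - 0) ** 2

grid = [i / 1000 for i in range(1001)]
best_awakening = min(grid, key=per_awakening)  # ~1/3 (the "thirder" answer)
best_world = min(grid, key=per_world)          # 1/2 (the "halfer" answer)
print(best_awakening, best_world)
```

The two aggregation rules reward different credences, which is one reason the choice of accuracy measure cannot be left implicit.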
Loops and the geometry of chance
Jens Jäger. Noûs, published 2025-01-21. DOI: 10.1111/nous.12541

Suppose your evil sibling travels back in time, intending to lethally poison your grandfather during his infancy. Determined to save grandpa, you grab two antidotes and follow your sibling through the wormhole. Under normal circumstances, each antidote has a 50% chance of curing a poisoning. Upon finding young grandpa, poisoned, you administer the first antidote. Alas, it has no effect. The second antidote is your last hope. You administer it—and success: the paleness vanishes from grandpa's face, he is healed. As you administered the first antidote, what was the chance that it would be effective? This essay offers a systematic account of this case, and others like it. The central question is this: Given a certain time travel structure, what are the chances? In particular, I'll develop a theory about the connection between these chances and the chances in ordinary, time‐travel‐free contexts. Central to the account is a Markov condition involving the boundaries of spacetime regions.
A trope‐theoretic solution to the missing value problem
Paul Audi. Noûs, published 2025-01-18. DOI: 10.1111/nous.12543

One metaphysical problem about laws is how to find appropriate truthmakers for fully general functional laws. What makes it true, for instance, that an uninstantiated mass would interact with others as prescribed by laws concerning mass? This is the missing value problem. D. M. Armstrong attempted to solve it by appeal to determinable universals. I will offer a trope‐theoretic solution that, while in some ways more metaphysically adventurous than Armstrong's view, avoids commitment to universals and determinables (as different from their determinates). The solution makes use of a special conception of tropes as capable of intrinsic change. It also makes use of a distinction between two ways of having a causal power (a distinction we should make in any case). Existing powers‐based approaches to the problem struggle to avoid the idea that powers mysteriously point beyond themselves. But if tropes are capable of intrinsic change in the way I propose, they can account for the full generality of laws with minimal pointing beyond, and can do so while retaining natures that are credibly intrinsic.
The simplicity of physical laws
Eddy Keming Chen. Noûs, published 2025-01-18. DOI: 10.1111/nous.12542

Physical laws are strikingly simple, yet there is no a priori reason for them to be so. I propose that nomic realists—Humeans and non‐Humeans—should recognize simplicity as a fundamental epistemic guide for discovering and evaluating candidate physical laws. This proposal helps resolve several longstanding problems of nomic realism and simplicity. A key consequence is that the presumed epistemic advantage of Humeanism over non‐Humeanism dissolves, undermining a prominent epistemological argument for Humeanism. Moreover, simplicity is shown to be more connected to lawhood than to mere truth.
The question‐centered account of harm and benefit
Aaron Thieme. Noûs, published 2024-12-16. DOI: 10.1111/nous.12540

The counterfactual comparative account of harm and benefit (CCA) has faced a barrage of objections from cases involving preemption, overdetermination, and choice. In this paper I provide a unified diagnosis of CCA's vulnerability to these objections: CCA is susceptible to them because it evaluates each act by the same criterion. This is a mistake because, in a sense I make precise, situations raise prudential questions, and only some acts—the relevant alternatives—are directly relevant to these questions. To answer the objections, we must revise CCA so that its evaluations foreground the relevant alternatives. The result is a question‐centered account of harm and benefit.
The Bayesian and the abductivist
Mattias Skipper, Olav Benjamin Vassend. Noûs, published 2024-11-21. DOI: 10.1111/nous.12539

A major open question in the borderlands between epistemology and philosophy of science concerns whether Bayesian updating and abductive inference are compatible. Some philosophers—most influentially Bas van Fraassen—have argued that they are not. Others have disagreed, arguing that abduction, properly understood, is indeed compatible with Bayesianism. Here we present two formal results that allow us to tackle this question from a new angle. We start by formulating what we take to be a minimal version of the claim that abduction is a rational pattern of reasoning. We then show that this minimal abductivist principle, when combined with Bayesian updating by conditionalization, places surprisingly strong and controversial constraints on how we must measure explanatory power. The lesson is not that Bayesianism is definitely incompatible with abduction, but that both compatibilism and incompatibilism have hitherto unrecognized consequences. We end the paper by formulating these consequences in the form of a trilemma.
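The interaction the abstract describes can be made concrete in a toy case (my own illustration, not the authors' formal results): conditionalization fixes the posteriors, and a candidate explanatory-power measure from the literature (the log ratio of likelihood to prior evidence probability) ranks highest exactly the hypothesis whose credence rises.

```python
import math

# Two rival hypotheses and one piece of evidence E.
priors = {"H1": 0.5, "H2": 0.5}
likelihood = {"H1": 0.8, "H2": 0.2}  # P(E | H)
p_e = sum(priors[h] * likelihood[h] for h in priors)  # P(E) by total probability

# Bayesian updating by conditionalization: P(H | E) = P(H) * P(E|H) / P(E)
posterior = {h: priors[h] * likelihood[h] / p_e for h in priors}

# One candidate measure of explanatory power: log P(E|H) / P(E)
power = {h: math.log(likelihood[h] / p_e) for h in priors}

# In this toy case the better explainer is the hypothesis that gets boosted.
assert (power["H1"] > power["H2"]) == (posterior["H1"] > priors["H1"])
print(round(posterior["H1"], 2))
```

The paper's point is precisely that such agreement cannot be taken for granted: requiring it in general constrains which power measures are admissible.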
Heavy‐duty conceptual engineering
Steffen Koch, Jakob Ohlhorst. Noûs, published 2024-11-18. DOI: 10.1111/nous.12538

Conceptual engineering is the process of assessing and improving our conceptual repertoire. Some authors have claimed that introducing or revising concepts through conceptual engineering can go as far as expanding the realm of thinkable thoughts and thus enable us to form beliefs, hypotheses, wishes, or desires that we are currently unable to form. If true, this would allow conceptual engineers to contribute to solving stubborn problems: problems that cannot be solved with our current ways of thinking. We call this kind of conceptual engineering heavy‐duty conceptual engineering. As exciting as the idea of heavy‐duty conceptual engineering sounds, it has never been developed or defended. In this paper, we pursue a twofold goal: first, to offer a theory of heavy‐duty conceptual engineering that distinguishes it from other kinds of conceptual engineering; second, to show that heavy‐duty conceptual engineering is possible, both in theory and in practice, and to explain how it can be applied in the service of solving stubborn problems. The central idea is that heavy‐duty conceptual engineering can enhance the semantic expressive power of a conceptual system by the use of bootstrapping processes.
A style guide for the structuralist
Lucy Carr. Noûs, published 2024-11-18. DOI: 10.1111/nous.12537

Ontic structuralists claim that there are no individual objects, and that reality should instead be thought of as a "web of relations". It is difficult to make this metaphysical picture precise, however, since languages usually characterize the world by describing the objects that exist in it. This paper proposes a solution to the problem; I argue that when discourse is reformulated in the language of the calculus of relations (an algebraic logic developed by Alfred Tarski) it can be interpreted without presupposing the existence of objects. What is distinctive about the language of the calculus is that it contains no operator that resembles a quantifier, and yet it can be used to paraphrase any sentence expressible in first‐order logic. Since the use of a first‐order quantifier (or some similar operator) is usually what establishes commitment to an ontology of objects, and since the calculus of relations eschews the quantifier in favor of a composition operator that can be given a natural interpretation consistent with structuralist metaphysics, the calculus is an ideal language for the structuralist to use to describe the world.
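The key device, composition absorbing an existential quantifier, can be modelled over a finite toy domain (my own sketch; the paper works with Tarski's full calculus): the quantified sentence "there is a y such that R(x, y) and S(y, z)" is captured by the quantifier-free relative product R ; S.

```python
# Binary relations over a small finite domain, modelled as sets of pairs.
domain = {1, 2, 3}
R = {(1, 2), (2, 3)}
S = {(2, 3), (3, 1)}

def compose(R, S):
    # R ; S = {(x, z) : there is some y with (x, y) in R and (y, z) in S}
    return {(x, z) for (x, y1) in R for (y2, z) in S if y1 == y2}

def converse(R):
    # R-breve = {(y, x) : (x, y) in R}, another primitive of the calculus
    return {(y, x) for (x, y) in R}

# Check the quantifier-free paraphrase against explicit first-order
# quantification over the domain.
fo = {(x, z) for x in domain for z in domain
      if any((x, y) in R and (y, z) in S for y in domain)}
assert compose(R, S) == fo
print(sorted(compose(R, S)))
```

The existential quantifier never appears in the algebraic term itself; it is internalized by the composition operation, which is the feature the paper exploits.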