This article develops a new account of the relation “before” between events. It does so by taking the set of all states of an object, irrespective of any presupposed order, and then recovering the order between events by exploiting a characteristic asymmetry that appears on this set, called the “record asymmetry”. It is shown that the record asymmetry (1) implies a weak temporal order (“before or simultaneous with”) and (2) is necessary for measuring a strong temporal order (“before”). I then propose a condition necessary and sufficient for a strong temporal order in terms of the set of states of a single object. The upshot is that temporal ordering is not ontologically primitive but reducible to the record asymmetry; moreover, it is a local phenomenon that requires no global temporal structure of spacetime.
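For readers who want the order-theoretic background, a minimal sketch in standard notation (these are textbook definitions, not the paper’s own formalization): a weak temporal order can be modeled as a preorder \(\preceq\) (reflexive and transitive), and a strong temporal order as its strict part,

\[
x \prec y \;\iff\; x \preceq y \ \text{and}\ y \not\preceq x ,
\]

which is irreflexive and transitive, as “before” requires; \(\preceq\) additionally permits simultaneity, i.e. \(x \preceq y\) and \(y \preceq x\) for distinct \(x, y\).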
This paper explores the view that the vocabulary of metaphysical fundamentality is opaque, using Sider’s theory of structure as a motivating case study throughout. Two conceptions of fundamentality are distinguished, only one of which can explain why the vocabulary of fundamentality is opaque.
This paper employs epistemic decision theory to explore rational bridge principles between probabilistic beliefs and deductively cogent beliefs. I re-examine Hempel’s and Levi’s epistemic decision theories and generalize them by introducing a novel rationality norm for belief binarization: an agent ought to have the binary beliefs that maximize expected utility in light of their credences. I show that the proposed norm implies certain geometrical principles, namely convexity norms. Building on this framework, I critically evaluate the Humean thesis in Leitgeb’s stability theory of belief and Lin and Kelly’s tracking theory, and I establish impossibility results demonstrating that those theories violate the proposed norms and consequently fail to maximize expected utility. In contrast, I identify alternative approaches that satisfy all of the proposed norms, such as generating beliefs that minimize a Bregman divergence from credences. This epistemic decision theory for belief binarization can be compared with Dorst’s accuracy argument for the Lockean thesis. I conclude that deductively cogent expected accuracy maximizers are neither Lockean nor Humean.
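To fix ideas, here is a minimal sketch in standard notation (textbook forms only; the paper’s own utility function and divergence may differ): the norm directs an agent to adopt the deductively cogent belief set \(B\) that maximizes expected epistemic utility under her credence function \(c\), and one norm-satisfying recipe selects beliefs that minimize a Bregman divergence from the credences.

\[
B^{*} \in \operatorname*{arg\,max}_{B} \sum_{w \in W} c(w)\, u(B, w),
\qquad
D_{\phi}(b, c) = \phi(b) - \phi(c) - \langle \nabla \phi(c),\, b - c \rangle ,
\]

where \(W\) is the set of possible worlds, \(u(B, w)\) is the epistemic utility of holding \(B\) at \(w\), and \(\phi\) is any strictly convex function; with \(\phi(x) = \sum_i x_i^{2}\), \(D_{\phi}\) reduces to squared Euclidean distance.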
This paper offers a new interpretation of Hume’s Treatise as a work written by a methodological solipsist. It argues that Hume anticipates later developments by launching a Fodorian project that is to be realised by Carnapian means. Hume develops an explanatory theory of mental operations based on an analysis conducted by way of similarity recollections in the stream of experience. The paper first presents the case for Hume’s commitment to methodological solipsism and then offers a reconstruction of the methodology with which his project is to be executed. Hume proceeds by analysing perceptions and the connections between them to account for their “nature” and the “principles” underlying their interaction. His analyses reveal the solipsistic methodological credo that Hume did not make explicit.
A standard form of skeptical scenario, in the tradition of Descartes’ evil demon, raises the prospect that our sensory experiences are deceptive. A less familiar and importantly different kind of skeptical scenario raises the prospect that our beliefs have been debased (Schaffer, 2010). This paper provides a new and improved way of resisting this latter kind of debasing skepticism. Along the way, I explore how the debasing demon scenario connects with some potentially controversial epistemological principles and clear up various neglected or misunderstood points concerning debasing skepticism.
The field of AI safety considers whether and how AI development can be safe and beneficial for humans and other animals, and the field of AI welfare considers whether and how AI development can be safe and beneficial for AI systems. There is a prima facie tension between these projects, since some measures in AI safety, if deployed against humans and other animals, would raise questions about the ethics of constraint, deception, surveillance, alteration, suffering, death, disenfranchisement, and more. Is there in fact a tension between these projects? We argue that, considering all relevant factors, there is indeed a moderately strong tension—and it deserves more examination. In particular, we should devise interventions that can promote both safety and welfare where possible, and prepare frameworks for navigating any remaining tensions thoughtfully.
According to qualitativism, thisness is not a fundamental feature of reality; facts about particular things are metaphysically second-rate. In this paper, I advance an argument for qualitativism from ideological parsimony. Supposing that reality fundamentally contains an array of propertied things, non-qualitativists employ a distinct name (or constant) for each fundamental thing. I argue that these names encode a type of worldly structure (thisness structure) that offends against parsimony and that qualitativists can eliminate without incurring a comparable parsimony-offense.
This paper concerns the proxy problem: often machine learning programs utilize seemingly innocuous features as proxies for socially sensitive attributes, posing various challenges for the creation of ethical algorithms. I argue that to address this problem, we must first settle a prior question of what it means for an algorithm that only has access to seemingly neutral features to be using those features as “proxies” for, and so to be making decisions on the basis of, protected-class features. Borrowing resources from philosophy of mind and language, I argue that the answer depends on whether discrimination against those protected classes explains the algorithm’s selection of individuals. This approach rules out standard theories of proxy discrimination in law and computer science that rely on overly intellectual views of agent intentions or on overly deflationary views that reduce proxy use to statistical correlation. Instead, my theory highlights two distinct ways an algorithm can reason using proxies: either the proxies themselves are meaningfully about the protected classes, highlighting a new kind of intentional content for philosophical theories in mind and language; or the algorithm explicitly represents the protected-class features themselves, and proxy discrimination becomes regular, old, run-of-the-mill discrimination.

