Frauchiger and Renner (2018) argue that no ‘single-world’ theory can consistently maintain quantum mechanical predictions for all systems. Following Bub (2017, 2018, 2019), I argue here that this claim is overstated, and I use their result to develop a framework for neo-Copenhagen theories that avoid the problem. To describe the framework I introduce two concepts, ontological information deficits and information frames, and explore how these may ultimately be fleshed out by the theorist. I then consider some immediate worries that may be raised against the framework, and conclude by looking at how some existing theories may be seen to fit into it.
The principle of ‘information causality’ can be used to derive an upper bound—known as the ‘Tsirelson bound’—on the strength of quantum mechanical correlations, and has been conjectured to be a foundational principle of nature. To date, however, it has not been sufficiently motivated to play such a foundational role. The motivations that have so far been given are, as I argue, either unsatisfactorily vague or rest on little more than intuition. In this paper I therefore consider whether some way might be found to successfully motivate the principle. I propose that a compelling way of doing so is to understand it as a generalisation of Einstein's principle of the mutually independent existence—the ‘being-thus’—of spatially distant things. In particular, I first describe an argument, due to Demopoulos, to the effect that the so-called ‘no-signalling’ condition can be viewed as a generalisation of Einstein's principle appropriate for an irreducibly statistical theory such as quantum mechanics. I then argue that a compelling way to motivate information causality is to consider it, in turn, as a further generalisation of the Einsteinian principle appropriate for a theory of communication. I describe, however, some important conceptual obstacles that must yet be overcome if the project of establishing information causality as a foundational principle of nature is to succeed.
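For orientation, the bounds at issue can be stated in standard CHSH notation (the notation is supplied here for reference and is not taken from the paper itself). With two dichotomic settings per party, $a, a'$ for Alice and $b, b'$ for Bob, and correlators $E(x,y)$, define

\[
S = E(a,b) + E(a,b') + E(a',b) - E(a',b').
\]

Local hidden-variable theories satisfy $|S| \le 2$, quantum mechanics satisfies the Tsirelson bound $|S| \le 2\sqrt{2}$, while the no-signalling condition alone permits correlations (such as Popescu-Rohrlich boxes) reaching $|S| = 4$. Information causality, in its usual formulation, requires that if Alice sends Bob $m$ classical bits, the information Bob can extract about Alice's data $a_1, \ldots, a_N$ obeys $\sum_{k=1}^{N} I(a_k : \beta_k) \le m$, where $\beta_k$ is Bob's guess at $a_k$; this requirement suffices to recover the Tsirelson bound.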
This paper pushes back against the Democritean-Newtonian tradition of assuming a strict conceptual dichotomy between spacetime and matter. Our approach proceeds via the narrower distinction between modified gravity/spacetime (MG) and dark matter (DM). A prequel paper argued that the novel field postulated by Berezhiani and Khoury's ‘superfluid dark matter theory’ is as much (dark) matter as anything could possibly be, but also—below the critical temperature for superfluidity—as much (of a modification of) spacetime as anything could possibly be. Here we introduce and critically evaluate three groups of interpretations that one should consider for such Janus-faced theories. The consubstantiality interpretation holds that the field is both (dark) matter and a modification of spacetime, analogously to the sense in which Jesus (according to Catholicism) is both human and god. The fundamentalist interpretations consider, for each of these roles, whether it is instantiated fundamentally or emergently. The breakdown interpretations focus on the question of whether the field signals the breakdown, in some sense to be specified, of the MG-DM dichotomy and perhaps even the broader spacetime–matter distinction. More generally, it is argued that hybrid theories urge a move towards a single space of theories, rather than two separate spaces of spacetime theories and matter theories, respectively.
Considering a complicated extension of a Wigner's friend scenario, Frauchiger and Renner (FR) allegedly showed that “quantum theory cannot consistently describe the use of itself.” This result, however, has come under severe criticism, as it has been convincingly argued to depend crucially on an implicit, non-trivial assumption regarding details of the collapse mechanism. In consequence, the result is not as robust or general as intended. Beyond this, in this work we show that a much simpler arrangement—basically an EPR setting—is sufficient to derive a result fully analogous to that of FR. Moreover, we claim that all lessons learned from FR's result are essentially contained within the original EPR paper. We conclude that FR's result does not offer any novel insights into the conceptual problems of quantum theory.
Newton's Principia begins with eight formal definitions and a scholium, the so-called scholium on space and time. Despite a history of misinterpretation, scholars now largely agree that the purpose of the scholium is to establish and defend the definitions of key concepts. There is no consensus, however, on how those definitions differ in kind from the Principia's formal definitions and why they are set off in a scholium. The purpose of the present essay is to shed light on the scholium by focusing on Newton's notion and use of definition. The resulting view is developmental. I argue that when Newton first wrote the Principia, he viewed the scholium's definitions as items of “natural philosophy.” By the time of the third edition, however, he came to view their methodological status differently: he viewed them as belonging to the more qualified “manner of geometers.” I explicate the two methods of natural inquiry and draw out their implications for Newton's account of space.
When is it reasonable to abandon a scientific research program? When would it be premature? We take up these questions in the context of a contemporary debate at the border between astrophysics and cosmology, the so-called “small-scale challenges” to the concordance model of cosmology (ΛCDM) and its cold dark matter paradigm. These challenges consist in discrepancies between the outputs of leading cosmological simulations and observational surveys, and have garnered titles such as the Missing Satellites, Too Big To Fail, and Cusp/Core problems. We argue that these challenges do not currently support a wholesale abandonment or even modification of cold dark matter. Indeed, the nature of the challenges suggests prioritizing the incorporation of known physics into cosmological simulations.
It is widely accepted that the violation of Bell inequalities excludes local theories of the quantum realm. This paper presents a new derivation of the inequalities from non-trivial non-local theories and formulates a stronger Bell argument that excludes these non-local theories as well. Taking into account all possible theories, the conclusion of this stronger argument is provably the strongest possible consequence of the violation of Bell inequalities on a qualitative probabilistic level (given the usual background assumptions). Among the excluded theories is a subset of outcome-dependent theories, which shows that outcome dependence is not sufficient to explain a violation of Bell inequalities. Non-local theories which can violate Bell inequalities (among them quantum theory) are instead characterized by the fact that at least one of the measurement outcomes probabilistically depends, in a sense that is made precise, on both its local and its distant measurement setting (‘parameter’). When Bell inequalities are found to be violated, the real choice is not between outcome dependence and parameter dependence, but between two kinds of parameter dependence, one of which is what is usually called ‘parameter dependence’. Against the received view, established by Jarrett and Shimony, that on a probabilistic level quantum non-locality amounts to outcome dependence, this result confirms and makes precise Maudlin's claim that some kind of parameter dependence is required.
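In the standard notation (supplied here for reference, not drawn from the paper itself), with outcomes $A, B$, settings $a, b$, and a complete specification of the state $\lambda$, the conditions at issue are

\[
\begin{aligned}
\text{Factorizability:} \quad & P(A,B \mid a,b,\lambda) = P(A \mid a,\lambda)\, P(B \mid b,\lambda),\\
\text{Parameter independence:} \quad & P(A \mid a,b,\lambda) = P(A \mid a,\lambda) \ \text{and} \ P(B \mid a,b,\lambda) = P(B \mid b,\lambda),\\
\text{Outcome independence:} \quad & P(A \mid a,b,B,\lambda) = P(A \mid a,b,\lambda) \ \text{and} \ P(B \mid a,b,A,\lambda) = P(B \mid a,b,\lambda).
\end{aligned}
\]

Factorizability, the assumption from which Bell inequalities are standardly derived, is equivalent to the conjunction of parameter independence and outcome independence; the Jarrett-Shimony view locates the quantum violation in a failure of outcome independence only.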
Jeffrey Bub (2018) investigates what we can learn about quantum mechanics from the structure of the correlations it predicts, apart from its detailed mathematical machinery. The present discussion is in the spirit of Bub's project. I examine two arguments, one from Clifton, Pagonis and Pitowsky (1992) and the other from Maudlin (2014). If either is correct, the non-signaling correlations by themselves entail that the quantum world is causally nonlocal. This paper calls both arguments into question. However, it also points out that even if the criticisms succeed, this does not settle whether quantum mechanics is causally nonlocal; the answer to that question depends on considerations that go beyond the correlations alone. Arthur Fine's “random devices in harmony” (Fine, 1981) will serve as a tool for thinking about correlations that violate Bell-type inequalities.
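For reference (again in notation not drawn from the paper), the non-signaling condition on a joint distribution $P(A,B \mid a,b)$ is the requirement that each party's marginal statistics be independent of the other party's setting:

\[
\sum_{B} P(A,B \mid a,b) = \sum_{B} P(A,B \mid a,b') \equiv P(A \mid a), \qquad
\sum_{A} P(A,B \mid a,b) = \sum_{A} P(A,B \mid a',b) \equiv P(B \mid b).
\]

Quantum correlations that violate Bell-type inequalities nonetheless satisfy this condition.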