Stöltzner coined the expression ‘Vienna indeterminism’ to describe a philosophical tradition centered on the Viennese physicist Exner, serving as the ‘historical link’ between Mach and Boltzmann, on the one hand, and von Mises and Frank, on the other. During the early 1930s debate on quantum mechanics, there was a ‘rapprochement’ between Vienna indeterminism and Schlick’s work on causality. However, it was Cassirer’s 1936 monograph Determinismus und Indeterminismus that showed a full ‘convergence’ with major tenets of Vienna indeterminism: the fundamentality of statistical laws, the frequency interpretation of probability, and the statistical interpretation of the uncertainty relations. Yet Cassirer used these conceptual tools to pursue different philosophical goals ‘in parallel.’ While for the Viennese, quantum mechanics represented a fatal blow to the already discredited notion of ‘causality,’ for Cassirer it challenged the classical notion of ‘substantiality,’ the idea of ‘particles’ as individual substances endowed with properties. The paper concludes that this ‘parallel convergence’ is the most striking and overlooked aspect of Determinismus und Indeterminismus, serving as the keystone of its argumentative structure.
Alan Hodgkin and Andrew Huxley used abductive reasoning to draw conclusions about the ionic basis of the action potential. Here we build on that initial proposal. First, we propose that Hodgkin and Huxley’s constitutive abductive reasoning has four features. Second, we argue that Hodgkin and Huxley are not alone in giving such arguments: Tolman (1948) and Baumgartner (1960) gave them as well. The implication is that such arguments are common enough in science that philosophers of science should pay more attention to them. Third, we propose that constitutive abduction describes a method that scientists use to confirm constitutive hypotheses, one that is an alternative to the mutual manipulability approach familiar from the New Mechanist literature.
In operational quantum mechanics, two measurements are called operationally equivalent if they yield the same distribution of outcomes in every quantum state and hence are represented by the same operator. In this paper, I will show that the ontological models for quantum mechanics and, more generally, for any operational theory sensitively depend on which measurement we choose from the class of operationally equivalent measurements, or, more precisely, on which of the chosen measurements can be performed simultaneously. To this end, I will first take three examples—a classical theory, the EPR-Bell scenario, and the Popescu-Rohrlich box; then realize each example by two operationally equivalent but different operational theories—one with a trivial and another with a non-trivial compatibility structure; and finally show that the ontological models for the different theories differ with respect to their causal structure, contextuality, and fine-tuning.
Many philosophers have explored the extensive use of non-universal generalizations in different sciences for inductive and explanatory purposes, analyzing properties such as how widely a generalization holds in space and time. In the present paper, we concentrate on developmental biology to distinguish and characterize two common approaches to scientific generalization—mechanism generalization and principle generalization. The former seeks detailed descriptions of causal relationships among specific types of biological entities that produce a characteristic phenomenon across some range of different biological entities; the latter abstractly describes relations or interactions that occur during ontogeny and are exemplified in a wide variety of different biological entities. These two approaches to generalization correspond to different investigative aims. Our analysis shows why each approach is sought in a given research context, thereby accounting for how practices of inquiry are structured. It also diagnoses problematic assumptions in prior discussions, such as the assumption that abstraction is always positively correlated with generalizations of wide scope.
Performativity of science refers to the phenomenon that the dissemination of scientific conceptualisations can sometimes affect their target systems or referents. A widely held view in the literature is that scientists ought not to deliberately deploy performative models or theories with the aim of eliciting desirable changes in their target systems. This paper has three aims. First, I cast and defend this received view as a worry about autonomy-infringing paternalism and, to that end, develop a taxonomy of the harms it can impose. Second, I consider various approaches to this worry within the extant literature and argue that they offer only unsatisfactory responses. Third, I advance two positive claims: manipulation of target systems is (a) not inherently paternalist and can be unproblematic, and (b) sometimes paternalist, but whenever such paternalism is inescapable, it must be justifiable. I develop this point by generalising an example of modelling international climate change coordination.
This paper aims to evaluate the relevance of Kant’s much-discussed essentialism and mechanism for present-day philosophy of psychiatry. Kendler et al. (Psychological Medicine 41(6):1143–1150, 2011) have argued that essentialism is inadequate for conceptualizing psychiatric disorders. In this paper, I develop this argument in detail by highlighting a variety of essentialism that differs from the one rejected by Kendler et al. I show that Kant’s essentialism is not directly affected by their argument, and that their argument also does not affect other essentialist positions in psychiatry. Hence, the rejection of essentialism in psychiatry needs more arguments than the one supplied by Kendler et al. Nevertheless, the study of current psychiatry also provides reasons to reject Kant’s essentialism and his transcendental project. I argue that Kant’s theory of mechanical explanation is more relevant for analyzing present-day philosophy of psychiatry, insofar as (a) modern psychiatric research into the causes of psychiatric disorders fits the mechanist paradigm, (b) Kant’s theory of mechanical explanation is importantly similar to modern theories of mechanical explanation applicable to psychiatry, such as those of Bechtel and associates, and (c) Kant’s stance that mechanism constitutes a regulative ideal points to useful arguments for the pursuit of mechanical explanations in psychiatry.
Modal Empiricism in philosophy of science proposes to understand the possibility of modal knowledge from experience by replacing talk of possible worlds with talk of possible situations, which are coarse-grained, bounded and relative to background conditions. This allows for an induction towards objective necessity, assuming that actual situations are representative of possible ones. The main limitation of this epistemology is that it does not account for probabilistic knowledge. In this paper, we propose to extend Modal Empiricism to the probabilistic case, thus providing an inductivist epistemology for probabilistic knowledge. The key idea is that extreme probabilities, close to 1 and 0, serve as proxies for testing mild probabilities, using a principle of model combination.
Empiricism has a long and venerable history. Aristotle, the Epicureans, Sextus Empiricus, Bacon, Locke, Hume, Mill, Mach and the Logical Empiricists, among others, represent a long line of historically influential empiricists who, one way or another, placed an emphasis on knowledge gained through the senses. In recent times, the most highly articulated and influential version of empiricism is undoubtedly Bas van Fraassen’s constructive empiricism. Science, according to this view, aims at empirically adequate theories, i.e. theories that save all and only the observable phenomena. Roughly put, something is observable in van Fraassen’s view if members of the human epistemic community can detect it with their unaided senses. Critics have contested this notion, objecting, among other things, that much of what counts as knowledge for scientists, especially in the natural sciences, concerns things that are detectable only with instruments, i.e. things that are unobservable and hence unknowable by van Fraassen’s lights. The current paper seeks to overcome this objection by putting forth and defending a liberalised conception of observability and an associated, and accordingly liberalised, conception of empiricism. ‘Grounded observability’ and ‘grounded empiricism’, as we call them, unchain themselves from the burdens of traditional conceptions of experience, while at the same time tethering themselves to the source of epistemic credibility in the senses, and hence to the true spirit of empiricism.

