In leaning forward to better see the details of a Breughel wedding scene, an elderly man with thick glasses bumped his head on the wooden frame. He saw stars. "Careful, that painting is irreplaceable," the guard said. "Please stand back a few feet."
Within bioethics, two issues dominate the discourse on suffering: its nature (who can suffer and how) and whether suffering is ever grounds for providing, withholding, or discontinuing interventions. The discussion has focused on the subjective experience of suffering in acute settings or on persistent suffering resulting from terminal or chronic illness. The bioethics literature on suffering is thus silent about a crucial piece of the moral picture: agents' intersubjectivity. This paper argues that an account of the intersubjective effects of suffering on caregivers could enrich theories of suffering in two ways: first, by clarifying the scope of suffering beyond the individual at its epicenter, i.e., by providing a fuller account of the effects of suffering (good or bad); and second, by drawing attention to how and why, in clinical contexts, the intersubjective dimensions of suffering are sometimes as important as, if not more important than, whether an individual is suffering at all.
Neurorights are widely discussed as a means of protecting phenomena like cognitive liberty and freedom of thought. This article is especially interested in cases where these protections are sought in light of fast-paced developments in neurotechnologies that appear capable of reading the mind in some significant sense. While it is prudent to take care and to seek to protect the mind from prying, questions remain over the kinds of claims that prompt concerns about mind reading. The nature of these claims should influence how exactly rights may or may not offer justifiable solutions. The exploration of neurotechnological mind reading here proceeds in terms of philosophical accounts of mental content and of neuroreductionism. The contribution is to contextualize the questions arising from 'mind-reading' neurotechnology and to appraise whether, and how, neurorights respond to them.
Increased interest in suffering has given rise to different accounts of what suffering is. This paper focuses on the debate between experientialists and non-experientialists about suffering. The former hold that suffering is necessarily experiential, for instance because it is necessarily unpleasant or painful; the latter deny this, for instance because one can suffer when and because one's objective properties are damaged, even if one does not experience this. After surveying how the two accounts fare on a range of issues, the paper presents a decisive argument in favor of experientialism. The central claim is that non-experientialist accounts cannot accommodate cases of suffering that are virtuous and that directly contribute to some objective good.
There is an ongoing debate in bioethics regarding the nature of suffering. This conversation revolves around the following question: What kind of thing, exactly, is suffering? Specifically, is suffering a subjective phenomenon (intrinsically linked to personhood, personal values, feelings, and lived experience) or an objective affair (amenable to impersonal criteria and existing as an independent feature of the natural world)? Notably, the implications of this determination are politically and ethically significant. This essay attempts to bring clarity to the subjective versus objective debate in suffering scholarship by examining the history of the concept of "objectivity," and by putting that history in conversation with physician Eric Cassell's famous theory of suffering. It concludes with a novel, albeit tentative, definition of suffering: suffering is the experience of a gap between how things are and how things ought to be.
Who should decide what passes for disinformation in a liberal democracy? During the COVID-19 pandemic, a committee set up by the Dutch Ministry of Health actively blocked disinformation. The committee comprised civil servants, communication experts, public health experts, and representatives of commercial online platforms such as Facebook, Twitter, and LinkedIn. To a large extent, vaccine hesitancy was attributed to disinformation, defined as misinformation (or misinterpreted data) with harmful intent. In this study, that question is answered by reflecting on what is needed for us to honor public reason: reasonableness, the willingness to engage properly in public discourse, and trust in the institutions of liberal democracy.
Parental surrogacy remains a highly controversial issue in contemporary ethics, with considerable variation in the legal approaches of different jurisdictions. A societal consensus on the issue remains elusive. John Rawls' theory of public reason, first developed in his A Theory of Justice (1971), offers a unifying model of political discourse and engagement that enables reasonable citizens to accept policies that they do not necessarily support at a personal level. The theory establishes a promising framework for private citizens with distinct moral positions on the subject to find common ground and, in doing so, to negotiate a consensus on the degree and nature of regulation that is palatable to all rational citizens.
Ethicists frequently suppose that suffering has special moral significance. It is often claimed that a main goal of medicine, perhaps its primary goal, is the alleviation of human suffering. Following Eric Cassell and others, this essay considers suffering understood as the experience of distress (negative emotions) in response to threats to something that one cares about. It examines whether, on this value-based account of suffering, we should accept the claim that suffering has special moral significance. It argues that we should not: suffering does not add significantly to the value of other human interests and rarely changes our moral obligations by itself; it merely seems to have strong moral relevance because it often attends interests that matter. This is because negative emotions themselves have only limited moral significance: their primary mental role is to indicate to us the relative importance of non-emotional goods.
Conscious but incapacitated patients need protection from both undertreatment and overtreatment, for they are exceptionally vulnerable and dependent on others to act in their interests. In the United States, the law prioritizes autonomy over best interests in decision making. Yet U.S. courts, using both substituted judgment and best interests decision-making standards, frequently prohibit the withdrawal of life-sustaining treatment from conscious but incapacitated patients, such as those in the minimally conscious state, even when ostensibly seeking to determine what patients would have wanted. In the United Kingdom, under the Mental Capacity Act of 2005, courts decide on the best interests of incapacitated patients by, in part, taking into account the past wishes and values of the patient. This paper examines and compares those ethicolegal approaches to decision making on behalf of conscious but incapacitated patients. We argue for a limited interpretation of best interests such that the standard is properly used only when the preferences of a conscious but incapacitated patient are unknown and unknowable. When patient preferences and values are known or can be reasonably inferred, using a holistic, all-things-considered substituted judgment standard respects patient autonomy.
This interview with Peter Singer AI serves a dual purpose. It is an exploration of certain (utilitarian and related) views on sentience and its ethical implications. It is also an exercise in the emerging interaction between natural and artificial intelligence, presented not just as ethics of AI but, perhaps more importantly, as ethics with AI. The one asking the questions, Matti Häyry, is a person in the contemporary sense of the word, sentient and self-aware, whereas Peter Singer AI is an artificial intelligence persona created by Sankalpa Ghose, a person, through dialogue with Peter Singer, a person, to programmatically model and incorporate the latter's writings, presentations, recipes, and character qualities as a renowned philosopher. The interview indicates some subtle differences between natural perspectives and artificial representation, suggesting directions for further development. PSai, as the project is also known, is available for anyone to chat with, anywhere in the world, on almost any topic, in almost any language, at www.petersinger.ai.

