In our increasingly technology-driven and mediatized society, we face the existential risk of falling into an info-calypse as much as an eco-calypse. To complement the list of values of a progressive culture put forth by Harrison (Natl Interest 60:55–65, 2000) and Vuong (Econ Bus Lett 10(3):284–290, 2021), this short essay proposes cultivating a new cultural value of protecting the infosphere. It argues that rewarding practices and products that strengthen the integrity of the infosphere, as part of newly emerging corporate social responsibility (CSR) practices, is highly beneficial for the fight against contamination of the infosphere, i.e., misinformation, disinformation, damaging content, etc.
If I decide to disclose information about myself, this act may undermine other people's ability to conceal information about themselves. Such dependencies are called privacy dependencies in the literature. Some say that privacy dependencies generate moral duties to avoid sharing information about oneself. If true, we argue, then it is sometimes justified for others to impose harm on the person sharing information in order to prevent them from doing so. In this paper, we first show how such conclusions arise. Next, we show that the existence of this dependency, between the moral significance one is inclined to attribute to privacy dependencies and judgments about permissible self-defense, puts pressure on at least some ways of spelling out the idea that privacy dependencies ought to constrain our data-sharing conduct.
Well-being is an important policy concept, including in discussions around the use of artificial intelligence (AI), machine learning (ML) and robotics. Disabled people experience challenges to their well-being. Therefore, the aim of our scoping review of academic abstracts, employing Scopus, IEEE Xplore, Compendex and the 70 databases of EBSCO-HOST as sources, was to better understand how the academic literature on AI/ML/robotics engages with well-being in relation to disabled people. Our objective was to answer the following research question: how and to what extent does the AI/ML/robotics literature we covered include well-being in relation to disabled people? We found 2071 academic abstracts covering AI/ML and well-being, and 1055 covering robotics and well-being. Within these, only 39 of the AI/ML abstracts and 48 of the robotics abstracts covered well-being in relation to disabled people. The tone of the coverage was techno-positive and techno-optimistic, arguing that AI/ML/robotics could improve the well-being of disabled people in general, or improve well-being by helping disabled people overcome their 'disability' or by making tasks easier. No negative effects that AI/ML/robotics could have or have had on the well-being of disabled people were mentioned. Disabled people were portrayed only in patient, client, or user roles, not in their roles as stakeholders in discussions of AI/ML/robotics governance. This biased and limited coverage of the impact of AI/ML/robotics on the well-being of disabled people disempowers disabled people.
The purpose of this research is to explore the acceptance of social robots as companions. Understanding what affects the acceptance of humanoid companions may give society tools to help people overcome loneliness during pandemics such as COVID-19 and beyond. Based on regulatory focus theory, it is proposed that there is a relationship between goal-directed motivation and acceptance of robots as companions. Regulatory focus theory posits that goal-directed behavior is regulated by two motivational systems: promotion and prevention. People with a promotion focus are concerned with accomplishments, are sensitive to the presence and absence of positive outcomes (gains/non-gains), and have a strategic preference for eager means of goal pursuit. People with a prevention focus are concerned with security and safety, are sensitive to the absence and presence of negative outcomes (non-losses/losses), and have a strategic preference for vigilant means. Two studies support the notion of a relationship between acceptance of robots as companions and regulatory focus. In Study 1, chronic promotion focus was associated with acceptance of robots, and this association was mediated by loneliness: the weaker the promotion focus, the stronger the sense of loneliness, and thus the higher the acceptance of the robots. In Study 2, a situationally induced regulatory focus moderated the association between acceptance of robots and perceived severity of COVID-19: the higher the perceived severity of the disease, the greater the willingness to accept the robots, and the effect was stronger under an induced prevention (vs. promotion) focus. Models of acceptance of robots are presented. Implications for well-being are discussed.