The use of social robotics in care for older persons is increasingly discussed as one way of meeting growing care needs amid scarce resources. While many potential benefits are associated with robotic care technologies, their use also raises a variety of ethical challenges. To support steps towards responsible implementation and use, this review develops an overview of ethical aspects of the use of social robots in care for older people from a decision-makers' perspective. Electronic databases were queried using a comprehensive search strategy based on the key concepts of "ethical aspects", "social robotics" and "elderly care". Title and abstract screening was conducted by two authors independently. Full-text screening was conducted by one author following a joint consolidation phase. Data were extracted by one author using MAXQDA24, based on a consolidated coding framework. Analysis was performed through modified qualitative content analysis. A total of 1,518 publications were screened, and 248 publications were included. We organized our analysis into a scheme of ethical hazards, ethical opportunities and unsettled questions, identifying at least 60 broad ethical aspects affecting three different stakeholder groups. While some ethical issues are well known and broadly discussed, our analysis reveals a plethora of potentially relevant aspects that are often only marginally recognized yet worthy of consideration from a practical perspective. The findings highlight the need for a contextual and detailed evaluation of implementation scenarios. To make use of the vast knowledge contained in the ethical discourse, we hypothesize that decision-makers need to understand the specific nature of this discourse in order to engage in careful ethical deliberation.
Since the early days of the explainable artificial intelligence movement, post hoc explanations have been praised for their potential to improve user understanding, promote trust, and reduce patient-safety risks in black box medical AI systems. Recently, however, critics have argued that the benefits of post hoc explanations are greatly exaggerated, since they merely approximate, rather than replicate, the actual reasoning processes that black box systems use to arrive at their outputs. In this paper, we defend the value of post hoc explanations against this recent critique. We argue that even if post hoc explanations do not replicate the exact reasoning processes of black box systems, they can still improve users' functional understanding of those systems, increase the accuracy of clinician-AI teams, and assist clinicians in justifying their AI-informed decisions. While post hoc explanations are not a silver-bullet solution to the black box problem in medical AI, they remain a useful strategy for addressing it.

