This paper argues, against the prevailing view, that the consent regular internet users typically give to privacy policies is largely unproblematic from a moral point of view. To substantiate this claim, we rely on the idea of the right not to know (RNTK), as developed by bioethicists. Defenders of the RNTK in the bioethical literature on informed consent claim that patients generally have the right to refuse medically relevant information. In this article we extend the application of the RNTK to online privacy. We then argue that if internet users can be thought of as exercising their RNTK before consenting to privacy policies, their consent ought to be considered free of the standard charges leveled against it by critics.
Considerable attention has recently been devoted to the notion of Trustworthy AI (TAI). However, the very applicability of the notions of trust and trustworthiness to AI systems has been called into question. A purely epistemic account of trust can hardly ground the distinction between trustworthy and merely reliable AI, while it has been argued that insisting on the importance of the trustee’s motivations and goodwill makes the notion of TAI a category error. After providing an overview of the debate, we contend that the prevailing views on trust and AI fail to account for the ethically relevant and value-laden aspects of the design and use of AI systems, and we propose an understanding of the notion of TAI that explicitly aims at capturing these aspects. The problems involved in applying trust and trustworthiness to AI systems are overcome by distinguishing trust in AI systems from interpersonal trust. These notions share a conceptual core but should be treated as distinct.
Human oversight has become a key mechanism for the governance of artificial intelligence (“AI”). Human overseers are supposed to increase the accuracy and safety of AI systems, uphold human values, and build trust in the technology. Empirical research suggests, however, that humans are not reliable in fulfilling their oversight tasks. They may lack competence or be harmfully incentivised. This creates a challenge for human oversight to be effective. In addressing this challenge, this article aims to make three contributions. First, it surveys the emerging laws of oversight, most importantly the European Union’s Artificial Intelligence Act (“AIA”). It will be shown that while the AIA is concerned with the competence of human overseers, it does not provide much guidance on how to achieve effective oversight and leaves oversight obligations for AI developers underdefined. Second, this article presents a novel taxonomy of human oversight roles, differentiated along whether human intervention is constitutive to, or corrective of, a decision made or supported by an AI. The taxonomy makes it possible to propose suggestions for improving effectiveness tailored to the type of oversight in question. Third, drawing on scholarship within democratic theory, this article formulates six normative principles which institutionalise distrust in human oversight of AI. The institutionalisation of distrust has historically been practised in democratic governance. Applied for the first time to AI governance, the principles anticipate the fallibility of human overseers and seek to mitigate it at the level of institutional design. They aim to directly increase the trustworthiness of human oversight and to indirectly inspire well-placed trust in AI governance.
In the present work, qualitative and quantitative studies were conducted to explore differences between stakeholders in expectations of gendered robots, with a focus on their specific application in the field of psychotherapy. In Study I, semi-structured interviews were conducted with 18 experts in psychotherapy to extract categories of opinions regarding the use of humanoid robots in the field. Based on these extracted categories, in Study II, an online questionnaire survey was conducted to compare concrete expectations of the use of humanoid robots in psychotherapy between 50 experts and 100 nonexperts in psychotherapy. The results revealed that, compared with the female participants, the male participants tended to prefer robots with a female appearance. In addition, compared with the experts, the nonexperts tended not to relate the performance of robots to their gendered appearance, and compared with the other participant groups, the female expert participants had lower expectations of the use of robots in the field. These findings suggest that differences between stakeholders in their expectations of gendered robots should be resolved to encourage the acceptance of such robots in a specific field.