Examining the assumptions of AI hiring assessments and their impact on job seekers’ autonomy over self-representation
Pub Date: 2023-10-21 DOI: 10.1007/s00146-023-01783-1
Evgeni Aizenberg, Matthew J. Dennis, Jeroen van den Hoven
Abstract In this paper, we examine the epistemological and ontological assumptions algorithmic hiring assessments make about job seekers’ attributes (e.g., competencies, skills, abilities) and the ethical implications of these assumptions. Given that both traditional psychometric hiring assessments and algorithmic assessments share a common set of underlying assumptions from the psychometric paradigm, we turn to literature that has examined the merits and limitations of these assumptions, gathering insights across multiple disciplines and several decades. Our exploration leads us to conclude that algorithmic hiring assessments are incompatible with attributes whose meanings are context-dependent and socially constructed. Such attributes call instead for assessment paradigms that offer space for negotiation of meanings between the job seeker and the employer. We argue that in addition to questioning the validity of algorithmic hiring assessments, this raises an often overlooked ethical impact on job seekers’ autonomy over self-representation: their ability to directly represent their identity, lived experiences, and aspirations. Infringement on this autonomy constitutes an infringement on job seekers’ dignity. We suggest beginning to address these issues through epistemological and ethical reflection regarding the choice of assessment paradigm, the means to implement it, and the ethical impacts of these choices. This entails a transdisciplinary effort that would involve job seekers, hiring managers, recruiters, and other professionals and researchers. Combined with a socio-technical design perspective, this may help generate new ideas regarding appropriate roles for human-to-human and human–technology interactions in the hiring process.
{"title":"Examining the assumptions of AI hiring assessments and their impact on job seekers’ autonomy over self-representation","authors":"Evgeni Aizenberg, Matthew J. Dennis, Jeroen van den Hoven","doi":"10.1007/s00146-023-01783-1","DOIUrl":"https://doi.org/10.1007/s00146-023-01783-1","url":null,"abstract":"Abstract In this paper, we examine the epistemological and ontological assumptions algorithmic hiring assessments make about job seekers’ attributes (e.g., competencies, skills, abilities) and the ethical implications of these assumptions. Given that both traditional psychometric hiring assessments and algorithmic assessments share a common set of underlying assumptions from the psychometric paradigm, we turn to literature that has examined the merits and limitations of these assumptions, gathering insights across multiple disciplines and several decades. Our exploration leads us to conclude that algorithmic hiring assessments are incompatible with attributes whose meanings are context-dependent and socially constructed. Such attributes call instead for assessment paradigms that offer space for negotiation of meanings between the job seeker and the employer. We argue that in addition to questioning the validity of algorithmic hiring assessments, this raises an often overlooked ethical impact on job seekers’ autonomy over self-representation: their ability to directly represent their identity, lived experiences, and aspirations. Infringement on this autonomy constitutes an infringement on job seekers’ dignity. We suggest beginning to address these issues through epistemological and ethical reflection regarding the choice of assessment paradigm, the means to implement it, and the ethical impacts of these choices. This entails a transdisciplinary effort that would involve job seekers, hiring managers, recruiters, and other professionals and researchers. Combined with a socio-technical design perspective, this may help generate new ideas regarding appropriate roles for human-to-human and human–technology interactions in the hiring process.","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"62 4","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135513041","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Online consent: how much do we need to know?
Pub Date: 2023-10-13 DOI: 10.1007/s00146-023-01790-2
Bartlomiej Chomanski, Lode Lauwaert
Abstract This paper argues, against the prevailing view, that consent to privacy policies that regular internet users usually give is largely unproblematic from the moral point of view. To substantiate this claim, we rely on the idea of the right not to know (RNTK), as developed by bioethicists. Defenders of the RNTK in bioethical literature on informed consent claim that patients generally have the right to refuse medically relevant information. In this article we extend the application of the RNTK to online privacy. We then argue that if internet users can be thought of as exercising their RNTK before consenting to privacy policies, their consent ought to be considered free of the standard charges leveled against it by critics.
{"title":"Online consent: how much do we need to know?","authors":"Bartlomiej Chomanski, Lode Lauwaert","doi":"10.1007/s00146-023-01790-2","DOIUrl":"https://doi.org/10.1007/s00146-023-01790-2","url":null,"abstract":"Abstract This paper argues, against the prevailing view, that consent to privacy policies that regular internet users usually give is largely unproblematic from the moral point of view. To substantiate this claim, we rely on the idea of the right not to know (RNTK), as developed by bioethicists. Defenders of the RNTK in bioethical literature on informed consent claim that patients generally have the right to refuse medically relevant information. In this article we extend the application of the RNTK to online privacy. We then argue that if internet users can be thought of as exercising their RNTK before consenting to privacy policies, their consent ought to be considered free of the standard charges leveled against it by critics.","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135858627","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Keep trusting! A plea for the notion of Trustworthy AI
Giacomo Zanotti, Mattia Petrolo, Daniele Chiffi, Viola Schiaffonati
Pub Date: 2023-10-12 DOI: 10.1007/s00146-023-01789-9
Abstract A lot of attention has recently been devoted to the notion of Trustworthy AI (TAI). However, the very applicability of the notions of trust and trustworthiness to AI systems has been called into question. A purely epistemic account of trust can hardly ground the distinction between trustworthy and merely reliable AI, while it has been argued that insisting on the importance of the trustee’s motivations and goodwill makes the notion of TAI a categorical error. After providing an overview of the debate, we contend that the prevailing views on trust and AI fail to account for the ethically relevant and value-laden aspects of the design and use of AI systems, and we propose an understanding of the notion of TAI that explicitly aims at capturing these aspects. The problems involved in applying trust and trustworthiness to AI systems are overcome by keeping apart trust in AI systems and interpersonal trust. These notions share a conceptual core but should be treated as distinct ones.
{"title":"Keep trusting! A plea for the notion of Trustworthy AI","authors":"Giacomo Zanotti, Mattia Petrolo, Daniele Chiffi, Viola Schiaffonati","doi":"10.1007/s00146-023-01789-9","DOIUrl":"https://doi.org/10.1007/s00146-023-01789-9","url":null,"abstract":"Abstract A lot of attention has recently been devoted to the notion of Trustworthy AI (TAI). However, the very applicability of the notions of trust and trustworthiness to AI systems has been called into question. A purely epistemic account of trust can hardly ground the distinction between trustworthy and merely reliable AI, while it has been argued that insisting on the importance of the trustee’s motivations and goodwill makes the notion of TAI a categorical error. After providing an overview of the debate, we contend that the prevailing views on trust and AI fail to account for the ethically relevant and value-laden aspects of the design and use of AI systems, and we propose an understanding of the notion of TAI that explicitly aims at capturing these aspects. The problems involved in applying trust and trustworthiness to AI systems are overcome by keeping apart trust in AI systems and interpersonal trust. These notions share a conceptual core but should be treated as distinct ones.","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135969112","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Differences in stakeholders’ expectations of gendered robots in the field of psychotherapy: an exploratory survey
Tatsuya Nomura, Tomohiro Suzuki, Hirokazu Kumazaki
Pub Date: 2023-10-06 DOI: 10.1007/s00146-023-01787-x
Abstract In the present study, qualitative and quantitative studies were conducted to explore differences between stakeholders in their expectations of gendered robots, with a focus on the specific application of such robots in the field of psychotherapy. In Study I, semi-structured interviews were conducted with 18 experts in psychotherapy to extract categories of opinions regarding the use of humanoid robots in the field. Based on these extracted categories, in Study II, an online questionnaire survey was conducted to compare concrete expectations of the use of humanoid robots in psychotherapy between 50 experts and 100 nonexperts. The results revealed that the male participants tended to prefer robots with a female appearance more than the female participants did. In addition, the nonexperts were less likely than the experts to relate the performance of robots to their gendered appearance, and the female expert participants had lower expectations of the use of robots in the field than the other participant groups. These findings suggest that differences between stakeholders in their expectations of gendered robots should be resolved to encourage the acceptance of such robots in a specific field.
{"title":"Differences in stakeholders’ expectations of gendered robots in the field of psychotherapy: an exploratory survey","authors":"Tatsuya Nomura, Tomohiro Suzuki, Hirokazu Kumazaki","doi":"10.1007/s00146-023-01787-x","DOIUrl":"https://doi.org/10.1007/s00146-023-01787-x","url":null,"abstract":"Abstract In the present study, qualitative and quantitative studies were conducted to explore differences between stakeholders in expectations of gendered robots, with a focus on their specific application in the field of psychotherapy. In Study I, semi-structured interviews were conducted with 18 experts in psychotherapy to extract categories of opinions regarding the use of humanoid robots in the field. Based on these extracted categories, in Study II, an online questionnaire survey was conducted to compare concrete expectations of the use of humanoid robots in psychotherapy between 50 experts and 100 nonexperts in psychotherapy. The results revealed that compared with the female participants, the male participants tended to prefer robots with a female appearance. In addition, compared with the experts, the nonexperts tended not to relate the performance of robots with their gender appearance, and compared with the other participant groups, the female expert participants had lower expectations of the use of robots in the field. These findings suggest that differences between stakeholders regarding the expectations of gendered robots should be resolved to encourage their acceptance in a specific field.","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135350512","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Institutionalised distrust and human oversight of artificial intelligence: towards a democratic design of AI governance under the European Union AI Act
Pub Date: 2023-10-06 DOI: 10.1007/s00146-023-01777-z
Johann Laux
Abstract Human oversight has become a key mechanism for the governance of artificial intelligence (“AI”). Human overseers are supposed to increase the accuracy and safety of AI systems, uphold human values, and build trust in the technology. Empirical research suggests, however, that humans are not reliable in fulfilling their oversight tasks. They may be lacking in competence or be harmfully incentivised. This creates a challenge for human oversight to be effective. In addressing this challenge, this article aims to make three contributions. First, it surveys the emerging laws of oversight, most importantly the European Union’s Artificial Intelligence Act (“AIA”). It will be shown that while the AIA is concerned with the competence of human overseers, it does not provide much guidance on how to achieve effective oversight and leaves oversight obligations for AI developers underdefined. Second, this article presents a novel taxonomy of human oversight roles, differentiated according to whether human intervention is constitutive of, or corrective of, a decision made or supported by an AI. The taxonomy makes it possible to propose suggestions for improving effectiveness that are tailored to the type of oversight in question. Third, drawing on scholarship within democratic theory, this article formulates six normative principles which institutionalise distrust in human oversight of AI. The institutionalisation of distrust has historically been practised in democratic governance. Applied for the first time to AI governance, the principles anticipate the fallibility of human overseers and seek to mitigate it at the level of institutional design. They aim to directly increase the trustworthiness of human oversight and to indirectly inspire well-placed trust in AI governance.
{"title":"Institutionalised distrust and human oversight of artificial intelligence: towards a democratic design of AI governance under the European Union AI Act","authors":"Johann Laux","doi":"10.1007/s00146-023-01777-z","DOIUrl":"https://doi.org/10.1007/s00146-023-01777-z","url":null,"abstract":"Abstract Human oversight has become a key mechanism for the governance of artificial intelligence (“AI”). Human overseers are supposed to increase the accuracy and safety of AI systems, uphold human values, and build trust in the technology. Empirical research suggests, however, that humans are not reliable in fulfilling their oversight tasks. They may be lacking in competence or be harmfully incentivised. This creates a challenge for human oversight to be effective. In addressing this challenge, this article aims to make three contributions. First, it surveys the emerging laws of oversight, most importantly the European Union’s Artificial Intelligence Act (“AIA”). It will be shown that while the AIA is concerned with the competence of human overseers, it does not provide much guidance on how to achieve effective oversight and leaves oversight obligations for AI developers underdefined. Second, this article presents a novel taxonomy of human oversight roles, differentiated along whether human intervention is constitutive to, or corrective of a decision made or supported by an AI. The taxonomy allows to propose suggestions for improving effectiveness tailored to the type of oversight in question. Third, drawing on scholarship within democratic theory, this article formulates six normative principles which institutionalise distrust in human oversight of AI. The institutionalisation of distrust has historically been practised in democratic governance. Applied for the first time to AI governance, the principles anticipate the fallibility of human overseers and seek to mitigate them at the level of institutional design. They aim to directly increase the trustworthiness of human oversight and to indirectly inspire well-placed trust in AI governance.","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135352026","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Review of Carlos Montemayor’s The prospect of a humanitarian AI
Pub Date: 2023-10-05 DOI: 10.1007/s00146-023-01788-w
Jacob Browning
{"title":"Review of Carlos Montemayor’s The prospect of a humanitarian AI","authors":"Jacob Browning","doi":"10.1007/s00146-023-01788-w","DOIUrl":"https://doi.org/10.1007/s00146-023-01788-w","url":null,"abstract":"","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"76 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134976742","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Public perception of military AI in the context of techno-optimistic society
Pub Date: 2023-10-04 DOI: 10.1007/s00146-023-01785-z
Eleri Lillemäe, Kairi Talves, Wolfgang Wagner
{"title":"Public perception of military AI in the context of techno-optimistic society","authors":"Eleri Lillemäe, Kairi Talves, Wolfgang Wagner","doi":"10.1007/s00146-023-01785-z","DOIUrl":"https://doi.org/10.1007/s00146-023-01785-z","url":null,"abstract":"","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135592409","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Developing safer AI–concepts from economics to the rescue
Pub Date: 2023-10-02 DOI: 10.1007/s00146-023-01778-y
Pankaj Kumar Maskara
{"title":"Developing safer AI–concepts from economics to the rescue","authors":"Pankaj Kumar Maskara","doi":"10.1007/s00146-023-01778-y","DOIUrl":"https://doi.org/10.1007/s00146-023-01778-y","url":null,"abstract":"","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"232 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135828891","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Artificial intelligence national strategy in a developing country
Pub Date: 2023-10-01 DOI: 10.1007/s00146-023-01779-x
Mona Nabil Demaidi
Abstract Artificial intelligence (AI) national strategies provide countries with a framework for the development and implementation of AI technologies. Sixty countries worldwide have published AI national strategies, and more than 70% of them are developed countries. The approach of AI national strategies differs between developed and developing countries in several respects, including scientific research, education, talent development, and ethics. This paper examined AI readiness in a developing country (Palestine) to help develop and identify the main pillars of its AI national strategy. An AI readiness assessment was applied across the education, entrepreneurship, government, and research and development sectors in Palestine. In addition, the paper examined the legal framework and whether it keeps pace with emerging technologies. The results revealed that Palestinians have low awareness of AI, that AI is barely used across several sectors, and that the legal framework has not kept pace with emerging technologies. These results helped develop and identify the following five main pillars that Palestine’s AI national strategy should focus on: AI for Government, AI for Development, AI for Capacity Building in the private, public, technical, and governmental sectors, AI and Legal Framework, and International Activities.
{"title":"Artificial intelligence national strategy in a developing country","authors":"Mona Nabil Demaidi","doi":"10.1007/s00146-023-01779-x","DOIUrl":"https://doi.org/10.1007/s00146-023-01779-x","url":null,"abstract":"Abstract Artificial intelligence (AI) national strategies provide countries with a framework for the development and implementation of AI technologies. Sixty countries worldwide published their AI national strategies. The majority of these countries with more than 70% are developed countries. The approach of AI national strategies differentiates between developed and developing countries in several aspects including scientific research, education, talent development, and ethics. This paper examined AI readiness assessment in a developing country (Palestine) to help develop and identify the main pillars of the AI national strategy. AI readiness assessment was applied across education, entrepreneurship, government, and research and development sectors in Palestine (case of a developing country). In addition, it examined the legal framework and whether it is coping with trending technologies. The results revealed that Palestinians have low awareness of AI. Moreover, AI is barely used across several sectors and the legal framework is not coping with trending technologies. The results helped develop and identify the following five main pillars that Palestine’s AI national strategy should focus on: AI for Government, AI for Development, AI for Capacity Building in the private, public and technical and governmental sectors, AI and Legal Framework, and international Activities.","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135406529","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}