Design for values and conceptual engineering
Herman Veluwenkamp, J. van den Hoven
Ethics and Information Technology 25(1): 1-12. Pub Date: 2023-01-03. DOI: 10.1007/s10676-022-09675-6
Correction to: the Ethics of AI in Human Resources
M. Dennis, Evgeni Aizenberg
Ethics and Information Technology 25(1): 1. Pub Date: 2023-01-03. DOI: 10.1007/s10676-022-09671-w
Who is controlling whom? Reframing "meaningful human control" of AI systems in security
Markus Christen, Thomas Burri, Serhiy Kandul, Pascal Vörös
Ethics and Information Technology 25(1): 10. Pub Date: 2023-01-01. DOI: 10.1007/s10676-023-09686-x. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9918557/pdf/
Abstract: Decisions in security contexts, including armed conflict, law enforcement, and disaster relief, often need to be taken under circumstances of limited information, stress, and time pressure. Since AI systems are capable of providing a certain amount of relief in such contexts, such systems will become increasingly important, be it as decision-support or decision-making systems. However, given that human life may be at stake in such situations, moral responsibility for such decisions should remain with humans. Hence the idea of "meaningful human control" of intelligent systems. In this opinion paper, we outline generic configurations of control of AI, present an alternative to human control of AI, namely the inverse idea of having AI control humans, and discuss the normative consequences of this alternative.
Selling visibility-boosts on dating apps: a problematic practice?
Bouke de Vries
Ethics and Information Technology 25(2): 30. Pub Date: 2023-01-01 (Epub 2023-05-18). DOI: 10.1007/s10676-023-09704-y. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10191813/pdf/
Abstract: Love, sex, and physical intimacy are some of the most desired goods in life, and they are increasingly being sought on dating apps such as Tinder, Bumble, and Badoo. For those who want a leg up in the chase for other people's attention, almost all of these apps now offer the option of paying a fee to boost one's visibility for a certain amount of time, which may range from 30 minutes to a few hours. In this article, I argue that there are strong moral grounds, and, in countries with laws against unconscionable contracts, legal ones, for thinking that the sale of such visibility boosts should be regulated, if not banned altogether. To do so, I raise two objections against their unfettered sale: that it exploits the impaired autonomy of certain users, and that it creates socio-economic injustices.
Automated opioid risk scores: a case for machine learning-induced epistemic injustice in healthcare
Giorgia Pozzi
Ethics and Information Technology 25(1): 3. Pub Date: 2023-01-01. DOI: 10.1007/s10676-023-09676-z. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9869303/pdf/
Abstract: Artificial intelligence-based (AI) technologies such as machine learning (ML) systems are playing an increasingly relevant role in medicine and healthcare, bringing about novel ethical and epistemological issues that need to be addressed in a timely manner. Even though ethical questions connected to epistemic concerns have been at the center of the debate, it has gone largely unnoticed how epistemic forms of injustice can be ML-induced, specifically in healthcare. I analyze the shortcomings of an ML system currently deployed in the USA to predict patients' likelihood of opioid addiction and misuse (PDMP algorithmic platforms). Drawing on this analysis, I aim to show that the wrong inflicted on epistemic agents involved in and affected by these systems' decision-making processes can be captured through the lens of Miranda Fricker's account of hermeneutical injustice. I further argue that ML-induced hermeneutical injustice is particularly harmful due to what I define as an automated hermeneutical appropriation on the part of the ML system. The latter occurs when the ML system establishes meanings and shared hermeneutical resources without allowing for human oversight, impairing understanding and communication practices among the stakeholders involved in medical decision-making. Crucially, an automated hermeneutical appropriation can be recognized when physicians are strongly limited in their ability to safeguard patients from ML-induced hermeneutical injustice. Overall, my paper expands the analysis of ethical issues raised by ML systems that are epistemic in nature, thus contributing to bridging the gap between these two dimensions in the ongoing debate.
Value Sensitive Design for autonomous weapon systems - a primer
C. B. Burken
Ethics and Information Technology 25(1): 11. Pub Date: 2023-01-01. DOI: 10.1007/s10676-023-09687-w
The seven troubles with norm-compliant robots
Tom N Coggins, Steffen Steinert
Ethics and Information Technology 25(2): 29. Pub Date: 2023-01-01. DOI: 10.1007/s10676-023-09701-1. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10130815/pdf/
Abstract: Many researchers from robotics, machine ethics, and adjacent fields seem to assume that norms represent good behavior that social robots should learn to benefit their users and society. We would like to complicate this view and present seven key troubles with norm-compliant robots: (1) norm biases, (2) paternalism, (3) tyrannies of the majority, (4) pluralistic ignorance, (5) paths of least resistance, (6) outdated norms, and (7) technologically-induced norm change. Because discussions of why norm-compliant robots can be problematic are noticeably absent from the robot and machine ethics literature, this paper fills an important research gap. We argue that it is critical for researchers to take these issues into account if they wish to make norm-compliant robots.
Conceptualizations of user autonomy within the normative evaluation of dark patterns
Sanju Ahuja, Jyotish Kumar
Ethics and Information Technology. Pub Date: 2022-12-01. DOI: 10.1007/s10676-022-09672-9
Reasons for Meaningful Human Control
Herman Veluwenkamp
Ethics and Information Technology. Pub Date: 2022-11-23. DOI: 10.1007/s10676-022-09673-8
Explanation and Agency: exploring the normative-epistemic landscape of the "Right to Explanation"
Fleur Jongepier, Esther Keymolen
Ethics and Information Technology. Pub Date: 2022-11-11. DOI: 10.1007/s10676-022-09654-x