Should we embrace “Big Sister”? Smart speakers as a means to combat intimate partner violence
Pub Date: 2023-11-04 | DOI: 10.1007/s10676-023-09727-5
Robert Sparrow, Mark Andrejevic, Bridget Harris
Abstract It is estimated that one in three women experience intimate partner violence (IPV) over the course of their lives. The popular uptake of “smart speakers” powered by sophisticated AI means that surveillance of the domestic environment is increasingly possible. Correspondingly, there are various proposals to use smart speakers to detect or report IPV. In this paper, we clarify what might be possible when it comes to combatting IPV using existing or near-term technology, and we begin the task of evaluating this project both ethically and politically. We argue that the ethical landscape looks different depending on whether one is considering the decision to develop the technology or the decision to use it once it has been developed. If activists and governments wish to avoid the privatisation of responses to IPV, ubiquitous surveillance of domestic spaces, increasing the risk posed to members of minority communities by police responses to IPV, and the danger that more powerful smart speakers will be co-opted by men to control and abuse women, then they should resist the development of this technology rather than wait until these systems are developed. If it is judged that the moral urgency of IPV justifies exploring what might be possible by developing this technology, even in the face of these risks, then it will be imperative that victim-survivors from a range of demographics, as well as government and non-government stakeholders, are engaged in shaping this technology and the legislation and policies needed to regulate it.
{"title":"Should we embrace “Big Sister”? Smart speakers as a means to combat intimate partner violence","authors":"Robert Sparrow, Mark Andrejevic, Bridget Harris","doi":"10.1007/s10676-023-09727-5","DOIUrl":"https://doi.org/10.1007/s10676-023-09727-5","url":null,"abstract":"Abstract It is estimated that one in three women experience intimate partner violence (IPV) across the course of their life. The popular uptake of “smart speakers” powered by sophisticated AI means that surveillance of the domestic environment is increasingly possible. Correspondingly, there are various proposals to use smart speakers to detect or report IPV. In this paper, we clarify what might be possible when it comes to combatting IPV using existing or near-term technology and also begin the project of evaluating this project both ethically and politically. We argue that the ethical landscape looks different depending on whether one is considering the decision to develop the technology or the decision to use it once it has been developed. If activists and governments wish to avoid the privatisation of responses to IPV, ubiquitous surveillance of domestic spaces, increasing the risk posed to members of minority communities by police responses to IPV, and the danger that more powerful smart speakers will be co-opted by men to control and abuse women, then they should resist the development of this technology rather than wait until these systems are developed. If it is judged that the moral urgency of IPV justifies exploring what might be possible by developing this technology, even in the face of these risks, then it will be imperative that victim-survivors from a range of demographics, as well as government and non-government stakeholders, are engaged in shaping this technology and the legislation and policies needed to regulate it.","PeriodicalId":51495,"journal":{"name":"Ethics and Information Technology","volume":"108 3","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135773389","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Generative AI models should include detection mechanisms as a condition for public release
Pub Date: 2023-10-28 | DOI: 10.1007/s10676-023-09728-4
Alistair Knott, Dino Pedreschi, Raja Chatila, Tapabrata Chakraborti, Susan Leavy, Ricardo Baeza-Yates, David Eyers, Andrew Trotman, Paul D. Teal, Przemyslaw Biecek, Stuart Russell, Yoshua Bengio
Abstract The new wave of ‘foundation models’—general-purpose generative AI models for the production of text (e.g., ChatGPT) or images (e.g., MidJourney)—represents a dramatic advance in the state of the art for AI. But their use also introduces a range of new risks, which has prompted an ongoing conversation about possible regulatory mechanisms. Here we propose a specific principle that should be incorporated into legislation: any organization developing a foundation model intended for public use must demonstrate a reliable detection mechanism for the content the model generates, as a condition of its public release. The detection mechanism should be made publicly available in a tool that allows users to query, for an arbitrary item of content, whether the item was generated (wholly or partly) by the model. In this paper, we argue that this requirement is technically feasible and would play an important role in reducing certain risks from new AI models in many domains. We also outline a number of options for the tool’s design and summarize a number of points where further input from policymakers and researchers would be required.
{"title":"Generative AI models should include detection mechanisms as a condition for public release","authors":"Alistair Knott, Dino Pedreschi, Raja Chatila, Tapabrata Chakraborti, Susan Leavy, Ricardo Baeza-Yates, David Eyers, Andrew Trotman, Paul D. Teal, Przemyslaw Biecek, Stuart Russell, Yoshua Bengio","doi":"10.1007/s10676-023-09728-4","DOIUrl":"https://doi.org/10.1007/s10676-023-09728-4","url":null,"abstract":"Abstract The new wave of ‘foundation models’—general-purpose generative AI models, for production of text (e.g., ChatGPT) or images (e.g., MidJourney)—represent a dramatic advance in the state of the art for AI. But their use also introduces a range of new risks, which has prompted an ongoing conversation about possible regulatory mechanisms. Here we propose a specific principle that should be incorporated into legislation: that any organization developing a foundation model intended for public use must demonstrate a reliable detection mechanism for the content it generates, as a condition of its public release. The detection mechanism should be made publicly available in a tool that allows users to query, for an arbitrary item of content, whether the item was generated (wholly or partly) by the model. In this paper, we argue that this requirement is technically feasible and would play an important role in reducing certain risks from new AI models in many domains. We also outline a number of options for the tool’s design, and summarize a number of points where further input from policymakers and researchers would be required.","PeriodicalId":51495,"journal":{"name":"Ethics and Information Technology","volume":"37 10","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136160670","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The landscape of data and AI documentation approaches in the European policy context
Pub Date: 2023-10-28 | DOI: 10.1007/s10676-023-09725-7
Marina Micheli, Isabelle Hupont, Blagoj Delipetrev, Josep Soler-Garrido
Abstract Nowadays, Artificial Intelligence (AI) is present in all sectors of the economy. Consequently, both data (the raw material used to build AI systems) and AI have an unprecedented impact on society, and there is a need to ensure that they work for its benefit. For this reason, the European Union has put data and trustworthy AI at the center of recent legislative initiatives. An important element in these regulations is transparency, understood as the provision of information to relevant stakeholders to support their understanding of AI systems and data throughout their lifecycle. In recent years, an increasing number of approaches for documenting AI and datasets have emerged, both within academia and the private sector. In this work, we identify the 36 most relevant ones from more than 2200 papers related to trustworthy AI. We assess their relevance from the angle of European regulatory objectives, their coverage of AI technologies and economic sectors, and their suitability to address the specific needs of multiple stakeholders. Finally, we discuss the main documentation gaps found, including the need to better address data innovation practices (e.g. data sharing, data reuse) and large-scale algorithmic systems (e.g. those used in online platforms), and to widen the focus from algorithms and data to AI systems as a whole.
{"title":"The landscape of data and AI documentation approaches in the European policy context","authors":"Marina Micheli, Isabelle Hupont, Blagoj Delipetrev, Josep Soler-Garrido","doi":"10.1007/s10676-023-09725-7","DOIUrl":"https://doi.org/10.1007/s10676-023-09725-7","url":null,"abstract":"Abstract Nowadays, Artificial Intelligence (AI) is present in all sectors of the economy. Consequently, both data-the raw material used to build AI systems- and AI have an unprecedented impact on society and there is a need to ensure that they work for its benefit. For this reason, the European Union has put data and trustworthy AI at the center of recent legislative initiatives. An important element in these regulations is transparency, understood as the provision of information to relevant stakeholders to support their understanding of AI systems and data throughout their lifecycle. In recent years, an increasing number of approaches for documenting AI and datasets have emerged, both within academia and the private sector. In this work, we identify the 36 most relevant ones from more than 2200 papers related to trustworthy AI. We assess their relevance from the angle of European regulatory objectives, their coverage of AI technologies and economic sectors, and their suitability to address the specific needs of multiple stakeholders. Finally, we discuss the main documentation gaps found, including the need to better address data innovation practices (e.g. data sharing, data reuse) and large-scale algorithmic systems (e.g. those used in online platforms), and to widen the focus from algorithms and data to AI systems as a whole.","PeriodicalId":51495,"journal":{"name":"Ethics and Information Technology","volume":"7 4","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136158338","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Person, Thing, Robot: a moral and legal ontology for the 21st century and beyond, by David Gunkel
Pub Date: 2023-10-24 | DOI: 10.1007/s10676-023-09731-9
Abootaleb Safdari
{"title":"Person, thing, Robot: a moral and legal ontology for the 21st century and beyond: by David Gunkel","authors":"Abootaleb Safdari","doi":"10.1007/s10676-023-09731-9","DOIUrl":"https://doi.org/10.1007/s10676-023-09731-9","url":null,"abstract":"","PeriodicalId":51495,"journal":{"name":"Ethics and Information Technology","volume":"34 5","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135265928","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Smart cities as a testbed for experimenting with humans? Applying psychological ethical guidelines to smart city interventions
Pub Date: 2023-10-24 | DOI: 10.1007/s10676-023-09729-3
Verena Zimmermann
Abstract Smart cities consist of a multitude of interconnected devices and services intended, among other things, to enhance efficiency, comfort, and safety. To achieve these aims, smart cities rely on an interplay of measures, including interventions designed to foster certain human behaviors, such as saving energy, and the collection and exchange of sensor and user data. Both aspects have ethical implications, e.g., when it comes to intervention design or the handling of privacy-related data such as personal information, user preferences, or geolocations. The resulting concerns must be taken seriously, as they reduce user acceptance and can even lead to the abandonment of otherwise promising smart city projects. Established guidelines for ethical research and practice from the psychological sciences provide a useful framework for the kinds of ethical issues raised when designing human-centered interventions or dealing with user-generated data. This article therefore reviews relevant psychological guidelines and discusses their applicability to the smart city context. A special focus is on the guidelines’ implications, and the resulting challenges, for certain smart city applications. Additionally, potential gaps in current guidelines and the limits of their applicability are reflected upon.
{"title":"Smart cities as a testbed for experimenting with humans? - Applying psychological ethical guidelines to smart city interventions","authors":"Verena Zimmermann","doi":"10.1007/s10676-023-09729-3","DOIUrl":"https://doi.org/10.1007/s10676-023-09729-3","url":null,"abstract":"Abstract Smart Cities consist of a multitude of interconnected devices and services to, among others, enhance efficiency, comfort, and safety. To achieve these aims, smart cities rely on an interplay of measures including the deployment of interventions targeted to foster certain human behaviors, such as saving energy, or collecting and exchanging sensor and user data. Both aspects have ethical implications, e.g., when it comes to intervention design or the handling of privacy-related data such as personal information, user preferences or geolocations. Resulting concerns must be taken seriously, as they reduce user acceptance and can even lead to the abolition of otherwise promising Smart City projects. Established guidelines for ethical research and practice from the psychological sciences provide a useful framework for the kinds of ethical issues raised when designing human-centered interventions or dealing with user-generated data. This article thus reviews relevant psychological guidelines and discusses their applicability to the Smart City context. A special focus is on the guidelines’ implications and resulting challenges for certain Smart City applications. Additionally, potential gaps in current guidelines and the limits of applicability are reflected upon.","PeriodicalId":51495,"journal":{"name":"Ethics and Information Technology","volume":"68 4","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135273893","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Violent video games: content, attitudes, and norms
Pub Date: 2023-10-16 | DOI: 10.1007/s10676-023-09726-6
Alexander Andersson, Per-Erik Milam
Abstract Violent video games (VVGs) are a source of serious and continuing controversy. They are not unique in this respect, though. Other entertainment products have been criticized on moral grounds, from pornography to heavy metal, horror films, and Harry Potter books. Some of these controversies have fizzled out over time and have come to be viewed as cases of moral panic. Others, including moral objections to VVGs, have persisted. The aim of this paper is to determine which, if any, of the concerns raised about VVGs are legitimate. We argue that common moral objections to VVGs are unsuccessful, but that a plausible critique can be developed that captures the insights of these objections while avoiding their pitfalls. Our view suggests that the moral badness of a game depends on how well its internal logic expresses or encourages the players’ objectionable attitudes. This allows us to recognize that some games are morally worse than others—and that it can be morally wrong to design and play some VVGs—but that the moral badness of these games is not necessarily dependent on how violent they are.
{"title":"Violent video games: content, attitudes, and norms","authors":"Alexander Andersson, Per-Erik Milam","doi":"10.1007/s10676-023-09726-6","DOIUrl":"https://doi.org/10.1007/s10676-023-09726-6","url":null,"abstract":"Abstract Violent video games (VVGs) are a source of serious and continuing controversy. They are not unique in this respect, though. Other entertainment products have been criticized on moral grounds, from pornography to heavy metal, horror films, and Harry Potter books. Some of these controversies have fizzled out over time and have come to be viewed as cases of moral panic. Others, including moral objections to VVGs, have persisted. The aim of this paper is to determine which, if any, of the concerns raised about VVGs are legitimate. We argue that common moral objections to VVGs are unsuccessful, but that a plausible critique can be developed that captures the insights of these objections while avoiding their pitfalls. Our view suggests that the moral badness of a game depends on how well its internal logic expresses or encourages the players’ objectionable attitudes. This allows us to recognize that some games are morally worse than others—and that it can be morally wrong to design and play some VVGs—but that the moral badness of these games is not necessarily dependent on how violent they are.","PeriodicalId":51495,"journal":{"name":"Ethics and Information Technology","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136115561","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The contested role of AI ethics boards in smart societies: a step towards improvement based on board composition by sortition
Pub Date: 2023-10-06 | DOI: 10.1007/s10676-023-09724-8
Ludovico Giacomo Conti, Peter Seele
Abstract The recent proliferation of AI scandals has led private and public organisations to implement new ethics guidelines, introduce AI ethics boards, and list ethical principles. Nevertheless, some of these efforts remained a façade not backed by any substantive action. Such behaviour made the public question the legitimacy of the AI industry and prompted scholars to accuse the sector of ethicswashing, machinewashing, and ethics trivialisation—criticisms that spilt over to institutional AI ethics boards. To counter this widespread issue, contributions in the literature have proposed fixes that do not consider its systemic character and that rest on top-down, expert-centric governance. To fill this gap, we propose the use of qualified informed lotteries: a two-step model that transposes the documented benefits of the ancient practice of sortition to the selection of AI ethics boards’ members and combines them with the advantages of the stakeholder-driven, participative, and deliberative bottom-up process typical of Citizens’ Assemblies. The model strengthens the public legitimacy of the decision-making process and its deliverables, increases public participation, curbs the industry’s over-influence and lobbying, and diminishes the instrumentalisation of ethics boards. We suggest that this sortition-based approach may provide a sound basis on which both public and private organisations in smart societies can construct a decentralised, bottom-up, participative digital democracy.
{"title":"The contested role of AI ethics boards in smart societies: a step towards improvement based on board composition by sortition","authors":"Ludovico Giacomo Conti, Peter Seele","doi":"10.1007/s10676-023-09724-8","DOIUrl":"https://doi.org/10.1007/s10676-023-09724-8","url":null,"abstract":"Abstract The recent proliferation of AI scandals led private and public organisations to implement new ethics guidelines, introduce AI ethics boards, and list ethical principles. Nevertheless, some of these efforts remained a façade not backed by any substantive action. Such behaviour made the public question the legitimacy of the AI industry and prompted scholars to accuse the sector of ethicswashing, machinewashing, and ethics trivialisation—criticisms that spilt over to institutional AI ethics boards. To counter this widespread issue, contributions in the literature have proposed fixes that do not consider its systemic character and are based on a top-down, expert-centric governance. To fill this gap, we propose to make use of qualified informed lotteries : a two-step model that transposes the documented benefits of the ancient practice of sortition into the selection of AI ethics boards’ members and combines them with the advantages of a stakeholder-driven, participative, and deliberative bottom-up process typical of Citizens’ Assemblies. The model permits increasing the public’s legitimacy and participation in the decision-making process and its deliverables, curbing the industry’s over-influence and lobbying, and diminishing the instrumentalisation of ethics boards. We suggest that this sortition-based approach may provide a sound base for both public and private organisations in smart societies for constructing a decentralised, bottom-up, participative digital democracy.","PeriodicalId":51495,"journal":{"name":"Ethics and Information Technology","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135351995","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Empathy training through virtual reality: moral enhancement with the freedom to fall?
Pub Date: 2023-09-26 | DOI: 10.1007/s10676-023-09723-9
Anda Zahiu, Emilian Mihailov, Brian D. Earp, Kathryn B. Francis, Julian Savulescu
{"title":"Empathy training through virtual reality: moral enhancement with the freedom to fall?","authors":"Anda Zahiu, Emilian Mihailov, Brian D. Earp, Kathryn B. Francis, Julian Savulescu","doi":"10.1007/s10676-023-09723-9","DOIUrl":"https://doi.org/10.1007/s10676-023-09723-9","url":null,"abstract":"","PeriodicalId":51495,"journal":{"name":"Ethics and Information Technology","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134960000","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Melting contestation: insurance fairness and machine learning
Pub Date: 2023-09-20 | DOI: 10.1007/s10676-023-09720-y
Laurence Barry, Arthur Charpentier
{"title":"Melting contestation: insurance fairness and machine learning","authors":"Laurence Barry, Arthur Charpentier","doi":"10.1007/s10676-023-09720-y","DOIUrl":"https://doi.org/10.1007/s10676-023-09720-y","url":null,"abstract":"","PeriodicalId":51495,"journal":{"name":"Ethics and Information Technology","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136308923","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cognitive warfare: an ethical analysis
Pub Date: 2023-09-01 | DOI: 10.1007/s10676-023-09717-7
Seumas Miller
{"title":"Cognitive warfare: an ethical analysis","authors":"Seumas Miller","doi":"10.1007/s10676-023-09717-7","DOIUrl":"https://doi.org/10.1007/s10676-023-09717-7","url":null,"abstract":"","PeriodicalId":51495,"journal":{"name":"Ethics and Information Technology","volume":" ","pages":""},"PeriodicalIF":3.6,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42769335","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}