The landscape of data and AI documentation approaches in the European policy context
Marina Micheli, Isabelle Hupont, Blagoj Delipetrev, Josep Soler-Garrido
Pub Date: 2023-10-28, DOI: 10.1007/s10676-023-09725-7, Ethics and Information Technology
Abstract: Nowadays, Artificial Intelligence (AI) is present in all sectors of the economy. Consequently, both data (the raw material used to build AI systems) and AI have an unprecedented impact on society, and there is a need to ensure that they work for its benefit. For this reason, the European Union has put data and trustworthy AI at the center of recent legislative initiatives. An important element in these regulations is transparency, understood as the provision of information to relevant stakeholders to support their understanding of AI systems and data throughout their lifecycle. In recent years, an increasing number of approaches for documenting AI and datasets have emerged, both within academia and the private sector. In this work, we identify the 36 most relevant ones from more than 2200 papers related to trustworthy AI. We assess their relevance from the angle of European regulatory objectives, their coverage of AI technologies and economic sectors, and their suitability to address the specific needs of multiple stakeholders. Finally, we discuss the main documentation gaps found, including the need to better address data innovation practices (e.g. data sharing, data reuse) and large-scale algorithmic systems (e.g. those used in online platforms), and to widen the focus from algorithms and data to AI systems as a whole.
Person, thing, Robot: a moral and legal ontology for the 21st century and beyond: by David Gunkel
Abootaleb Safdari
Pub Date: 2023-10-24, DOI: 10.1007/s10676-023-09731-9, Ethics and Information Technology
Smart cities as a testbed for experimenting with humans? - Applying psychological ethical guidelines to smart city interventions
Verena Zimmermann
Pub Date: 2023-10-24, DOI: 10.1007/s10676-023-09729-3, Ethics and Information Technology
Abstract: Smart Cities consist of a multitude of interconnected devices and services that, among other aims, enhance efficiency, comfort, and safety. To achieve these aims, smart cities rely on an interplay of measures, including the deployment of interventions designed to foster certain human behaviors, such as saving energy, or the collection and exchange of sensor and user data. Both aspects have ethical implications, e.g., when it comes to intervention design or the handling of privacy-related data such as personal information, user preferences, or geolocations. The resulting concerns must be taken seriously, as they reduce user acceptance and can even lead to the abandonment of otherwise promising Smart City projects. Established guidelines for ethical research and practice from the psychological sciences provide a useful framework for the kinds of ethical issues raised when designing human-centered interventions or dealing with user-generated data. This article thus reviews relevant psychological guidelines and discusses their applicability to the Smart City context. A special focus is on the guidelines' implications and resulting challenges for certain Smart City applications. Additionally, potential gaps in current guidelines and the limits of their applicability are reflected upon.
Violent video games: content, attitudes, and norms
Alexander Andersson, Per-Erik Milam
Pub Date: 2023-10-16, DOI: 10.1007/s10676-023-09726-6, Ethics and Information Technology
Abstract: Violent video games (VVGs) are a source of serious and continuing controversy. They are not unique in this respect, though. Other entertainment products have been criticized on moral grounds, from pornography to heavy metal, horror films, and Harry Potter books. Some of these controversies have fizzled out over time and have come to be viewed as cases of moral panic. Others, including moral objections to VVGs, have persisted. The aim of this paper is to determine which, if any, of the concerns raised about VVGs are legitimate. We argue that common moral objections to VVGs are unsuccessful, but that a plausible critique can be developed that captures the insights of these objections while avoiding their pitfalls. Our view suggests that the moral badness of a game depends on how well its internal logic expresses or encourages the players' objectionable attitudes. This allows us to recognize that some games are morally worse than others, and that it can be morally wrong to design and play some VVGs, but that the moral badness of these games is not necessarily dependent on how violent they are.
The contested role of AI ethics boards in smart societies: a step towards improvement based on board composition by sortition
Ludovico Giacomo Conti, Peter Seele
Pub Date: 2023-10-06, DOI: 10.1007/s10676-023-09724-8, Ethics and Information Technology
Abstract: The recent proliferation of AI scandals led private and public organisations to implement new ethics guidelines, introduce AI ethics boards, and list ethical principles. Nevertheless, some of these efforts remained a façade not backed by any substantive action. Such behaviour made the public question the legitimacy of the AI industry and prompted scholars to accuse the sector of ethicswashing, machinewashing, and ethics trivialisation, criticisms that spilled over to institutional AI ethics boards. To counter this widespread issue, contributions in the literature have proposed fixes that do not consider its systemic character and are based on top-down, expert-centric governance. To fill this gap, we propose the use of qualified informed lotteries: a two-step model that transposes the documented benefits of the ancient practice of sortition into the selection of AI ethics boards' members and combines them with the advantages of a stakeholder-driven, participative, and deliberative bottom-up process typical of Citizens' Assemblies. The model increases the public's legitimacy and participation in the decision-making process and its deliverables, curbs the industry's over-influence and lobbying, and diminishes the instrumentalisation of ethics boards. We suggest that this sortition-based approach may provide a sound basis for both public and private organisations in smart societies to construct a decentralised, bottom-up, participative digital democracy.
Empathy training through virtual reality: moral enhancement with the freedom to fall?
Anda Zahiu, Emilian Mihailov, Brian D. Earp, Kathryn B. Francis, Julian Savulescu
Pub Date: 2023-09-26, DOI: 10.1007/s10676-023-09723-9, Ethics and Information Technology
Melting contestation: insurance fairness and machine learning
Laurence Barry, Arthur Charpentier
Pub Date: 2023-09-20, DOI: 10.1007/s10676-023-09720-y, Ethics and Information Technology
Cognitive warfare: an ethical analysis
Seumas Miller
Pub Date: 2023-09-01, DOI: 10.1007/s10676-023-09717-7, Ethics and Information Technology
Digitising reflective equilibrium
Charlie Harry Smith
Pub Date: 2023-09-01, DOI: 10.1007/s10676-023-09722-w, Ethics and Information Technology
Ethics framework for predictive clinical AI model updating
M. Pruski
Pub Date: 2023-09-01, DOI: 10.1007/s10676-023-09721-x, Ethics and Information Technology