Trustworthiness of voting advice applications in Europe
Elisabeth Stockinger, Jonne Maas, Christofer Talvitie, Virginia Dignum
Pub Date: 2024-01-01; Epub Date: 2024-08-12; DOI: 10.1007/s10676-024-09790-6
Ethics and Information Technology, 26(3), 55. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11415416/pdf/
Voting Advice Applications (VAAs) are interactive tools that help users decide which party or candidate to vote for in an upcoming election. They have the potential to increase citizens' trust and participation in democratic structures. However, there is no established ground truth for one's electoral choice, and VAA recommendations depend strongly on architectural and design choices. We assessed several representative European VAAs against the European Commission's Ethics Guidelines for Trustworthy AI, using publicly available information. Scores were comparable across VAAs and low on most requirements, with differences reflecting the kind of developing institution. Across VAAs, we identify the need for improvement in (i) transparency regarding the subjectivity of recommendations, (ii) diversity of stakeholder participation, (iii) user-centric documentation of the algorithm, and (iv) disclosure of the underlying values and assumptions.
Supplementary information: The online version contains supplementary material available at 10.1007/s10676-024-09790-6.
Large language models and their big bullshit potential
Sarah A Fisher
Pub Date: 2024-01-01; Epub Date: 2024-10-04; DOI: 10.1007/s10676-024-09802-5
Ethics and Information Technology, 26(4), 67. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11452423/pdf/
Newly powerful large language models have burst onto the scene, with applications across a wide range of functions. We can now expect to encounter their outputs at rapidly increasing volumes and frequencies. Some commentators claim that large language models are bullshitting, generating convincing output without regard for the truth. If correct, that would make large language models distinctively dangerous discourse participants. Bullshitters not only undermine the norm of truthfulness (by saying false things) but the normative status of truth itself (by treating it as entirely irrelevant). So, do large language models really bullshit? I argue that they can, in the sense of issuing propositional content in response to fact-seeking prompts, without having first assessed that content for truth or falsity. However, I further argue that they need not bullshit, given appropriate guardrails. So, just as with human speakers, the propensity for a large language model to bullshit depends on its own particular make-up.
How to teach responsible AI in Higher Education: challenges and opportunities
Andrea Aler Tubella, Marçal Mora-Cantallops, Juan Carlos Nieves
Pub Date: 2023-12-13; DOI: 10.1007/s10676-023-09733-7
Can machine learning make naturalism about health truly naturalistic? A reflection on a data-driven concept of health
A. Guersenzvaig
Pub Date: 2023-12-12; DOI: 10.1007/s10676-023-09734-6
Digital twins, big data governance, and sustainable tourism
E. Rahmadian, Daniel Feitosa, Yulia Virantina
Pub Date: 2023-11-16; DOI: 10.1007/s10676-023-09730-w
Public health measures and the rise of incidental surveillance: Considerations about private informational power and accountability
Bart Kamphorst, Adam Henschke
Pub Date: 2023-11-16; DOI: 10.1007/s10676-023-09732-8
Conceptualising and regulating all neural data from consumer-directed devices as medical data: more scope for an unnecessary expansion of medical influence?
Brad Partridge, Susan Dodds
Pub Date: 2023-11-15; DOI: 10.1007/s10676-023-09735-5
The Right to Break the Law? Perfect Enforcement of the Law Using Technology Impedes the Development of Legal Systems
Bart Custers
Pub Date: 2023-11-15; DOI: 10.1007/s10676-023-09737-3
Should we embrace "Big Sister"? Smart speakers as a means to combat intimate partner violence
Robert Sparrow, Mark Andrejevic, Bridget Harris
Pub Date: 2023-11-04; DOI: 10.1007/s10676-023-09727-5
It is estimated that one in three women experience intimate partner violence (IPV) across the course of their life. The popular uptake of "smart speakers" powered by sophisticated AI means that surveillance of the domestic environment is increasingly possible. Correspondingly, there are various proposals to use smart speakers to detect or report IPV. In this paper, we clarify what might be possible when it comes to combatting IPV using existing or near-term technology, and we begin the work of evaluating this project both ethically and politically. We argue that the ethical landscape looks different depending on whether one is considering the decision to develop the technology or the decision to use it once it has been developed. If activists and governments wish to avoid the privatisation of responses to IPV, ubiquitous surveillance of domestic spaces, increasing the risk posed to members of minority communities by police responses to IPV, and the danger that more powerful smart speakers will be co-opted by men to control and abuse women, then they should resist the development of this technology rather than wait until these systems are developed. If it is judged that the moral urgency of IPV justifies exploring what might be possible by developing this technology, even in the face of these risks, then it will be imperative that victim-survivors from a range of demographics, as well as government and non-government stakeholders, are engaged in shaping this technology and the legislation and policies needed to regulate it.
Generative AI models should include detection mechanisms as a condition for public release
Alistair Knott, Dino Pedreschi, Raja Chatila, Tapabrata Chakraborti, Susan Leavy, Ricardo Baeza-Yates, David Eyers, Andrew Trotman, Paul D. Teal, Przemyslaw Biecek, Stuart Russell, Yoshua Bengio
Pub Date: 2023-10-28; DOI: 10.1007/s10676-023-09728-4
The new wave of 'foundation models'—general-purpose generative AI models for the production of text (e.g., ChatGPT) or images (e.g., MidJourney)—represents a dramatic advance in the state of the art for AI. But their use also introduces a range of new risks, which has prompted an ongoing conversation about possible regulatory mechanisms. Here we propose a specific principle that should be incorporated into legislation: that any organization developing a foundation model intended for public use must demonstrate a reliable detection mechanism for the content it generates, as a condition of its public release. The detection mechanism should be made publicly available in a tool that allows users to query, for an arbitrary item of content, whether the item was generated (wholly or partly) by the model. In this paper, we argue that this requirement is technically feasible and would play an important role in reducing certain risks from new AI models in many domains. We also outline a number of options for the tool's design, and summarize a number of points where further input from policymakers and researchers would be required.