Generative AI models should include detection mechanisms as a condition for public release

IF 3.4 | CAS Tier 2 (Philosophy) | JCR Q1 (ETHICS) | Ethics and Information Technology | Pub Date: 2023-10-28 | DOI: 10.1007/s10676-023-09728-4
Alistair Knott, Dino Pedreschi, Raja Chatila, Tapabrata Chakraborti, Susan Leavy, Ricardo Baeza-Yates, David Eyers, Andrew Trotman, Paul D. Teal, Przemyslaw Biecek, Stuart Russell, Yoshua Bengio
{"title":"Generative AI models should include detection mechanisms as a condition for public release","authors":"Alistair Knott, Dino Pedreschi, Raja Chatila, Tapabrata Chakraborti, Susan Leavy, Ricardo Baeza-Yates, David Eyers, Andrew Trotman, Paul D. Teal, Przemyslaw Biecek, Stuart Russell, Yoshua Bengio","doi":"10.1007/s10676-023-09728-4","DOIUrl":null,"url":null,"abstract":"Abstract The new wave of ‘foundation models’—general-purpose generative AI models, for production of text (e.g., ChatGPT) or images (e.g., MidJourney)—represent a dramatic advance in the state of the art for AI. But their use also introduces a range of new risks, which has prompted an ongoing conversation about possible regulatory mechanisms. Here we propose a specific principle that should be incorporated into legislation: that any organization developing a foundation model intended for public use must demonstrate a reliable detection mechanism for the content it generates, as a condition of its public release. The detection mechanism should be made publicly available in a tool that allows users to query, for an arbitrary item of content, whether the item was generated (wholly or partly) by the model. In this paper, we argue that this requirement is technically feasible and would play an important role in reducing certain risks from new AI models in many domains. We also outline a number of options for the tool’s design, and summarize a number of points where further input from policymakers and researchers would be required.","PeriodicalId":51495,"journal":{"name":"Ethics and Information Technology","volume":"37 10","pages":"0"},"PeriodicalIF":3.4000,"publicationDate":"2023-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Ethics and Information Technology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/s10676-023-09728-4","RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ETHICS","Score":null,"Total":0}
Citations: 0

Abstract

The new wave of ‘foundation models’—general-purpose generative AI models for the production of text (e.g., ChatGPT) or images (e.g., MidJourney)—represents a dramatic advance in the state of the art for AI. But their use also introduces a range of new risks, which has prompted an ongoing conversation about possible regulatory mechanisms. Here we propose a specific principle that should be incorporated into legislation: any organization developing a foundation model intended for public use must demonstrate a reliable detection mechanism for the content the model generates, as a condition of its public release. The detection mechanism should be made publicly available in a tool that allows users to query, for an arbitrary item of content, whether the item was generated (wholly or partly) by the model. In this paper, we argue that this requirement is technically feasible and would play an important role in reducing certain risks from new AI models in many domains. We also outline a number of options for the tool’s design, and summarize a number of points where further input from policymakers and researchers would be required.
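To make the proposal concrete: the paper leaves the detector's design open, but one family of mechanisms such a tool could build on is statistical watermarking. The sketch below is a minimal illustration, not the authors' design. It implements the detection side of a 'green-list' watermark in the style of Kirchenbauer et al. (2023): the generating model would bias each token toward a pseudorandom 'green' subset of its vocabulary keyed on the previous token, and the detector flags text whose green fraction is improbably high for unwatermarked text. All identifiers and parameters here (SECRET_KEY, GREEN_FRACTION, the z-score threshold, the token-id interface) are hypothetical.

```python
# Minimal sketch of watermark *detection* (hypothetical; not the paper's design).
# Assumes the generator biased its sampling toward a keyed pseudorandom
# "green list" of tokens at each step.
import hashlib
import math

GREEN_FRACTION = 0.5      # assumed fraction of the vocabulary marked green per step
SECRET_KEY = b"demo-key"  # illustrative provider-held watermarking key

def is_green(prev_token: int, token: int) -> bool:
    """Pseudorandomly assign `token` to the green list, keyed on the
    previous token and the secret key (the same rule the generator would use)."""
    digest = hashlib.sha256(
        SECRET_KEY + prev_token.to_bytes(4, "big") + token.to_bytes(4, "big")
    ).digest()
    return int.from_bytes(digest[:8], "big") / 2**64 < GREEN_FRACTION

def detect(tokens: list[int], z_threshold: float = 4.0) -> tuple[bool, float]:
    """Return (looks_generated, z_score). Under the null hypothesis of
    unwatermarked text, the green count is Binomial(n, GREEN_FRACTION);
    a large z-score means that null is very unlikely."""
    n = len(tokens) - 1
    if n < 1:
        return False, 0.0
    greens = sum(is_green(p, t) for p, t in zip(tokens, tokens[1:]))
    mean = n * GREEN_FRACTION
    var = n * GREEN_FRACTION * (1 - GREEN_FRACTION)
    z = (greens - mean) / math.sqrt(var)
    return z > z_threshold, z

if __name__ == "__main__":
    import random
    # Random token ids stand in for human-written text; expect z near 0.
    human_like = [random.randrange(50_000) for _ in range(200)]
    print(detect(human_like))
```

A provider could expose a detector like this behind the public query tool the abstract describes, returning a verdict (wholly, partly, or not generated by the model) with a confidence estimate; partial generation could be handled by running the same test over sliding windows of the queried content.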
Source journal: Ethics and Information Technology
CiteScore: 8.20
Self-citation rate: 5.60%
Articles published: 46
About the journal: Ethics and Information Technology is a peer-reviewed journal dedicated to advancing the dialogue between moral philosophy and the field of information and communication technology (ICT). The journal aims to foster and promote reflection and analysis which is intended to make a constructive contribution to answering the ethical, social and political questions associated with the adoption, use, and development of ICT. Within the scope of the journal are also conceptual analysis and discussion of ethical ICT issues which arise in the context of technology assessment, cultural studies, public policy analysis and public administration, cognitive science, social and anthropological studies in technology, mass-communication, and legal studies.
Latest articles in this journal:
- Engineers on responsibility: feminist approaches to who’s responsible for ethical AI
- AI and the need for justification (to the patient)
- Trustworthiness of voting advice applications in Europe
- Large language models and their big bullshit potential
- How to teach responsible AI in Higher Education: challenges and opportunities