Observe, inspect, modify: Three conditions for generative AI governance

Fabian Ferrari, José van Dijck, Antal van den Bosch
{"title":"观察、检查、修改:生成式人工智能治理的三个条件","authors":"Fabian Ferrari, José van Dijck, Antal van den Bosch","doi":"10.1177/14614448231214811","DOIUrl":null,"url":null,"abstract":"In a world increasingly shaped by generative AI systems like ChatGPT, the absence of benchmarks to examine the efficacy of oversight mechanisms is a problem for research and policy. What are the structural conditions for governing generative AI systems? To answer this question, it is crucial to situate generative AI systems as regulatory objects: material items that can be governed. On this conceptual basis, we introduce three high-level conditions to structure research and policy agendas on generative AI governance: industrial observability, public inspectability, and technical modifiability. Empirically, we explicate those conditions with a focus on the EU’s AI Act, grounding the analysis of oversight mechanisms for generative AI systems in their granular material properties as observable, inspectable, and modifiable objects. Those three conditions represent an action plan to help us perceive generative AI systems as negotiable objects, rather than seeing them as mysterious forces that pose existential risks for humanity.","PeriodicalId":443328,"journal":{"name":"New Media & Society","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2023-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Observe, inspect, modify: Three conditions for generative AI governance\",\"authors\":\"Fabian Ferrari, José van Dijck, Antal van den Bosch\",\"doi\":\"10.1177/14614448231214811\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In a world increasingly shaped by generative AI systems like ChatGPT, the absence of benchmarks to examine the efficacy of oversight mechanisms is a problem for research and policy. What are the structural conditions for governing generative AI systems? To answer this question, it is crucial to situate generative AI systems as regulatory objects: material items that can be governed. On this conceptual basis, we introduce three high-level conditions to structure research and policy agendas on generative AI governance: industrial observability, public inspectability, and technical modifiability. Empirically, we explicate those conditions with a focus on the EU’s AI Act, grounding the analysis of oversight mechanisms for generative AI systems in their granular material properties as observable, inspectable, and modifiable objects. 
Those three conditions represent an action plan to help us perceive generative AI systems as negotiable objects, rather than seeing them as mysterious forces that pose existential risks for humanity.\",\"PeriodicalId\":443328,\"journal\":{\"name\":\"New Media & Society\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-11-29\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"New Media & Society\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1177/14614448231214811\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"New Media & Society","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1177/14614448231214811","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

In a world increasingly shaped by generative AI systems like ChatGPT, the absence of benchmarks to examine the efficacy of oversight mechanisms is a problem for research and policy. What are the structural conditions for governing generative AI systems? To answer this question, it is crucial to situate generative AI systems as regulatory objects: material items that can be governed. On this conceptual basis, we introduce three high-level conditions to structure research and policy agendas on generative AI governance: industrial observability, public inspectability, and technical modifiability. Empirically, we explicate those conditions with a focus on the EU’s AI Act, grounding the analysis of oversight mechanisms for generative AI systems in their granular material properties as observable, inspectable, and modifiable objects. Those three conditions represent an action plan to help us perceive generative AI systems as negotiable objects, rather than seeing them as mysterious forces that pose existential risks for humanity.
Latest articles in this journal (New Media & Society):
Performing lowbrowness: How Chinese queer people negotiate visibility on short-video platforms
When content moderation is not about content: How Chinese social media platforms moderate content and why it matters
“Our advice is to break up”: Douban’s intimate public and the rise of girlfriend culture
Unmasking coordinated hate: Analysing hate speech on Spanish digital news media
Human–AI communication in initial encounters: How AI agency affects trust, liking, and chat quality evaluation