The promises and challenges of addressing artificial intelligence with human rights

IF 6.5 · Tier 1 (Sociology) · Q1 SOCIAL SCIENCES, INTERDISCIPLINARY · Big Data & Society · Pub Date: 2023-07-01 · DOI: 10.1177/20539517231205476
Onur Bakiner
{"title":"用人权来解决人工智能的承诺和挑战","authors":"Onur Bakiner","doi":"10.1177/20539517231205476","DOIUrl":null,"url":null,"abstract":"This paper examines the potential promises and limitations of the human rights framework in the age of AI. It addresses the question: what, if anything, makes human rights well suited to face the challenges arising from new and emerging technologies like AI? It argues that the historical evolution of human rights as a series of legal norms and concrete practices has made it well placed to address AI-related challenges. The human rights framework should be understood comprehensively as a combination of legal remedies, moral justification, and political analysis that inform one another. Over time, the framework has evolved in ways that accommodate the balancing of contending rights claims, using multiple ex ante and ex post facto mechanisms, involving government and/or business actors, and in situations of diffuse responsibility that may or may not result from malicious intent. However, the widespread adoption of AI technologies pushes the moral, sociological, and political boundaries of the human rights framework in other ways. AI reproduces long-term, structural problems going beyond issue-by-issue regulation, is embedded within economic structures that produce cumulative negative effects, and introduces additional challenges that require a discussion about the relationship between human rights and science & technology. Some of the reasons for why AI produces problematic outcomes are deep rooted in technical intricacies that human rights practitioners should be more willing than before to get involved in.","PeriodicalId":47834,"journal":{"name":"Big Data & Society","volume":"27 1","pages":"0"},"PeriodicalIF":6.5000,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"The promises and challenges of addressing artificial intelligence with human rights\",\"authors\":\"Onur Bakiner\",\"doi\":\"10.1177/20539517231205476\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This paper examines the potential promises and limitations of the human rights framework in the age of AI. It addresses the question: what, if anything, makes human rights well suited to face the challenges arising from new and emerging technologies like AI? It argues that the historical evolution of human rights as a series of legal norms and concrete practices has made it well placed to address AI-related challenges. The human rights framework should be understood comprehensively as a combination of legal remedies, moral justification, and political analysis that inform one another. Over time, the framework has evolved in ways that accommodate the balancing of contending rights claims, using multiple ex ante and ex post facto mechanisms, involving government and/or business actors, and in situations of diffuse responsibility that may or may not result from malicious intent. However, the widespread adoption of AI technologies pushes the moral, sociological, and political boundaries of the human rights framework in other ways. AI reproduces long-term, structural problems going beyond issue-by-issue regulation, is embedded within economic structures that produce cumulative negative effects, and introduces additional challenges that require a discussion about the relationship between human rights and science & technology. 
Some of the reasons for why AI produces problematic outcomes are deep rooted in technical intricacies that human rights practitioners should be more willing than before to get involved in.\",\"PeriodicalId\":47834,\"journal\":{\"name\":\"Big Data & Society\",\"volume\":\"27 1\",\"pages\":\"0\"},\"PeriodicalIF\":6.5000,\"publicationDate\":\"2023-07-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Big Data & Society\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1177/20539517231205476\",\"RegionNum\":1,\"RegionCategory\":\"社会学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"SOCIAL SCIENCES, INTERDISCIPLINARY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Big Data & Society","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1177/20539517231205476","RegionNum":1,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"SOCIAL SCIENCES, INTERDISCIPLINARY","Score":null,"Total":0}
Citations: 0

Abstract

This paper examines the potential promises and limitations of the human rights framework in the age of AI. It addresses the question: what, if anything, makes human rights well suited to face the challenges arising from new and emerging technologies like AI? It argues that the historical evolution of human rights as a series of legal norms and concrete practices has made it well placed to address AI-related challenges. The human rights framework should be understood comprehensively as a combination of legal remedies, moral justification, and political analysis that inform one another. Over time, the framework has evolved in ways that accommodate the balancing of contending rights claims, using multiple ex ante and ex post facto mechanisms, involving government and/or business actors, and in situations of diffuse responsibility that may or may not result from malicious intent. However, the widespread adoption of AI technologies pushes the moral, sociological, and political boundaries of the human rights framework in other ways. AI reproduces long-term, structural problems that go beyond issue-by-issue regulation, is embedded within economic structures that produce cumulative negative effects, and introduces additional challenges that require a discussion of the relationship between human rights and science and technology. Some of the reasons why AI produces problematic outcomes are deeply rooted in technical intricacies that human rights practitioners should be more willing than before to engage with.
Source journal
Big Data & Society
CiteScore: 10.90
Self-citation rate: 10.60%
Publication volume: 59
Review time: 11 weeks
About the journal: Big Data & Society (BD&S) is an open access, peer-reviewed scholarly journal that publishes interdisciplinary work principally in the social sciences, humanities, and computing and their intersections with the arts and natural sciences. The journal focuses on the implications of Big Data for societies and aims to connect debates about Big Data practices and their effects on various sectors such as academia, social life, industry, business, and government.

BD&S considers Big Data as an emerging field of practices, not solely defined by but generative of unique data qualities such as high volume, granularity, data linking, and mining. The journal pays attention to digital content generated both online and offline, encompassing social media, search engines, closed networks (e.g., commercial or government transactions), and open networks like digital archives, open government, and crowdsourced data. Rather than providing a fixed definition of Big Data, BD&S encourages interdisciplinary inquiries, debates, and studies on various topics and themes related to Big Data practices.

BD&S seeks contributions that analyze Big Data practices, involve empirical engagements and experiments with innovative methods, and reflect on the consequences of these practices for the representation, realization, and governance of societies. As a digital-only journal, BD&S's platform can accommodate multimedia formats such as complex images, dynamic visualizations, videos, and audio content. The contents of the journal encompass peer-reviewed research articles, colloquia, bookcasts, think pieces, state-of-the-art methods, and work by early career researchers.
Latest articles from this journal
Is there a role of the kidney failure risk equation in optimizing timing of vascular access creation in pre-dialysis patients?
From rules to examples: Machine learning's type of authority
Outlier bias: AI classification of curb ramps, outliers, and context
Artificial intelligence and skills in the workplace: An integrative research agenda
Redress and worldmaking: Differing approaches to algorithmic reparations for housing justice