Artificial Intelligence Needs Human Rights: How the Focus on Ethical AI Fails to Address Privacy, Discrimination and Other Concerns

Kate Saslow, Philippe Lorenz
{"title":"人工智能需要人权:关注道德人工智能如何未能解决隐私、歧视和其他问题","authors":"Kate Saslow, Philippe Lorenz","doi":"10.2139/ssrn.3589473","DOIUrl":null,"url":null,"abstract":"AI has been a catalyst for automation and efficiency in numerous ways, but has also had harmful consequences, including: unforeseen algorithmic bias that affects already marginalized communities, as with Amazon’s AI recruiting algorithm that showed bias against women; accountability and liability coming into question if an autonomous vehicle injures or kills, as seen with Uber’s self-driving car casualties; even the notion of democracy is being challenged as the technology enables authoritarian and democratic states like China and the United States to practice surveillance at an unprecedented scale.<br><br>The risks as well as the need for some form of basic rules have not gone unnoticed and governments, tech companies, research consortiums or advocacy groups have broached the issue. In fact, this has been the topic of local, national, and supranational discussion for some years now, as can be seen with new legislation popping up to ban facial recognition software in public spaces. The problem with these discussions, however, is that they have been heavily dominated by how we can make AI more “ethical”. Companies, states, and even international organizations discuss ethical principles, such as fair, accountable, responsible, or safe AI in numerous expert groups or ad hoc committees, such as the High-Level Expert Group on AI in the European Commission, the group on AI in Society of the Organization for Economic Co-operation and Development (OECD), or the select committee on Artificial Intelligence of the United Kingdom House of Lords.<br><br>This may sound like a solid approach to tackling the dangers that AI poses, but to actually be impactful, these discussions must be grounded in rhetoric that is focused and actionable. Not only may the principles be defined differently depending on the stakeholders, but there are overwhelming differences in how principles are interpreted and what requirements are necessary for them to materialize. In addition, ethical debates on AI are often dominated by American or Chinese companies, which are both propagating their own idea of ethical AI, but which may in many cases stand in conflict with the values of other cultures and nations. Not only do different countries have different ideas of which “ethics” principles need to be protected, but different countries play starkly different roles in developing AI. Another problem is when ethical guidelines are discussed, suggestions often come from tech companies themselves, while voices from citizens or even governments are marginalized.<br><br>Self-regulation around ethical principles is too weak to address the spreading implications that AI technologies have had. Ethical principles lack clarity and enforcement capabilities. We must stop focusing the discourse on ethical principles, and instead shift the debate to human rights. Debates must be louder at the supranational level. International pressure must be put on states and companies who fail to protect individuals by propagating AI technologies that carry risks. 
Leadership must be defined not by actors who come up with new iterations of ethical guidelines, but by those who develop legal obligations regarding AI, which are anchored in and derived from a human rights perspective.<br><br>A way to do this would be to reaffirm the human-centric nature of AI development and deployment that follows actionable standards of human rights law. The human rights legal framework has been around for decades and has been instrumental in fighting and pressuring states to change domestic laws. Nelson Mandela referred to the duties spelled out in the Universal Declaration of Human Rights while fighting to end apartheid in South Africa; in 1973 with Roe v. Wade the United States Supreme Court followed a larger global trend of recognizing women’s human rights by protecting individuals from undue governmental interference in private affairs and giving women the ability to participate fully and equally in society; more recently, open access to the Internet has been recognized as a human right essential to not only freedom of opinion, expression, association, and assembly, but also instrumental in mobilizing the population to call for equality, justice, and accountability in order to advance global respect for human rights. These examples show how human rights standards have been applied to a diverse set of domestic and international rules. That these standards are actionable and enforceable show that they are well-suited to regulate the cross-border nature of AI technologies. AI systems must be scrutinized through a human rights perspective to analyze current and future harms either created or exacerbated by AI, and take action to avoid any harm.<br><br>The adoption of AI technologies has spread across borders and has had diverse effects on societies all over the world. A globalized technology needs international obligations to mitigate the societal problems being faced at an accelerated and larger scale. Companies and states should strive for the development of AI technologies that uphold human rights. Centering the AI discourse around human rights rather than simply ethics can be one way of providing a clearer legal basis for development and deployment of AI technologies. The international community must raise awareness, build consensus, and analyze thoroughly how AI technologies violate human rights in different contexts and develop paths for effective legal remedies. 
Focusing the discourse on human rights rather than ethical principles can provide more accountability measures, more obligations for state and private actors, and can redirect the debate to rely on consistent and widely accepted legal principles developed over decades.","PeriodicalId":369029,"journal":{"name":"PsychRN: Attitudes & Social Cognition (Topic)","volume":"102 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Artificial Intelligence Needs Human Rights: How the Focus on Ethical AI Fails to Address Privacy, Discrimination and Other Concerns\",\"authors\":\"Kate Saslow, Philippe Lorenz\",\"doi\":\"10.2139/ssrn.3589473\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"AI has been a catalyst for automation and efficiency in numerous ways, but has also had harmful consequences, including: unforeseen algorithmic bias that affects already marginalized communities, as with Amazon’s AI recruiting algorithm that showed bias against women; accountability and liability coming into question if an autonomous vehicle injures or kills, as seen with Uber’s self-driving car casualties; even the notion of democracy is being challenged as the technology enables authoritarian and democratic states like China and the United States to practice surveillance at an unprecedented scale.<br><br>The risks as well as the need for some form of basic rules have not gone unnoticed and governments, tech companies, research consortiums or advocacy groups have broached the issue. In fact, this has been the topic of local, national, and supranational discussion for some years now, as can be seen with new legislation popping up to ban facial recognition software in public spaces. The problem with these discussions, however, is that they have been heavily dominated by how we can make AI more “ethical”. Companies, states, and even international organizations discuss ethical principles, such as fair, accountable, responsible, or safe AI in numerous expert groups or ad hoc committees, such as the High-Level Expert Group on AI in the European Commission, the group on AI in Society of the Organization for Economic Co-operation and Development (OECD), or the select committee on Artificial Intelligence of the United Kingdom House of Lords.<br><br>This may sound like a solid approach to tackling the dangers that AI poses, but to actually be impactful, these discussions must be grounded in rhetoric that is focused and actionable. Not only may the principles be defined differently depending on the stakeholders, but there are overwhelming differences in how principles are interpreted and what requirements are necessary for them to materialize. In addition, ethical debates on AI are often dominated by American or Chinese companies, which are both propagating their own idea of ethical AI, but which may in many cases stand in conflict with the values of other cultures and nations. Not only do different countries have different ideas of which “ethics” principles need to be protected, but different countries play starkly different roles in developing AI. Another problem is when ethical guidelines are discussed, suggestions often come from tech companies themselves, while voices from citizens or even governments are marginalized.<br><br>Self-regulation around ethical principles is too weak to address the spreading implications that AI technologies have had. 
Ethical principles lack clarity and enforcement capabilities. We must stop focusing the discourse on ethical principles, and instead shift the debate to human rights. Debates must be louder at the supranational level. International pressure must be put on states and companies who fail to protect individuals by propagating AI technologies that carry risks. Leadership must be defined not by actors who come up with new iterations of ethical guidelines, but by those who develop legal obligations regarding AI, which are anchored in and derived from a human rights perspective.<br><br>A way to do this would be to reaffirm the human-centric nature of AI development and deployment that follows actionable standards of human rights law. The human rights legal framework has been around for decades and has been instrumental in fighting and pressuring states to change domestic laws. Nelson Mandela referred to the duties spelled out in the Universal Declaration of Human Rights while fighting to end apartheid in South Africa; in 1973 with Roe v. Wade the United States Supreme Court followed a larger global trend of recognizing women’s human rights by protecting individuals from undue governmental interference in private affairs and giving women the ability to participate fully and equally in society; more recently, open access to the Internet has been recognized as a human right essential to not only freedom of opinion, expression, association, and assembly, but also instrumental in mobilizing the population to call for equality, justice, and accountability in order to advance global respect for human rights. These examples show how human rights standards have been applied to a diverse set of domestic and international rules. That these standards are actionable and enforceable show that they are well-suited to regulate the cross-border nature of AI technologies. AI systems must be scrutinized through a human rights perspective to analyze current and future harms either created or exacerbated by AI, and take action to avoid any harm.<br><br>The adoption of AI technologies has spread across borders and has had diverse effects on societies all over the world. A globalized technology needs international obligations to mitigate the societal problems being faced at an accelerated and larger scale. Companies and states should strive for the development of AI technologies that uphold human rights. Centering the AI discourse around human rights rather than simply ethics can be one way of providing a clearer legal basis for development and deployment of AI technologies. The international community must raise awareness, build consensus, and analyze thoroughly how AI technologies violate human rights in different contexts and develop paths for effective legal remedies. 
Focusing the discourse on human rights rather than ethical principles can provide more accountability measures, more obligations for state and private actors, and can redirect the debate to rely on consistent and widely accepted legal principles developed over decades.\",\"PeriodicalId\":369029,\"journal\":{\"name\":\"PsychRN: Attitudes & Social Cognition (Topic)\",\"volume\":\"102 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-09-30\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"PsychRN: Attitudes & Social Cognition (Topic)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.2139/ssrn.3589473\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"PsychRN: Attitudes & Social Cognition (Topic)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.2139/ssrn.3589473","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2

Abstract

AI has been a catalyst for automation and efficiency in numerous ways, but it has also had harmful consequences, including: unforeseen algorithmic bias that affects already marginalized communities, as with Amazon’s AI recruiting algorithm that showed bias against women; accountability and liability coming into question when an autonomous vehicle injures or kills someone, as seen with Uber’s self-driving car casualties; and even the notion of democracy being challenged, as the technology enables authoritarian and democratic states alike, such as China and the United States, to practice surveillance at an unprecedented scale.
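
To make the notion of "unforeseen algorithmic bias" above concrete, here is a minimal, hypothetical sketch of one common audit metric, the disparate-impact ratio, which compares selection rates between groups; under the US EEOC's "four-fifths rule", ratios below 0.8 are commonly treated as evidence of adverse impact. This example is not from the paper; the data and function names are invented for illustration.

```python
from collections import Counter

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs; returns selection rate per group."""
    totals, selected = Counter(), Counter()
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact(decisions, privileged, protected):
    """Ratio of the protected group's selection rate to the privileged group's."""
    rates = selection_rates(decisions)
    return rates[protected] / rates[privileged]

# Invented screening outcomes: (applicant group, passed the screen?)
outcomes = ([("men", True)] * 60 + [("men", False)] * 40
            + [("women", True)] * 30 + [("women", False)] * 70)

ratio = disparate_impact(outcomes, privileged="men", protected="women")
print(f"disparate-impact ratio: {ratio:.2f}")  # 0.50 -> well below the 0.8 threshold
```

In this invented sample, women pass the screen at half the rate of men, the kind of disparity reported in the Amazon case; a real audit would of course involve far more than a single metric.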

The risks, as well as the need for some form of basic rules, have not gone unnoticed, and governments, tech companies, research consortia, and advocacy groups have broached the issue. In fact, this has been a topic of local, national, and supranational discussion for some years now, as can be seen in the new legislation emerging to ban facial recognition software in public spaces. The problem with these discussions, however, is that they have been heavily dominated by the question of how we can make AI more “ethical”. Companies, states, and even international organizations discuss ethical principles, such as fair, accountable, responsible, or safe AI, in numerous expert groups and ad hoc committees, such as the European Commission’s High-Level Expert Group on AI, the group on AI in Society of the Organisation for Economic Co-operation and Development (OECD), or the United Kingdom House of Lords’ Select Committee on Artificial Intelligence.

This may sound like a solid approach to tackling the dangers that AI poses, but to actually be impactful, these discussions must be grounded in language that is focused and actionable. Not only may the principles be defined differently depending on the stakeholders, but there are substantial differences in how the principles are interpreted and in what is required for them to materialize. In addition, ethical debates on AI are often dominated by American or Chinese companies, each propagating its own idea of ethical AI, which may in many cases conflict with the values of other cultures and nations. Not only do different countries have different ideas about which “ethics” principles need to be protected, but they also play starkly different roles in developing AI. A further problem is that when ethical guidelines are discussed, the suggestions often come from the tech companies themselves, while the voices of citizens and even governments are marginalized.

Self-regulation around ethical principles is too weak to address the far-reaching implications that AI technologies have had. Ethical principles lack clarity and enforcement mechanisms. We must stop focusing the discourse on ethical principles and instead shift the debate to human rights. Debates must be louder at the supranational level. International pressure must be put on states and companies that, by propagating risky AI technologies, fail to protect individuals. Leadership must be defined not by actors who come up with new iterations of ethical guidelines, but by those who develop legal obligations regarding AI that are anchored in and derived from a human rights perspective.

One way to do this would be to reaffirm the human-centric nature of AI development and deployment, following actionable standards of human rights law. The human rights legal framework has been around for decades and has been instrumental in pressuring states to change domestic laws. Nelson Mandela invoked the duties spelled out in the Universal Declaration of Human Rights while fighting to end apartheid in South Africa; in 1973, with Roe v. Wade, the United States Supreme Court followed a larger global trend of recognizing women’s human rights by protecting individuals from undue governmental interference in private affairs and enabling women to participate fully and equally in society; more recently, open access to the Internet has been recognized as a human right, essential not only to freedom of opinion, expression, association, and assembly, but also to mobilizing populations to call for equality, justice, and accountability, advancing global respect for human rights. These examples show how human rights standards have been applied to a diverse set of domestic and international rules. That these standards are actionable and enforceable shows that they are well suited to regulating the cross-border nature of AI technologies. AI systems must be scrutinized from a human rights perspective to analyze the current and future harms that AI creates or exacerbates, so that action can be taken to avoid them.

The adoption of AI technologies has spread across borders and has had diverse effects on societies all over the world. A globalized technology requires international obligations to mitigate societal problems that are now faced at an accelerated pace and a larger scale. Companies and states should strive to develop AI technologies that uphold human rights. Centering the AI discourse on human rights rather than simply on ethics can provide a clearer legal basis for the development and deployment of AI technologies. The international community must raise awareness, build consensus, analyze thoroughly how AI technologies violate human rights in different contexts, and develop paths toward effective legal remedies. Focusing the discourse on human rights rather than ethical principles can provide more accountability measures and more obligations for state and private actors, and can redirect the debate onto consistent and widely accepted legal principles developed over decades.