Challenges of Aligning Artificial Intelligence with Human Values

M. Sutrop
{"title":"Challenges of Aligning Artificial Intelligence with Human Values","authors":"M. Sutrop","doi":"10.11590/abhps.2020.2.04","DOIUrl":null,"url":null,"abstract":"As artificial intelligence (AI) systems are becoming increasingly autonomous and will soon be able to make decisions on their own about what to do, AI researchers have started to talk about the need to align AI with human values. The AI ‘value alignment problem’ faces two kinds of challenges—a technical and a normative one—which are interrelated. The technical challenge deals with the question of how to encode human values in artificial intelligence. The normative challenge is associated with two questions: “Which values or whose values should artificial intelligence align with?” My concern is that AI developers underestimate the difficulty of answering the normative question. They hope that we can easily identify the purposes we really desire and that they can focus on the design of those objectives. But how are we to decide which objectives or values to induce in AI, given that there is a plurality of values and moral principles and that our everyday life is full of moral disagreements? In my paper I will show that although it is not realistic to reach an agreement on what we, humans, really want as people value different things and seek different ends, it may be possible to agree on what we do not want to happen, considering the possibility that intelligence, equal to our own, or even exceeding it, can be created. I will argue for pluralism (and not for relativism!) which is compatible with objectivism. In spite of the fact that there is no uniquely best solution to every moral problem, it is still possible to identify which answers are wrong. And this is where we should begin the value alignment of AI.","PeriodicalId":37693,"journal":{"name":"Acta Baltica Historiae et Philosophiae Scientiarum","volume":"8 1","pages":"54-72"},"PeriodicalIF":0.0000,"publicationDate":"2020-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"9","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Acta Baltica Historiae et Philosophiae Scientiarum","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.11590/abhps.2020.2.04","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"Arts and Humanities","Score":null,"Total":0}

Abstract

As artificial intelligence (AI) systems become increasingly autonomous and will soon be able to decide on their own what to do, AI researchers have started to talk about the need to align AI with human values. The AI ‘value alignment problem’ faces two kinds of challenges, a technical one and a normative one, which are interrelated. The technical challenge deals with the question of how to encode human values in artificial intelligence. The normative challenge is associated with two questions: “Which values should artificial intelligence align with?” and “Whose values should they be?” My concern is that AI developers underestimate the difficulty of answering the normative questions. They hope that we can easily identify the objectives we really desire and that they can then focus on designing AI to pursue those objectives. But how are we to decide which objectives or values to instil in AI, given that there is a plurality of values and moral principles and that our everyday life is full of moral disagreements? In my paper I will show that although it is not realistic to reach an agreement on what we, humans, really want, since people value different things and seek different ends, it may be possible to agree on what we do not want to happen, given the possibility that an intelligence equal to or even exceeding our own can be created. I will argue for pluralism (not relativism!), which is compatible with objectivism. Even though there is no uniquely best solution to every moral problem, it is still possible to identify which answers are wrong. And this is where the value alignment of AI should begin.
Source journal metrics: CiteScore 0.80; self-citation rate 0.00%; articles published 10; review time 30 weeks.
Journal description: Acta Baltica Historiae et Philosophiae Scientiarum sees its mission as offering publishing opportunities to Baltic and non-Baltic scholars in the history and philosophy of the natural and social sciences (including legal studies), and to promote and further international cooperation between scholars of different countries in this field.