Preferences in AI algorithms: The need for relevant risk attitudes in automated decisions under uncertainties.

IF 3.0 · JCR Q1 (Mathematics, Interdisciplinary Applications) · CAS Quartile 3 (Medicine) · Risk Analysis · Pub Date: 2024-10-01 · Epub Date: 2024-01-06 · DOI: 10.1111/risa.14268
Elisabeth Paté-Cornell
{"title":"人工智能算法中的偏好:不确定情况下的自动决策需要相关的风险态度。","authors":"Elisabeth Paté-Cornell","doi":"10.1111/risa.14268","DOIUrl":null,"url":null,"abstract":"<p><p>Artificial intelligence (AI) has the potential to improve life and reduce risks by providing large amounts of information embedded in big databases and by suggesting or implementing automated decisions under uncertainties. Yet, in the design of a prescriptive AI algorithm, some problems may occur, first and clearly, if the AI information is wrong or incomplete. But the main point of this article is that under uncertainties, the decision algorithm, rational or not, includes, in one way or another, a risk attitude in addition to deterministic preferences. That risk attitude implemented in the software is chosen by the analysts, the organization that they serve, the experts who inform them, and more generally by the process of identifying possible options. The problem is that it may or may not represent, as it should, the preferences of the actual decision maker (the risk manager) and of the people subjected to his/her decisions. This article briefly describes the sometimes-serious problem of that discrepancy between the preferences of the risk managers who use an AI output, and the risk attitude embedded in the AI system. The recommendation is to make these AI factors as accessible and transparent as possible and to allow for preference adjustments in the model if needed. The formulation of two simplified examples is described, that of a medical doctor and his/her patient when using an AI system to decide of a treatment option, and that of a skipper in a sailing race such as the America's Cup, receiving AI-processed sensor signals about the sailing conditions on different possible courses.</p>","PeriodicalId":21472,"journal":{"name":"Risk Analysis","volume":" ","pages":"2317-2323"},"PeriodicalIF":3.0000,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Preferences in AI algorithms: The need for relevant risk attitudes in automated decisions under uncertainties.\",\"authors\":\"Elisabeth Paté-Cornell\",\"doi\":\"10.1111/risa.14268\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Artificial intelligence (AI) has the potential to improve life and reduce risks by providing large amounts of information embedded in big databases and by suggesting or implementing automated decisions under uncertainties. Yet, in the design of a prescriptive AI algorithm, some problems may occur, first and clearly, if the AI information is wrong or incomplete. But the main point of this article is that under uncertainties, the decision algorithm, rational or not, includes, in one way or another, a risk attitude in addition to deterministic preferences. That risk attitude implemented in the software is chosen by the analysts, the organization that they serve, the experts who inform them, and more generally by the process of identifying possible options. The problem is that it may or may not represent, as it should, the preferences of the actual decision maker (the risk manager) and of the people subjected to his/her decisions. This article briefly describes the sometimes-serious problem of that discrepancy between the preferences of the risk managers who use an AI output, and the risk attitude embedded in the AI system. 
The recommendation is to make these AI factors as accessible and transparent as possible and to allow for preference adjustments in the model if needed. The formulation of two simplified examples is described, that of a medical doctor and his/her patient when using an AI system to decide of a treatment option, and that of a skipper in a sailing race such as the America's Cup, receiving AI-processed sensor signals about the sailing conditions on different possible courses.</p>\",\"PeriodicalId\":21472,\"journal\":{\"name\":\"Risk Analysis\",\"volume\":\" \",\"pages\":\"2317-2323\"},\"PeriodicalIF\":3.0000,\"publicationDate\":\"2024-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Risk Analysis\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1111/risa.14268\",\"RegionNum\":3,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2024/1/6 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"Q1\",\"JCRName\":\"MATHEMATICS, INTERDISCIPLINARY APPLICATIONS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Risk Analysis","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1111/risa.14268","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/1/6 0:00:00","PubModel":"Epub","JCR":"Q1","JCRName":"MATHEMATICS, INTERDISCIPLINARY APPLICATIONS","Score":null,"Total":0}
Citations: 0

Abstract


Artificial intelligence (AI) has the potential to improve life and reduce risks by providing large amounts of information embedded in big databases and by suggesting or implementing automated decisions under uncertainties. Yet, in the design of a prescriptive AI algorithm, some problems may occur, first and clearly, if the AI information is wrong or incomplete. But the main point of this article is that under uncertainties, the decision algorithm, rational or not, includes, in one way or another, a risk attitude in addition to deterministic preferences. That risk attitude implemented in the software is chosen by the analysts, the organization that they serve, the experts who inform them, and more generally by the process of identifying possible options. The problem is that it may or may not represent, as it should, the preferences of the actual decision maker (the risk manager) and of the people subjected to his/her decisions. This article briefly describes the sometimes-serious problem of that discrepancy between the preferences of the risk managers who use an AI output and the risk attitude embedded in the AI system. The recommendation is to make these AI factors as accessible and transparent as possible and to allow for preference adjustments in the model if needed. The formulation of two simplified examples is described: that of a medical doctor and his/her patient using an AI system to decide on a treatment option, and that of a skipper in a sailing race such as the America's Cup, receiving AI-processed sensor signals about the sailing conditions on different possible courses.
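The abstract's central claim, that a prescriptive decision algorithm under uncertainty necessarily embeds some risk attitude, can be made concrete with a minimal sketch. The Python snippet below is illustrative only and is not from the paper: the option names, the 0-100 outcome scale, and the probabilities are hypothetical. It ranks two treatment-style options by expected exponential utility and shows that the recommendation flips depending on the risk-tolerance parameter that the analyst hard-codes.

```python
import math

# Illustrative sketch only: option names, outcome scale (0-100), and
# probabilities are hypothetical, not taken from the paper.
# Each option is a lottery: a list of (probability, outcome) pairs.
option_a = [(0.5, 90.0), (0.5, 30.0)]  # aggressive treatment: high variance, EV = 60
option_b = [(1.0, 55.0)]               # conservative treatment: a near-sure 55

def expected_utility(lottery, risk_tolerance):
    """Expected exponential utility, u(x) = 1 - exp(-x / rho).

    A small risk_tolerance rho encodes strong risk aversion; as rho grows
    large, the ranking converges to plain expected value (risk neutrality).
    """
    return sum(p * (1.0 - math.exp(-x / risk_tolerance)) for p, x in lottery)

# Same data, two different risk attitudes baked into the "algorithm":
for rho in (20.0, 1000.0):
    eu_a = expected_utility(option_a, rho)
    eu_b = expected_utility(option_b, rho)
    choice = "A" if eu_a > eu_b else "B"
    print(f"rho={rho:6.0f}: EU(A)={eu_a:.4f}  EU(B)={eu_b:.4f}  -> recommend {choice}")
# rho=20 recommends B (risk-averse); rho=1000 recommends A (near risk-neutral).
```

Exposing a parameter like rho to the physician and the patient, instead of fixing it silently in the software, is one way to implement the article's recommendation that such factors be transparent and adjustable.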

Source journal
Risk Analysis (Mathematics: Mathematics, Interdisciplinary Applications)
CiteScore: 7.50
Self-citation rate: 10.50%
Articles per year: 183
Review time: 4.2 months
About the journal: Published on behalf of the Society for Risk Analysis, Risk Analysis is ranked among the top 10 journals in the ISI Journal Citation Reports under the social sciences, mathematical methods category, and provides a focal point for new developments in the field of risk analysis. This international peer-reviewed journal is committed to publishing critical empirical research and commentaries dealing with risk issues. The topics covered include:
• Human health and safety risks
• Microbial risks
• Engineering
• Mathematical modeling
• Risk characterization
• Risk communication
• Risk management and decision-making
• Risk perception, acceptability, and ethics
• Laws and regulatory policy
• Ecological risks.