Title: Preferences in AI algorithms: The need for relevant risk attitudes in automated decisions under uncertainties
Author: Elisabeth Paté-Cornell
Journal: Risk Analysis (Society for Risk Analysis), pages 2317-2323; JCR Q1, Mathematics, Interdisciplinary Applications; Impact Factor 3.0
DOI: 10.1111/risa.14268
Published: 2024-10-01 (Epub 2024-01-06)
Citations: 0
Abstract
Artificial intelligence (AI) has the potential to improve life and reduce risks by providing large amounts of information embedded in big databases and by suggesting or implementing automated decisions under uncertainty. Yet, in the design of a prescriptive AI algorithm, problems may occur, first and most obviously, if the AI information is wrong or incomplete. The main point of this article, however, is that under uncertainty the decision algorithm, rational or not, includes, in one way or another, a risk attitude in addition to deterministic preferences. The risk attitude implemented in the software is chosen by the analysts, the organization they serve, the experts who inform them, and more generally by the process of identifying possible options. The problem is that it may or may not represent, as it should, the preferences of the actual decision maker (the risk manager) and of the people subjected to his or her decisions. This article briefly describes the sometimes serious discrepancy between the preferences of the risk managers who use an AI output and the risk attitude embedded in the AI system. The recommendation is to make these AI factors as accessible and transparent as possible and to allow for preference adjustments in the model if needed. The formulation of two simplified examples is described: that of a medical doctor and patient using an AI system to decide on a treatment option, and that of a skipper in a sailing race such as the America's Cup, receiving AI-processed sensor signals about the sailing conditions on different possible courses.
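The abstract's central claim can be illustrated with a minimal expected-utility sketch: the same two options get opposite recommendations depending on which utility function (i.e., which risk attitude) the algorithm's designers embedded. The options, outcome scores, and utility parameters below are hypothetical illustrations, not taken from the article; they loosely echo its medical-treatment example.

```python
import math

# Each option maps to a list of (probability, outcome) pairs.
# Outcomes are illustrative "quality of life" scores in [0, 1];
# the numbers are hypothetical, chosen so the recommendation flips.
OPTIONS = {
    "A (safe treatment)":  [(1.0, 0.70)],
    "B (risky treatment)": [(0.5, 1.00), (0.5, 0.45)],
}

def recommend(options, utility):
    """Return the option that maximizes expected utility under `utility`."""
    return max(options,
               key=lambda name: sum(p * utility(x) for p, x in options[name]))

risk_neutral = lambda x: x                     # expected value: linear utility
risk_averse  = lambda x: 1 - math.exp(-3 * x)  # exponential (concave) utility

print(recommend(OPTIONS, risk_neutral))  # "B (risky treatment)": EV 0.725 > 0.70
print(recommend(OPTIONS, risk_averse))   # "A (safe treatment)": curvature flips it
```

The point of the sketch is that the flip is driven entirely by the utility function hard-coded into `recommend`'s caller; a patient whose risk attitude differs from the one the analysts chose would receive the wrong recommendation, which is the discrepancy the article warns about.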
Journal overview:
Published on behalf of the Society for Risk Analysis, Risk Analysis is ranked among the top 10 journals in the ISI Journal Citation Reports under the social sciences, mathematical methods category, and provides a focal point for new developments in the field of risk analysis. This international peer-reviewed journal is committed to publishing critical empirical research and commentaries dealing with risk issues. The topics covered include:
• Human health and safety risks
• Microbial risks
• Engineering
• Mathematical modeling
• Risk characterization
• Risk communication
• Risk management and decision-making
• Risk perception, acceptability, and ethics
• Laws and regulatory policy
• Ecological risks.