Ethical decision-making in human-automation collaboration: a case study of the nurse rostering problem

Vincent Bebien, Odile Bellenguez, Gilles Coppin, Anna Ma-Wyatt, Rachel Stephens
DOI: 10.1007/s43681-024-00459-w
Journal: AI and Ethics, vol. 5, no. 2, pp. 1163–1175
Published: 2024-04-03 (Journal Article)
Article page: https://link.springer.com/article/10.1007/s43681-024-00459-w
Open-access PDF: https://link.springer.com/content/pdf/10.1007/s43681-024-00459-w.pdf
Citations: 0

Abstract

As artificial intelligence (AI) is increasingly present in different aspects of society and its harmful impacts are more visible, concrete methods to help design ethical AI systems and limit currently encountered risks must be developed. Taking the example of a well-known Operations Research problem, the Nurse Rostering Problem (NRP), this paper presents a way to help close the gap between abstract principles and on-the-ground applications with two different steps. We first propose a normative step that uses dedicated scientific knowledge to provide new rules for an NRP model, with the aim of improving nurses’ well-being. However, this step alone may be insufficient to comprehensively deal with all key ethical issues, particularly autonomy and explicability. Therefore, as a complementary second step, we introduce an interactive process that integrates a human decision-maker in the loop and allows practical ethics to be applied. Using input from stakeholders to enrich a mathematical model may help compensate for flaws in automated tools.
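To make the "normative step" concrete, the sketch below shows a toy Nurse Rostering Problem in which a hypothetical well-being rule (no nurse works more than two consecutive days) is added alongside the usual coverage constraint. All parameters, names, and the specific rule are illustrative assumptions for exposition, not the model used in the paper.

```python
from itertools import product

# Toy NRP: assign 2 nurses over 4 days so that every day is covered by at
# least one nurse, while a hypothetical "well-being" rule (illustrating the
# paper's normative step) forbids more than 2 consecutive working days.
NURSES = 2
DAYS = 4
MAX_CONSECUTIVE = 2  # illustrative well-being rule

def covers_demand(schedule):
    """Coverage constraint: each day has at least one nurse on shift."""
    return all(any(schedule[n][d] for n in range(NURSES)) for d in range(DAYS))

def respects_wellbeing(schedule):
    """Well-being rule: no nurse works more than MAX_CONSECUTIVE days in a row."""
    for row in schedule:
        run = 0
        for working in row:
            run = run + 1 if working else 0
            if run > MAX_CONSECUTIVE:
                return False
    return True

def feasible_rosters():
    """Enumerate all 0/1 schedules satisfying both constraint families."""
    for flat in product([0, 1], repeat=NURSES * DAYS):
        schedule = [flat[n * DAYS:(n + 1) * DAYS] for n in range(NURSES)]
        if covers_demand(schedule) and respects_wellbeing(schedule):
            yield schedule

solutions = list(feasible_rosters())
print(f"{len(solutions)} feasible rosters")
```

In the interactive second step the paper describes, a human decision-maker would choose among (or further constrain) such feasible rosters rather than accepting a single automated output; real instances would of course use an integer-programming solver rather than enumeration.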
