Beverley Townsend, Katie J. Parnell, Sinem Getir Yaman, Gabriel Nemirovsky, Radu Calinescu
Normative conflict resolution through human–autonomous agent interaction
Journal of Responsible Technology, Volume 21, March 2025, Article 100114
DOI: 10.1016/j.jrt.2025.100114 · https://www.sciencedirect.com/science/article/pii/S2666659625000101
Citations: 0
Abstract
We have become increasingly reliant on the decision-making capabilities of autonomous agents. These decisions are often executed under non-ideal conditions, carry significant moral risk, and directly affect human well-being. Such decisions may involve the choice to optimise one value over another: promoting safety over human autonomy, or ensuring accuracy over fairness, for example. All too often, decision-making of this kind requires a level of normative evaluation involving ethically defensible moral choices and value judgements, compromises, and trade-offs. Guided by normative principles, such decisions inform the possible courses of action the agent may take and may even change a set of established actionable courses.
This paper seeks to map the decision-making processes in normative choice scenarios wherein autonomous agents are intrinsically linked to the decision process. A care-robot is used to illustrate how a normative choice, underpinned by normative principles, arises where the agent must ‘choose’ an actionable path involving the administration of critical or non-critical medication. Critically, the choice depends upon the trade-off between two normative principles: respect for human autonomy and the prevention of harm. An additional dimension is introduced: the urgency of the medication to be administered, which further informs and can change the course of action to be followed.
We offer a means to map decision-making involving a normative choice within a decision ladder using stakeholder input, and, using defeasibility, we show how specification rules with defeaters can be written to operationalise such a choice.
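To give a concrete flavour of what a defeasible specification rule might look like in this scenario, the sketch below encodes a default obligation to respect the patient's refusal (human autonomy) together with a defeater that overrides it when the medication is both critical and urgent (prevention of harm). This is a minimal illustrative sketch in Python, not the authors' formalism; the `Situation` fields, rule names, and the returned action labels are hypothetical assumptions introduced here for illustration.

```python
# Minimal sketch of a defeasible rule for the care-robot medication scenario.
# Default rule R1: if the patient refuses, withhold the medication (respect autonomy).
# Defeater D1: R1 is defeated when the medication is critical AND urgent,
# in which case harm prevention takes precedence and a caregiver is alerted.
# (Field names and action labels are hypothetical, not taken from the paper.)

from dataclasses import dataclass


@dataclass
class Situation:
    patient_refuses: bool       # the patient declines the medication
    medication_critical: bool   # the medication is critical to the patient's health
    urgent: bool                # the medication must be administered now


def decide_action(s: Situation) -> str:
    """Return an actionable course for the care-robot under R1 and its defeater D1."""
    if s.patient_refuses:
        if s.medication_critical and s.urgent:
            # D1 defeats R1: escalate rather than silently comply with the refusal
            return "alert_caregiver_and_prompt_again"
        # R1 applies: respect the patient's refusal
        return "withhold_medication"
    # No normative conflict: proceed with the planned administration
    return "administer_medication"


# Example: a refusal of a critical, urgent medication triggers the defeater.
print(decide_action(Situation(patient_refuses=True,
                              medication_critical=True,
                              urgent=True)))
```

The point of the sketch is that the urgency dimension acts as a defeater: it does not remove the default rule, but changes which course of action the agent follows when the two normative principles conflict.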