{"title":"拥有决策:人工智能决策支持与属性差距。","authors":"Jannik Zeiser","doi":"10.1007/s11948-024-00485-1","DOIUrl":null,"url":null,"abstract":"<p><p>Artificial intelligence (AI) has long been recognised as a challenge to responsibility. Much of this discourse has been framed around robots, such as autonomous weapons or self-driving cars, where we arguably lack control over a machine's behaviour and therefore struggle to identify an agent that can be held accountable. However, most of today's AI is based on machine-learning technology that does not act on its own, but rather serves as a decision-support tool, automatically analysing data to help human agents make better decisions. I argue that decision-support tools pose a challenge to responsibility that goes beyond the familiar problem of finding someone to blame or punish for the behaviour of agent-like systems. Namely, they pose a problem for what we might call \"decision ownership\": they make it difficult to identify human agents to whom we can attribute value-judgements that are reflected in decisions. Drawing on recent philosophical literature on responsibility and its various facets, I argue that this is primarily a problem of attributability rather than of accountability. This particular responsibility problem comes in different forms and degrees, most obviously when an AI provides direct recommendations for actions, but also, less obviously, when it provides mere descriptive information on the basis of which a decision is made.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"30 4","pages":"27"},"PeriodicalIF":2.7000,"publicationDate":"2024-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11189344/pdf/","citationCount":"0","resultStr":"{\"title\":\"Owning Decisions: AI Decision-Support and the Attributability-Gap.\",\"authors\":\"Jannik Zeiser\",\"doi\":\"10.1007/s11948-024-00485-1\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Artificial intelligence (AI) has long been recognised as a challenge to responsibility. Much of this discourse has been framed around robots, such as autonomous weapons or self-driving cars, where we arguably lack control over a machine's behaviour and therefore struggle to identify an agent that can be held accountable. However, most of today's AI is based on machine-learning technology that does not act on its own, but rather serves as a decision-support tool, automatically analysing data to help human agents make better decisions. I argue that decision-support tools pose a challenge to responsibility that goes beyond the familiar problem of finding someone to blame or punish for the behaviour of agent-like systems. Namely, they pose a problem for what we might call \\\"decision ownership\\\": they make it difficult to identify human agents to whom we can attribute value-judgements that are reflected in decisions. Drawing on recent philosophical literature on responsibility and its various facets, I argue that this is primarily a problem of attributability rather than of accountability. 
This particular responsibility problem comes in different forms and degrees, most obviously when an AI provides direct recommendations for actions, but also, less obviously, when it provides mere descriptive information on the basis of which a decision is made.</p>\",\"PeriodicalId\":49564,\"journal\":{\"name\":\"Science and Engineering Ethics\",\"volume\":\"30 4\",\"pages\":\"27\"},\"PeriodicalIF\":2.7000,\"publicationDate\":\"2024-06-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11189344/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Science and Engineering Ethics\",\"FirstCategoryId\":\"98\",\"ListUrlMain\":\"https://doi.org/10.1007/s11948-024-00485-1\",\"RegionNum\":2,\"RegionCategory\":\"哲学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ENGINEERING, MULTIDISCIPLINARY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Science and Engineering Ethics","FirstCategoryId":"98","ListUrlMain":"https://doi.org/10.1007/s11948-024-00485-1","RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, MULTIDISCIPLINARY","Score":null,"Total":0}
Owning Decisions: AI Decision-Support and the Attributability-Gap.
Artificial intelligence (AI) has long been recognised as a challenge to responsibility. Much of this discourse has been framed around robots, such as autonomous weapons or self-driving cars, where we arguably lack control over a machine's behaviour and therefore struggle to identify an agent that can be held accountable. However, most of today's AI is based on machine-learning technology that does not act on its own, but rather serves as a decision-support tool, automatically analysing data to help human agents make better decisions. I argue that decision-support tools pose a challenge to responsibility that goes beyond the familiar problem of finding someone to blame or punish for the behaviour of agent-like systems. Namely, they pose a problem for what we might call "decision ownership": they make it difficult to identify human agents to whom we can attribute value-judgements that are reflected in decisions. Drawing on recent philosophical literature on responsibility and its various facets, I argue that this is primarily a problem of attributability rather than of accountability. This particular responsibility problem comes in different forms and degrees, most obviously when an AI provides direct recommendations for actions, but also, less obviously, when it provides mere descriptive information on the basis of which a decision is made.
Journal Introduction
Science and Engineering Ethics is an international multidisciplinary journal dedicated to exploring ethical issues associated with science and engineering. It covers professional education, research, and practice, as well as the effects of technological innovations and research findings on society.
While the focus of this journal is on science and engineering, contributions from a broad range of disciplines, including social sciences and humanities, are welcomed. Areas of interest include, but are not limited to, ethics of new and emerging technologies, research ethics, computer ethics, energy ethics, animals and human subjects ethics, ethics education in science and engineering, ethics in design, biomedical ethics, values in technology and innovation.
We welcome contributions that deal with these issues from an international perspective, particularly from countries that are underrepresented in these discussions.