Artificial intelligence and mental capacity legislation: Opening Pandora's modem

International Journal of Law and Psychiatry · IF 1.4 · CAS Medicine Q4 · JCR Q1 (LAW) · Pub Date: 2024-04-04 · DOI: 10.1016/j.ijlp.2024.101985
Full text: https://www.sciencedirect.com/science/article/pii/S0160252724000347
Maria Redahan, Brendan D. Kelly
{"title":"Artificial intelligence and mental capacity legislation: Opening Pandora's modem","authors":"Maria Redahan ,&nbsp;Brendan D. Kelly","doi":"10.1016/j.ijlp.2024.101985","DOIUrl":null,"url":null,"abstract":"<div><p>People with impaired decision-making capacity enjoy the same rights to access technology as people with full capacity. Our paper looks at realising this right in the specific contexts of artificial intelligence (AI) and mental capacity legislation. Ireland's Assisted Decision-Making (Capacity) Act, 2015 commenced in April 2023 and refers to ‘assistive technology’ within its ‘communication’ criterion for capacity. We explore the potential benefits and risks of AI in assisting communication under this legislation and seek to identify principles or lessons which might be applicable in other jurisdictions. We focus especially on Ireland's provisions for advance healthcare directives because previous research demonstrates that common barriers to advance care planning include (i) lack of knowledge and skills, (ii) fear of starting conversations about advance care planning, and (iii) lack of time. We hypothesise that these barriers might be overcome, at least in part, by using generative AI which is already freely available worldwide. Bodies such as the United Nations have produced guidance about ethical use of AI and these guide our analysis. One of the ethical risks in the current context is that AI would reach beyond communication and start to influence the content of decisions, especially among people with impaired decision-making capacity. For example, when we asked one AI model to ‘Make me an advance healthcare directive’, its initial response did not explicitly suggest content for the directive, but it did suggest topics that might be included, which could be seen as setting an agenda. One possibility for circumventing this and other shortcomings, such as concerns around accuracy of information, is to look to foundational models of AI. With their capabilities to be trained and fine-tuned to downstream tasks, purpose-designed AI models could be adapted to provide education about capacity legislation, facilitate patient and staff interaction, and allow interactive updates by healthcare professionals. These measures could optimise the benefits of AI and minimise risks. Similar efforts have been made to use AI more responsibly in healthcare by training large language models to answer healthcare questions more safely and accurately. We highlight the need for open discussion about optimising the potential of AI while minimising risks in this population.</p></div>","PeriodicalId":47930,"journal":{"name":"International Journal of Law and Psychiatry","volume":null,"pages":null},"PeriodicalIF":1.4000,"publicationDate":"2024-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0160252724000347/pdfft?md5=08c86753e77bd95dd0c4da8a2a5354cf&pid=1-s2.0-S0160252724000347-main.pdf","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Law and Psychiatry","FirstCategoryId":"3","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0160252724000347","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"LAW","Score":null,"Total":0}
Citations: 0

Abstract

People with impaired decision-making capacity enjoy the same rights to access technology as people with full capacity. Our paper looks at realising this right in the specific contexts of artificial intelligence (AI) and mental capacity legislation. Ireland's Assisted Decision-Making (Capacity) Act, 2015 commenced in April 2023 and refers to ‘assistive technology’ within its ‘communication’ criterion for capacity. We explore the potential benefits and risks of AI in assisting communication under this legislation and seek to identify principles or lessons which might be applicable in other jurisdictions. We focus especially on Ireland's provisions for advance healthcare directives because previous research demonstrates that common barriers to advance care planning include (i) lack of knowledge and skills, (ii) fear of starting conversations about advance care planning, and (iii) lack of time. We hypothesise that these barriers might be overcome, at least in part, by using generative AI which is already freely available worldwide. Bodies such as the United Nations have produced guidance about ethical use of AI and these guide our analysis. One of the ethical risks in the current context is that AI would reach beyond communication and start to influence the content of decisions, especially among people with impaired decision-making capacity. For example, when we asked one AI model to ‘Make me an advance healthcare directive’, its initial response did not explicitly suggest content for the directive, but it did suggest topics that might be included, which could be seen as setting an agenda. One possibility for circumventing this and other shortcomings, such as concerns around accuracy of information, is to look to foundational models of AI. With their capabilities to be trained and fine-tuned to downstream tasks, purpose-designed AI models could be adapted to provide education about capacity legislation, facilitate patient and staff interaction, and allow interactive updates by healthcare professionals. These measures could optimise the benefits of AI and minimise risks. Similar efforts have been made to use AI more responsibly in healthcare by training large language models to answer healthcare questions more safely and accurately. We highlight the need for open discussion about optimising the potential of AI while minimising risks in this population.
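To make the proposal above more concrete, the following is a minimal sketch in Python of how a general-purpose chat model might be scoped to support communication about advance healthcare directives without proposing their content. It assumes an OpenAI-style chat-completions client; the model name, system prompt, and guardrail wording are illustrative assumptions for discussion, not the authors' implementation or a validated clinical tool.

```python
# Illustrative sketch only: constraining a chat model so that it supports
# *communication* about advance healthcare directives without suggesting the
# *content* of the decision. Assumes the OpenAI Python SDK and a valid API key;
# the system prompt and model name are hypothetical, not taken from the paper.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = """You are a communication aid for a person preparing an advance
healthcare directive under Ireland's Assisted Decision-Making (Capacity) Act 2015.
You may: explain what an advance healthcare directive is, describe the formal
requirements of the Act in plain language, and ask open questions that help the
person express their own wishes.
You must not: recommend, rank, or draft specific treatment refusals or requests,
or otherwise influence the content of the person's decision. If asked to do so,
explain that the content must come from the person and their healthcare team."""

def communication_support(user_message: str) -> str:
    """Return a reply that assists communication without setting the agenda."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
        temperature=0.2,  # keep explanations of the legislation conservative
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # The prompt discussed in the abstract; with the guardrail, the reply should
    # explain the process rather than propose directive content.
    print(communication_support("Make me an advance healthcare directive"))
```

A purpose-designed foundational model would go further, for example by fine-tuning on vetted material about the 2015 Act, but the same design principle applies: the system is scoped to education and communication support, and the content of the decision is left to the person and their healthcare team.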

Source journal metrics: CiteScore 4.70 · Self-citation rate 8.70% · Articles published 54 · Review time 41 days
About the journal: The International Journal of Law and Psychiatry is intended to provide a multi-disciplinary forum for the exchange of ideas and information among professionals concerned with the interface of law and psychiatry. There is a growing awareness of the need for exploring the fundamental goals of both the legal and psychiatric systems and the social implications of their interaction. The journal seeks to enhance understanding and cooperation in the field through the varied approaches represented, not only by law and psychiatry, but also by the social sciences and related disciplines.
Latest articles in this journal:
- Recent research involving consent, alcohol intoxication, and memory: Implications for expert testimony in sexual assault cases
- Comparison of sociodemographic, clinical, and alexithymia characteristics of schizophrenia patients with and without criminal records
- Assessing mental capacity in the context of abuse and neglect: A relational lens
- Mediating the court procedural justice–delinquency relationship with certainty perceptions and legitimacy beliefs
- RECAPACITA project: Comparing neuropsychological profiles in people with severe mental disorders, with and without capacity modification