Ethical and preventive legal technology

Georgios Stathis, Jaap van den Herik
{"title":"Ethical and preventive legal technology","authors":"Georgios Stathis,&nbsp;Jaap van den Herik","doi":"10.1007/s43681-023-00413-2","DOIUrl":null,"url":null,"abstract":"<div><p>Preventive Legal Technology (PLT) is a new field of Artificial Intelligence (AI) investigating the <i>intelligent prevention of disputes</i>. The concept integrates the theories of <i>preventive law</i> and <i>legal technology</i>. Our goal is to give ethics a place in the new technology. By <i>explaining</i> the decisions of PLT, we aim to achieve a higher degree of <i>trustworthiness</i> because explicit explanations are expected to improve the level of <i>transparency</i> and <i>accountability</i>. Trustworthiness is an urgent topic in the discussion on doing AI research ethically and accounting for the regulations. For this purpose, we examine the limitations of rule-based explainability for PLT. Hence, our Problem Statement reads: <i>to what extent is it possible to develop an explainable and trustworthy Preventive Legal Technology?</i> After an insightful literature review, we focus on case studies with applications. The results describe (1) the effectivity of PLT and (2) its responsibility. The discussion is challenging and multivariate, investigating deeply the relevance of PLT for LegalTech applications in light of the development of the AI Act (currently still in its final phase of process) and the work of the High-Level Expert Group (HLEG) on AI. 
On the ethical side, explaining AI decisions for small PLT domains is clearly possible, with direct effects on trustworthiness due to increased transparency and accountability.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 2","pages":"1069 - 1086"},"PeriodicalIF":0.0000,"publicationDate":"2024-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s43681-023-00413-2.pdf","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"AI and ethics","FirstCategoryId":"1085","ListUrlMain":"https://link.springer.com/article/10.1007/s43681-023-00413-2","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Preventive Legal Technology (PLT) is a new field of Artificial Intelligence (AI) investigating the intelligent prevention of disputes. The concept integrates the theories of preventive law and legal technology. Our goal is to give ethics a place in the new technology. By explaining the decisions of PLT, we aim to achieve a higher degree of trustworthiness, because explicit explanations are expected to improve transparency and accountability. Trustworthiness is an urgent topic in the discussion on conducting AI research ethically and in accordance with the regulations. For this purpose, we examine the limitations of rule-based explainability for PLT. Hence, our Problem Statement reads: to what extent is it possible to develop an explainable and trustworthy Preventive Legal Technology? After an insightful literature review, we focus on case studies with applications. The results describe (1) the effectiveness of PLT and (2) its responsibility. The discussion is challenging and multivariate, investigating in depth the relevance of PLT for LegalTech applications in light of the development of the AI Act (at the time of writing still in the final phase of the legislative process) and the work of the High-Level Expert Group (HLEG) on AI. On the ethical side, explaining AI decisions for small PLT domains is clearly possible, with direct effects on trustworthiness due to increased transparency and accountability.
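The abstract's central mechanism is rule-based explainability: because every conclusion of a rule-based PLT system traces back to an explicit, named rule, the system is transparent by construction. A minimal sketch of this idea follows; the rule names, contract facts, and thresholds are hypothetical illustrations, not taken from the paper.

```python
# Minimal sketch of rule-based explainability for dispute prevention.
# All rules and facts below are hypothetical examples, not the paper's system.

def check_contract(facts):
    """Apply explicit rules to contract facts.

    Returns (risk_flags, explanation): each flagged risk is paired with the
    rule that produced it, so the decision can be explained step by step.
    """
    rules = [
        ("missing_jurisdiction_clause",
         lambda f: "jurisdiction" not in f,
         "No jurisdiction clause: venue disputes become likely."),
        ("payment_term_over_90_days",
         lambda f: f.get("payment_term_days", 0) > 90,
         "Payment term exceeds 90 days: late-payment disputes become likely."),
    ]
    flags, explanation = [], []
    for name, condition, reason in rules:
        if condition(facts):  # a rule "fires" when its condition holds
            flags.append(name)
            explanation.append(f"Rule '{name}' fired: {reason}")
    return flags, explanation

flags, why = check_contract({"payment_term_days": 120})
# Both rules fire: the facts lack a jurisdiction clause, and 120 > 90.
```

The explanation is simply the trace of fired rules, which is why small, well-delimited domains lend themselves to this kind of transparency, while scaling the rule base is where the limitations the abstract mentions arise.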
