Large Legal Fictions: Profiling Legal Hallucinations in Large Language Models

Journal of Legal Analysis | Impact Factor: 3.0 | CAS Region 1 (Sociology) | JCR Q1 (Law) | Pub Date: 2024-06-26 | DOI: 10.1093/jla/laae003
Matthew Dahl, Varun Magesh, Mirac Suzgun, Daniel E Ho
{"title":"Large Legal Fictions: Profiling Legal Hallucinations in Large Language Models","authors":"Matthew Dahl, Varun Magesh, Mirac Suzgun, Daniel E Ho","doi":"10.1093/jla/laae003","DOIUrl":null,"url":null,"abstract":"Do large language models (LLMs) know the law? LLMs are increasingly being used to augment legal practice, education, and research, yet their revolutionary potential is threatened by the presence of “hallucinations”—textual output that is not consistent with legal facts. We present the first systematic evidence of these hallucinations in public-facing LLMs, documenting trends across jurisdictions, courts, time periods, and cases. Using OpenAI’s ChatGPT 4 and other public models, we show that LLMs hallucinate at least 58% of the time, struggle to predict their own hallucinations, and often uncritically accept users’ incorrect legal assumptions. We conclude by cautioning against the rapid and unsupervised integration of popular LLMs into legal tasks, and we develop a typology of legal hallucinations to guide future research in this area.","PeriodicalId":45189,"journal":{"name":"Journal of Legal Analysis","volume":"87 1","pages":""},"PeriodicalIF":3.0000,"publicationDate":"2024-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Legal Analysis","FirstCategoryId":"90","ListUrlMain":"https://doi.org/10.1093/jla/laae003","RegionNum":1,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"LAW","Score":null,"Total":0}
Citations: 0

Abstract

Do large language models (LLMs) know the law? LLMs are increasingly being used to augment legal practice, education, and research, yet their revolutionary potential is threatened by the presence of “hallucinations”—textual output that is not consistent with legal facts. We present the first systematic evidence of these hallucinations in public-facing LLMs, documenting trends across jurisdictions, courts, time periods, and cases. Using OpenAI’s ChatGPT 4 and other public models, we show that LLMs hallucinate at least 58% of the time, struggle to predict their own hallucinations, and often uncritically accept users’ incorrect legal assumptions. We conclude by cautioning against the rapid and unsupervised integration of popular LLMs into legal tasks, and we develop a typology of legal hallucinations to guide future research in this area.
Source Journal
CiteScore: 4.10
Self-citation rate: 0.00%
Articles published: 3
Review time: 16 weeks
Latest articles in this journal
The Limits of Formalism in the Separation of Powers
Putting Freedom of Contract in its Place
Large Legal Fictions: Profiling Legal Hallucinations in Large Language Models
How Election Rules Affect Who Wins
Remote Work and City Decline: Lessons From the Garment District