Automatic Summarization of Court Decision Documents over Narcotic Cases Using BERT

G. Wicaksono, Sheila Fitria Al asqalani, Yufis Azhar, N. Hidayah, Andreawana Andreawana
{"title":"Automatic Summarization of Court Decision Documents over Narcotic Cases Using BERT","authors":"G. Wicaksono, Sheila Fitria Al asqalani, Yufis Azhar, N. Hidayah, Andreawana Andreawana","doi":"10.30630/joiv.7.2.1811","DOIUrl":null,"url":null,"abstract":"Reviewing court decision documents for references in handling similar cases can be time-consuming. From this perspective, we need a system that can allow the summarization of court decision documents to enable adequate information extraction. This study used 50 court decision documents taken from the official website of the Supreme Court of the Republic of Indonesia, with the cases raised being Narcotics and Psychotropics. The court decision document dataset was divided into two types, court decision documents with the identity of the defendant and court decision documents without the defendant's identity. We used BERT specific to the IndoBERT model to summarize the court decision documents. This study uses four types of IndoBert models: IndoBERT-Base-Phase 1, IndoBERT-Lite-Bas-Phase 1, IndoBERT-Large-Phase 1, and IndoBERT-Lite-Large-Phase 1. This study also uses three types of ratios and ROUGE-N in summarizing court decision documents consisting of ratios of 20%, 30%, and 40% ratios, as well as ROUGE1, ROUGE2, and ROUGE3. The results have found that IndoBERT pre-trained model had a better performance in summarizing court decision documents with or without the defendant's identity with a 40% summarizing ratio. The highest ROUGE score produced by IndoBERT was found in the INDOBERT-LITE-BASE PHASE 1 model with a ROUGE value of 1.00 for documents with the defendant's identity and 0.970 for documents without the defendant's identity at a ratio of 40% in R-1. For future research, it is expected to be able to use other types of Bert models such as IndoBERT Phase-2, LegalBert, etc.","PeriodicalId":32468,"journal":{"name":"JOIV International Journal on Informatics Visualization","volume":"76 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2023-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"JOIV International Journal on Informatics Visualization","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.30630/joiv.7.2.1811","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"Decision Sciences","Score":null,"Total":0}
引用次数: 1

Abstract

Reviewing court decision documents as references for handling similar cases can be time-consuming. From this perspective, a system that summarizes court decision documents is needed to enable adequate information extraction. This study used 50 court decision documents taken from the official website of the Supreme Court of the Republic of Indonesia, covering narcotics and psychotropics cases. The dataset was divided into two types: court decision documents that include the defendant's identity and court decision documents without it. We used BERT, specifically the IndoBERT model, to summarize the court decision documents. Four IndoBERT models were used: IndoBERT-Base Phase 1, IndoBERT-Lite-Base Phase 1, IndoBERT-Large Phase 1, and IndoBERT-Lite-Large Phase 1. Three summarization ratios (20%, 30%, and 40%) were evaluated with ROUGE-N metrics (ROUGE-1, ROUGE-2, and ROUGE-3). The results show that the pre-trained IndoBERT models performed best on court decision documents, with or without the defendant's identity, at a 40% summarization ratio. The highest ROUGE score was produced by the IndoBERT-Lite-Base Phase 1 model, with a ROUGE-1 value of 1.00 for documents with the defendant's identity and 0.970 for documents without it at the 40% ratio. Future research is expected to explore other BERT variants such as IndoBERT Phase 2 and LegalBERT.
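The page does not include the authors' code, so the following is only a minimal, illustrative sketch of how an IndoBERT encoder could be used for ratio-based extractive summarization with ROUGE-1/2/3 scoring, as described in the abstract. The Hugging Face model id `indobenchmark/indobert-base-p1`, the regex sentence splitter, and the centroid-similarity sentence-scoring heuristic are assumptions made for illustration, not the paper's exact pipeline.

```python
# Hypothetical sketch: ratio-based extractive summarization with an IndoBERT encoder.
# The 40% ratio and the IndoBERT model family follow the paper; the scoring heuristic
# (cosine similarity to the document centroid) is an illustrative assumption.
import re
import torch
from transformers import AutoTokenizer, AutoModel

MODEL_NAME = "indobenchmark/indobert-base-p1"  # assumed HF id for IndoBERT-Base Phase 1
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)
model.eval()

def embed(sentences):
    """Mean-pooled token embeddings for each sentence."""
    batch = tokenizer(sentences, padding=True, truncation=True,
                      max_length=512, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state      # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1)       # (B, T, 1)
    return (hidden * mask).sum(1) / mask.sum(1)        # (B, H)

def summarize(document, ratio=0.4):
    """Keep the `ratio` fraction of sentences closest to the document centroid,
    preserving their original order."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", document) if s.strip()]
    if len(sentences) <= 1:
        return document
    emb = embed(sentences)
    centroid = emb.mean(0, keepdim=True)
    scores = torch.nn.functional.cosine_similarity(emb, centroid)
    k = max(1, int(round(ratio * len(sentences))))
    keep = sorted(scores.topk(k).indices.tolist())
    return " ".join(sentences[i] for i in keep)

# ROUGE-1/2/3 evaluation against a reference summary (rouge-score package).
from rouge_score import rouge_scorer
scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rouge3"], use_stemmer=False)
# scores = scorer.score(reference_summary, summarize(decision_text, ratio=0.4))
```

Swapping `MODEL_NAME` for `indobenchmark/indobert-lite-base-p1` (assumed id for IndoBERT-Lite-Base Phase 1) would correspond to the model variant the abstract reports as best at the 40% ratio.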
Source journal: JOIV International Journal on Informatics Visualization (Decision Sciences: Information Systems and Management)
CiteScore: 1.40
Self-citation rate: 0.00%
Articles per year: 100
Review time: 16 weeks