Automatic (near-) duplicate content document detection in a cancer registry

International Journal of Medical Informatics, Vol. 195, Article 105799 · IF 4.1 · Q2 (Computer Science, Information Systems) · CAS Tier 2 (Medicine) · Pub. date: 2025-03-01 (Epub 2025-01-18) · DOI: 10.1016/j.ijmedinf.2025.105799
Tapio Niemi, Jean Pierre Ghobril, Gautier Defossez, Simon Germann, Eloïse Martin, Jean-Luc Bulliard
Citations: 0

Abstract

Background

Duplicate and near-duplicate medical documents are problematic for document management, clinical use, and medical research. In this study, we focus on multisourced medical documents in the context of a population-based cancer registry in Switzerland. Although the data collection process is well regulated, the volume of transmitted documents increases steadily, and the presence of full or near-duplicates slows down and complicates document processing. Identifying near-duplicates is particularly challenging because the large number of documents makes exhaustive pairwise comparison infeasible.
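To make the scale concrete, the number of candidate pairs grows quadratically with the number of documents: with n documents there are n(n−1)/2 pairs. A minimal sketch using the corpus size reported in the Results section:

```python
# Quadratic blow-up of naive pairwise comparison:
# with n documents there are n * (n - 1) / 2 candidate pairs.
n = 224_398  # document count reported in the Results section
pairs = n * (n - 1) // 2
print(f"{pairs:,} candidate pairs")  # 25,177,119,003 candidate pairs
```

At tens of billions of pairs, even a fast per-pair comparison is impractical, which is what motivates fingerprint-based candidate filtering.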

Methods

We implemented a system based on standard hash functions, Simhash (a locality-sensitive hashing scheme), and Smith-Waterman text-alignment similarity. Simhash offers good performance, and confirming its results with the Smith-Waterman algorithm at a selected similarity threshold reduces the false positive rate to near zero without lowering sensitivity. Differences extracted from near-duplicate documents are displayed by highlighting them in the original PDF documents.
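The two techniques named above can be sketched as follows. This is an illustrative reimplementation, not the authors' code: the choice of MD5-derived 64-bit token hashes and the character-level Smith-Waterman scoring parameters (match +2, mismatch −1, gap −1) are our assumptions.

```python
import hashlib

def simhash(tokens, bits=64):
    """Simhash fingerprint: each token's hash votes +1/-1 per bit position;
    the sign of each accumulated component gives one fingerprint bit.
    Near-identical token sequences yield fingerprints with small Hamming distance."""
    v = [0] * bits
    for tok in tokens:
        # 64-bit token hash derived from MD5 (the underlying hash is arbitrary here)
        h = int.from_bytes(hashlib.md5(tok.encode("utf-8")).digest()[:8], "big")
        for i in range(bits):
            v[i] += 1 if (h >> i) & 1 else -1
    return sum(1 << i for i in range(bits) if v[i] > 0)

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

def smith_waterman(s, t, match=2, mismatch=-1, gap=-1):
    """Best local-alignment score between s and t
    (space-efficient DP, O(len(s) * len(t)) time, O(len(t)) memory)."""
    prev = [0] * (len(t) + 1)
    best = 0
    for cs in s:
        cur = [0]
        for j, ct in enumerate(t, 1):
            score = max(0,
                        prev[j - 1] + (match if cs == ct else mismatch),
                        prev[j] + gap,       # gap in t
                        cur[j - 1] + gap)    # gap in s
            best = max(best, score)
            cur.append(score)
        prev = cur
    return best
```

In a pipeline of this shape, only document pairs whose Simhash fingerprints lie within a small Hamming radius are re-checked with the quadratic Smith-Waterman comparison, whose normalized score is then tested against the similarity threshold.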
We validated the method using 3042 manually verified document pairs, containing 1252 full-duplicate and 398 near-duplicate pairs. The area under the curve (AUC) was 0.96, sensitivity 0.92, specificity 1.00, PPV 1.00, and NPV 0.91. For simulated data of the same size, the corresponding values were 0.86, 0.72, 1.00, 1.00, and 0.77, respectively.
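As a sanity check, the reported validation metrics are internally consistent under one reconstruction, which is our assumption rather than something stated in the abstract: the 1252 full-duplicate and 398 near-duplicate pairs are the positives, and the remaining pairs are negatives.

```python
# Reconstruction (our assumption): positives = full- + near-duplicate pairs.
total_pairs = 3042
pos = 1252 + 398          # 1650 duplicate or near-duplicate pairs
neg = total_pairs - pos   # 1392 non-duplicate pairs

tp = round(0.92 * pos)    # sensitivity 0.92 -> 1518 pairs detected
fn = pos - tp             # 132 pairs missed
fp = 0                    # specificity 1.00 and PPV 1.00 imply no false alarms
tn = neg

sensitivity = tp / (tp + fn)   # 0.92
specificity = tn / (tn + fp)   # 1.00
ppv = tp / (tp + fp)           # 1.00
npv = tn / (tn + fn)           # ~0.913, rounding to the reported 0.91
```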

Results

We applied the method to 224,398 medical documents in the cancer registry. At the text level, 5.5% of documents were full duplicates and 0.17–0.24% were near-duplicates, depending on the parameters and threshold values used. Most near-duplicate pairs related to the same patient and originated from the same transmitter. Manual evaluation showed that only 2% of the differences concerned medical content, while 83% concerned administrative data (21% patient data, 11% doctor data, and 51% other administrative data). Many near-duplicates looked strikingly similar from a human perspective.

Conclusions

We demonstrated that our method can efficiently find all full duplicates and most near-duplicates in a large set of multisourced medical documents. Potential ways to further improve the method are discussed. The method can be applied to documents in any domain.

Source journal
International Journal of Medical Informatics
CiteScore: 8.90
Self-citation rate: 4.10%
Articles per year: 217
Review time: 42 days
Journal description: International Journal of Medical Informatics provides an international medium for the dissemination of original results and interpretative reviews concerning the field of medical informatics. The Journal emphasizes the evaluation of systems in healthcare settings. The scope of the journal covers: information systems, including national or international registration systems, hospital information systems, departmental and/or physician's office systems, document handling systems, electronic medical record systems, standardization, systems integration, etc.; computer-aided medical decision support systems using heuristic, algorithmic and/or statistical methods as exemplified in decision theory, protocol development, artificial intelligence, etc.; educational computer-based programs pertaining to medical informatics or medicine in general; and organizational, economic, social, clinical impact, ethical and cost-benefit aspects of IT applications in health care.