An overview of log likelihood ratio cost in forensic science – Where is it used and what values can we expect?

Stijn van Lierop, Daniel Ramos, Marjan Sjerps, Rolf Ypma

Forensic Science International: Synergy, Volume 8, Article 100466 (2024). DOI: 10.1016/j.fsisyn.2024.100466
Citations: 0
Abstract
There is increasing support for reporting evidential strength as a likelihood ratio (LR) and increasing interest in (semi-)automated LR systems. The log-likelihood ratio cost (Cllr) is a popular metric for such systems, penalizing misleading LRs more heavily the further they lie from 1. Cllr = 0 indicates a perfect system, while Cllr = 1 indicates an uninformative one. Beyond these two anchor points, however, what constitutes a "good" Cllr is unclear.
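The penalty structure described above corresponds to the standard Cllr definition (Brümmer and du Preez): the average log2 penalty over same-source comparisons (where the LR should be large) plus the average over different-source comparisons (where it should be small), halved. A minimal sketch in Python; the function name and interface are illustrative, not taken from the paper:

```python
import math

def cllr(lrs_same_source, lrs_diff_source):
    """Log-likelihood-ratio cost.

    lrs_same_source: LRs computed for same-source pairs (ideally >> 1).
    lrs_diff_source: LRs computed for different-source pairs (ideally << 1).
    Returns 0 for a perfect system and 1 for an uninformative one (all LRs = 1).
    """
    # Penalty for same-source pairs: large when the LR is misleadingly small.
    p_same = sum(math.log2(1 + 1 / lr) for lr in lrs_same_source) / len(lrs_same_source)
    # Penalty for different-source pairs: large when the LR is misleadingly large.
    p_diff = sum(math.log2(1 + lr) for lr in lrs_diff_source) / len(lrs_diff_source)
    return 0.5 * (p_same + p_diff)
```

An uninformative system that always outputs LR = 1 gives each term log2(2) = 1, hence Cllr = 1, matching the anchor points in the abstract; strongly discriminating LRs drive the cost toward 0.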
To provide guidance on when a Cllr can be considered "good", we surveyed 136 publications on (semi-)automated LR systems. The results show that use of Cllr depends heavily on the field; for example, it is absent in DNA analysis. Although publications on automated LR systems have increased over time, the proportion reporting Cllr has remained stable. Notably, reported Cllr values show no clear pattern and vary with the forensic area, the analysis, and the dataset.
As LR systems become more prevalent, comparing them becomes crucial. Such comparison is hampered by different studies using different datasets. We advocate the use of public benchmark datasets to advance the field.