In the process of developing and improving statistical models to address flaws in the examination and interpretation of highly selective fingermarks, the groundwork is being laid for a much broader and greater impact. This impact will arise from the use of these same improved statistical methods to exploit information from the examination of fingermarks with lower degrees of selectivity—those fingermarks traditionally considered to be devoid of evidentiary value. To the contrary, research has shown that fingermarks of lower selectivity have much to offer. They occur very frequently: much more often than those assessed to be sufficient for inclusion in existing fingerprint examination processes. In individual cases, they occur in locations and numbers that can provide important new information for investigators and additional routes to further investigation. As evidence contributing to proving a case, they can provide detailed activity-level information and new avenues to address the relevance and probative value of other direct and circumstantial evidence. The broader application of fingerprint models to these traditionally unused fingermarks of lower selectivity needs to be specifically developed and implemented to realize the contributions and to responsibly manage the risks and benefits.
{"title":"How the work being done on statistical fingerprint models provides the basis for a much broader and greater impact affecting many areas within the criminal justice system","authors":"David Stoney, Paul Stoney","doi":"10.1093/lpr/mgae008","DOIUrl":"https://doi.org/10.1093/lpr/mgae008","url":null,"abstract":"In the process of developing and improving statistical models to address flaws in the examination and interpretation of highly selective fingermarks, the groundwork is being laid for a much broader and greater impact. This impact will arise from the use of these same improved statistical methods to exploit information from the examination of fingermarks with lower degrees of selectivity—those fingermarks traditionally considered to be devoid of evidentiary value. To the contrary, research has shown that fingermarks of lower selectivity have much to offer. They occur very frequently: much more often than those assessed to be sufficient for inclusion in existing fingerprint examination processes. In individual cases, they occur in locations and numbers that can provide important new information for investigators and additional routes to further investigation. As evidence contributing to proving a case, they can provide detailed activity-level information and new avenues to address the relevance and probative value of other direct and circumstantial evidence. The broader application of fingerprint models to these traditionally unused fingermarks of lower selectivity needs to be specifically developed and implemented to realize the contributions and to responsibly manage the risks and benefits.","PeriodicalId":501426,"journal":{"name":"Law, Probability and Risk","volume":"3 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141500633","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Michael Rosenblum, Elizabeth T. Chin, Elizabeth L. Ogburn, Akihiko Nishimura, Daniel Westreich, Abhirup Datta, Susan Vanderplas, Maria Cuellar and William C. Thompson, 'Misuse of statistical method results in highly biased interpretation of forensic evidence in', Law, Probability and Risk, doi:10.1093/lpr/mgad010, published 1 January 2024.
We consider the forensic context in which the goal is to assess whether two sets of observed data came from the same source or from different sources. In particular, we focus on the situation in which the evidence consists of two sets of categorical count data: a set of event counts from an unknown source tied to a crime and a set of event counts generated by a known source. Using a same-source versus different-source hypothesis framework, we develop an approach to calculating a likelihood ratio. Under our proposed model, the likelihood ratio can be calculated in closed form, and we use this to theoretically analyse how the likelihood ratio is affected by how much data is observed, the number of event types being considered, and the prior used in the Bayesian model. Our work is motivated in particular by user-generated event data in digital forensics, a context in which relatively few statistical methodologies have yet been developed to support quantitative analysis of event data after it is extracted from a device. We evaluate our proposed method through experiments using three real-world event datasets, representing a variety of event types that may arise in digital forensics. The results of the theoretical analyses and experiments with real-world datasets demonstrate that while this model is a useful starting point for the statistical forensic analysis of user-generated event data, more work is needed before it can be applied for practical use.
{"title":"Likelihood ratios for categorical count data with applications in digital forensics","authors":"Rachel Longjohn, Padhraic Smyth, Hal S Stern","doi":"10.1093/lpr/mgac016","DOIUrl":"https://doi.org/10.1093/lpr/mgac016","url":null,"abstract":"We consider the forensic context in which the goal is to assess whether two sets of observed data came from the same source or from different sources. In particular, we focus on the situation in which the evidence consists of two sets of categorical count data: a set of event counts from an unknown source tied to a crime and a set of event counts generated by a known source. Using a same-source versus different-source hypothesis framework, we develop an approach to calculating a likelihood ratio. Under our proposed model, the likelihood ratio can be calculated in closed form, and we use this to theoretically analyse how the likelihood ratio is affected by how much data is observed, the number of event types being considered, and the prior used in the Bayesian model. Our work is motivated in particular by user-generated event data in digital forensics, a context in which relatively few statistical methodologies have yet been developed to support quantitative analysis of event data after it is extracted from a device. We evaluate our proposed method through experiments using three real-world event datasets, representing a variety of event types that may arise in digital forensics. The results of the theoretical analyses and experiments with real-world datasets demonstrate that while this model is a useful starting point for the statistical forensic analysis of user-generated event data, more work is needed before it can be applied for practical use.","PeriodicalId":501426,"journal":{"name":"Law, Probability and Risk","volume":"14 6 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-12-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138535351","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Previous methods for evaluating evidence from handwriting examinations have usually been tied to a redefinition of how these examinations are to be carried out. Here we propose a likelihood ratio method for handwriting evidence evaluation that is fully compatible with current handwriting examination protocols. The method focuses on the similarity between handwriting samples, quantified using the Jaccard index computed from the results of a standard forensic handwriting comparison. The numerator of the likelihood ratio is the probability of a given class of similarity, assuming that a given person wrote the questioned sample. The denominator is the probability of the same class of similarity, assuming that a randomly selected person wrote the questioned sample. The similarity distribution used to quantify the numerator is derived from comparisons across reference handwritings. To calculate the denominator, we propose developing similarity distributions relevant to particular forensic scenarios. In a proof-of-concept study, we developed the distribution for the simulation scenario.
{"title":"Likelihood ratio to evaluate handwriting evidence using similarity index","authors":"Jȩdrzej Wydra, Szymon Matuszewski","doi":"10.1093/lpr/mgac013","DOIUrl":"https://doi.org/10.1093/lpr/mgac013","url":null,"abstract":"Previous methods to evaluate evidence from handwriting examinations were usually associated with a redefinition of how these examinations are to be made. Here we propose the likelihood ratio method for handwriting evidence evaluation which is fully compatible with the current handwriting examination protocols. The method is focused on the similarity between handwriting samples, quantified using Jaccard index from results of a usual forensic handwriting comparison. The numerator of the likelihood ratio is the probability of a given class of similarity, assuming that a given person wrote the questioned sample. The denominator is the probability of the same class of similarity, assuming that a randomly selected person wrote questioned sample. The similarity distribution to quantify the numerator is derived from comparisons across reference handwritings. To calculate the denominator we propose to develop similarity distributions relevant for particular forensic scenarios. In the proof-of-a-concept study, we developed the distribution for the simulation scenario.","PeriodicalId":501426,"journal":{"name":"Law, Probability and Risk","volume":"95 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138543189","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This is the first in a series of occasional articles in Law, Probability & Risk: interviews with prominent statisticians who have contributed to the field. They will share their influences, achievements and expectations of where research in this area may head in the future.
{"title":"Interview with Professor Colin Aitken","authors":"Aitken C.","doi":"10.1093/lpr/mgac001","DOIUrl":"https://doi.org/10.1093/lpr/mgac001","url":null,"abstract":"<span>This is the first in a series of occasional articles in <span style=\"font-style:italic;\">Law, Probability & Risk</span>: interviews with prominent statisticians who have contributed to the field. They will share their influences, achievements and expectations of where research in this area may head in the future.</span>","PeriodicalId":501426,"journal":{"name":"Law, Probability and Risk","volume":"35 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138535349","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
For Bayesian inference to be useful to a court, it is essential that the priors used should be neutral between the parties. ‘Neutrality’ reflects the idea that the fact-finder would want the statistical analyses to be fair to both parties. It is neither the same as the legal designation of which party has the burden of proof with respect to a particular matter, nor the standard of proof that must be met for that party to prevail. The recent case of Idaho v. Ish raises the question of how to find such priors, particularly in a doubly constrained 2 × 2 table with a zero. This article re-examines this issue. It also offers reflection on whether, given a zero in the table (which here means that all members of a particular race or sex are excluded from jury service), it matters how many are excluded.
{"title":"Priors neutral between the parties: the Batson motion in Idaho v. Ish","authors":"Kadane J.","doi":"10.1093/lpr/mgab005","DOIUrl":"https://doi.org/10.1093/lpr/mgab005","url":null,"abstract":"<span><div>Abstract</div>For Bayesian inference to be useful to a court, it is essential that the priors used should be neutral between the parties. ‘Neutrality’ reflects the idea that the fact-finder would want the statistical analyses to be fair to both parties. It is neither the same as the legal designation of which party has the burden of proof with respect to a particular matter, nor the standard of proof that must be met for that party to prevail. The recent case of <span style=\"font-style:italic;\">Idaho</span> v. <span style=\"font-style:italic;\">Ish</span> raises the question of how to find such priors, particularly in a doubly constrained 2 × 2 table with a zero. This article re-examines this issue. It also offers reflection on whether, given a zero in the table (which here means that all members of a particular race or sex are excluded from jury service), it matters how many are excluded.</span>","PeriodicalId":501426,"journal":{"name":"Law, Probability and Risk","volume":"75 1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-02-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138535350","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This is a story of a lawsuit in Japan, about an alleged incident in America thirty years before. The focus of the analysis is comparing the rates of skips in ballpoint pen writing in a diary. Chernoff proposed several methods to address the comparison between the skips observed in different passages in the diary. I also give my own alternative analysis of the data.
{"title":"Comparing two probabilities: an essay in honour of Herman Chernoff","authors":"Kadane J.","doi":"10.1093/lpr/mgab006","DOIUrl":"https://doi.org/10.1093/lpr/mgab006","url":null,"abstract":"<span><div>Abstract</div>This is a story of a lawsuit in Japan, about an alleged incident in America thirty years before. The focus of the analysis is comparing the rates of skips in ballpoint pen writing in a diary. Chernoff proposed several methods to address the comparison between the skips observed in different passages in the diary. I also give my own alternative analysis of the data.</span>","PeriodicalId":501426,"journal":{"name":"Law, Probability and Risk","volume":"54 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138535348","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
What have been called 'Bayesian confirmation measures' or 'evidential support measures' offer a numerical expression for the impact of a piece of evidence on a judicial hypothesis of interest. The Bayes' factor, sometimes simply called the 'likelihood ratio', represents the best measure of the value of the evidence. It satisfies a number of necessary conditions of normative logical adequacy. It is shown that the same cannot be said for alternative expressions put forward in some legal and forensic quarters. A list of desiderata is given that supports the choice of the Bayes' factor as the best measure for quantifying the value of evidence.
{"title":"The Bayes’ factor: the coherent measure for hypothesis confirmation","authors":"Taroni F, Garbolino P, Bozza S, et al.","doi":"10.1093/lpr/mgab007","DOIUrl":"https://doi.org/10.1093/lpr/mgab007","url":null,"abstract":"<span><div>Abstract</div>What have been called ‘Bayesian confirmation measures’ or ‘evidential support measures’ offer a numerical expression for the impact of a piece of evidence on a judicial hypothesis of interest. The Bayes’ factor, sometimes simply called the ‘likelihood ratio’, represents the best measure of the value of the evidence. It satisfies a number of necessary conditions on normative logical adequacy. It is shown that the same cannot be said for alternative expressions put forward by some legal and forensic quarters. A list of desiderata are given that support the choice of the Bayes’ factor as the best measure for quantification of the value of evidence.</span>","PeriodicalId":501426,"journal":{"name":"Law, Probability and Risk","volume":"15 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138543188","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}