From understanding to justifying: Computational reliabilism for AI-based forensic evidence evaluation

Juan M. Durán, David van der Vloed, Arnout Ruifrok, Rolf J.F. Ypma
{"title":"从理解到证明:基于人工智能的法医证据评估的计算可靠性","authors":"Juan M. Durán ,&nbsp;David van der Vloed ,&nbsp;Arnout Ruifrok ,&nbsp;Rolf J.F. Ypma","doi":"10.1016/j.fsisyn.2024.100554","DOIUrl":null,"url":null,"abstract":"<div><p>Techniques from artificial intelligence (AI) can be used in forensic evidence evaluation and are currently applied in biometric fields. However, it is generally not possible to fully understand how and why these algorithms reach their conclusions. Whether and how we should include such ‘black box’ algorithms in this crucial part of the criminal law system is an open question that has not only scientific but also ethical, legal, and philosophical angles. Ideally, the question should be debated by people with diverse backgrounds.</p><p>Here, we present a view on the question from the philosophy of science angle: computational reliabilism (CR). CR posits that we are justified in believing the output of an AI system, if we have grounds for believing its reliability. Under CR, these grounds are classified into ‘reliability indicators’ of three types: technical, scientific, and societal. This framework enables debates on the suitability of AI methods for forensic evidence evaluation that take a wider view than explainability and validation.</p><p>We argue that we are justified in believing the AI's output for forensic comparison of voices and forensic comparison of faces. Technical indicators include the validation of the AI algorithm in itself, validation of its application in the forensic setting, and case-based validation. Scientific indicators include the simple notion that we know faces and voices contain identifying information along with operationalizing well-established metrics and forensic practices. Societal indicators are the emerging scientific consensus on the use of these methods, as well as their application and interpretation by well-educated and certified practitioners. We expect expert witnesses to rely more on technical indicators to be justified in believing AIsystems, and triers-of-fact to rely more on societal indicators to believe the expert witness supported by the AIsystem.</p></div>","PeriodicalId":36925,"journal":{"name":"Forensic Science International: Synergy","volume":"9 ","pages":"Article 100554"},"PeriodicalIF":0.0000,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2589871X24001013/pdfft?md5=7d0bd2cb83ab31d103d83126ff71d976&pid=1-s2.0-S2589871X24001013-main.pdf","citationCount":"0","resultStr":"{\"title\":\"From understanding to justifying: Computational reliabilism for AI-based forensic evidence evaluation\",\"authors\":\"Juan M. Durán ,&nbsp;David van der Vloed ,&nbsp;Arnout Ruifrok ,&nbsp;Rolf J.F. Ypma\",\"doi\":\"10.1016/j.fsisyn.2024.100554\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>Techniques from artificial intelligence (AI) can be used in forensic evidence evaluation and are currently applied in biometric fields. However, it is generally not possible to fully understand how and why these algorithms reach their conclusions. Whether and how we should include such ‘black box’ algorithms in this crucial part of the criminal law system is an open question that has not only scientific but also ethical, legal, and philosophical angles. 
Ideally, the question should be debated by people with diverse backgrounds.</p><p>Here, we present a view on the question from the philosophy of science angle: computational reliabilism (CR). CR posits that we are justified in believing the output of an AI system, if we have grounds for believing its reliability. Under CR, these grounds are classified into ‘reliability indicators’ of three types: technical, scientific, and societal. This framework enables debates on the suitability of AI methods for forensic evidence evaluation that take a wider view than explainability and validation.</p><p>We argue that we are justified in believing the AI's output for forensic comparison of voices and forensic comparison of faces. Technical indicators include the validation of the AI algorithm in itself, validation of its application in the forensic setting, and case-based validation. Scientific indicators include the simple notion that we know faces and voices contain identifying information along with operationalizing well-established metrics and forensic practices. Societal indicators are the emerging scientific consensus on the use of these methods, as well as their application and interpretation by well-educated and certified practitioners. We expect expert witnesses to rely more on technical indicators to be justified in believing AIsystems, and triers-of-fact to rely more on societal indicators to believe the expert witness supported by the AIsystem.</p></div>\",\"PeriodicalId\":36925,\"journal\":{\"name\":\"Forensic Science International: Synergy\",\"volume\":\"9 \",\"pages\":\"Article 100554\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.sciencedirect.com/science/article/pii/S2589871X24001013/pdfft?md5=7d0bd2cb83ab31d103d83126ff71d976&pid=1-s2.0-S2589871X24001013-main.pdf\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Forensic Science International: Synergy\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2589871X24001013\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"Social Sciences\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Forensic Science International: Synergy","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2589871X24001013","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"Social Sciences","Score":null,"Total":0}

Abstract


Techniques from artificial intelligence (AI) can be used in forensic evidence evaluation and are currently applied in biometric fields. However, it is generally not possible to fully understand how and why these algorithms reach their conclusions. Whether and how we should include such ‘black box’ algorithms in this crucial part of the criminal law system is an open question that has not only scientific but also ethical, legal, and philosophical angles. Ideally, the question should be debated by people with diverse backgrounds.

Here, we present a view on the question from the philosophy of science angle: computational reliabilism (CR). CR posits that we are justified in believing the output of an AI system, if we have grounds for believing its reliability. Under CR, these grounds are classified into ‘reliability indicators’ of three types: technical, scientific, and societal. This framework enables debates on the suitability of AI methods for forensic evidence evaluation that take a wider view than explainability and validation.

We argue that we are justified in believing the AI's output for forensic comparison of voices and forensic comparison of faces. Technical indicators include the validation of the AI algorithm itself, validation of its application in the forensic setting, and case-based validation. Scientific indicators include the simple notion that we know faces and voices contain identifying information, along with operationalizing well-established metrics and forensic practices. Societal indicators are the emerging scientific consensus on the use of these methods, as well as their application and interpretation by well-educated and certified practitioners. We expect expert witnesses to rely more on technical indicators to be justified in believing AI systems, and triers-of-fact to rely more on societal indicators to believe the expert witness supported by the AI system.
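
As a concrete illustration of the kind of technical reliability indicator the abstract refers to, the sketch below computes the log-likelihood-ratio cost (Cllr), a well-established validation metric in forensic voice and face comparison. This is a minimal example with hypothetical likelihood-ratio values; it is not code or data from the paper.

```python
# Minimal sketch of a technical reliability indicator: the log-likelihood-ratio
# cost (Cllr), a standard metric for validating forensic comparison systems
# that output likelihood ratios (LRs). The LR values below are hypothetical.
import math

def cllr(same_source_lrs, different_source_lrs):
    """Log-likelihood-ratio cost; lower is better, 1.0 means no useful information."""
    # Penalty for same-source pairs grows as the LR (wrongly) drops below 1.
    ss_term = sum(math.log2(1.0 + 1.0 / lr) for lr in same_source_lrs) / len(same_source_lrs)
    # Penalty for different-source pairs grows as the LR (wrongly) rises above 1.
    ds_term = sum(math.log2(1.0 + lr) for lr in different_source_lrs) / len(different_source_lrs)
    return 0.5 * (ss_term + ds_term)

# Hypothetical validation set: LRs the system produced for pairs with known ground truth.
same_source = [35.0, 120.0, 8.0, 60.0]       # pairs that truly share a source
different_source = [0.02, 0.5, 0.1, 0.008]   # pairs from different sources

print(f"Cllr = {cllr(same_source, different_source):.3f}")
```

In a validation study, the system's Cllr on ground-truth pairs summarizes in a single number how discriminating and well calibrated its likelihood ratios are, which is one way such metrics can be operationalized as evidence of reliability.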
