Heleen Coreelman, Jannick De Tobel, Thomas Widek, Martin Urschler, Steffen Fieuws, Patrick W Thevissen, Koenraad L Verstraete
Multifactorial age estimation is preferred over methods based on a single anatomical site. The main challenge of multifactorial methods lies in calculating the overall prediction interval. This study compared the performance of two approaches to achieve this: the minimal age principle versus a Bayesian approach. MRI of the third molars, the left hand/wrist, and the sternal extremities of both clavicles was prospectively performed in 335 healthy Austrian Caucasian males aged 13-24 years. Development was staged according to De Tobel et al. (Multi-factorial age estimation: A Bayesian approach combining dental and skeletal magnetic resonance imaging. Forensic Sci Int. 2020;306:110054). Applying the minimal age principle rendered a mean absolute error of 1.47 years, a root mean square error of 1.81 years, a mean width of the 95% prediction interval (PI) of 4.44 ± 2.49 years, and a coverage of 68.7%. For the Bayesian approach, the results were 1.41, 1.80, 5.15 ± 1.94 years, and 81.5%, respectively. Higher inconsistency between the different age indicators was linked to a lower coverage probability with the minimal age principle, but not with the Bayesian approach. Moreover, higher inconsistency between age indicators was also linked to a higher probability of obtaining an impossible PI with the minimal age principle. Furthermore, applying the minimal age principle rendered 97.9%/81.0% correctly categorized adults (based on the point prediction of age/based on the PI) and 69.2%/85.6% correctly categorized minors. For the Bayesian approach, the results were 95.2%/76.2% and 81.5%/95.9%, respectively. In conclusion, the Bayesian approach outperformed the minimal age principle for multifactorial forensic age estimation, allowing more appropriate PIs to be constructed and more minors to be categorized correctly.
{"title":"Minimal age principle versus Bayesian approach to combine age indicators from magnetic resonance imaging for multifactorial forensic age estimation.","authors":"Heleen Coreelman, Jannick De Tobel, Thomas Widek, Martin Urschler, Steffen Fieuws, Patrick W Thevissen, Koenraad L Verstraete","doi":"10.1111/1556-4029.70270","DOIUrl":"https://doi.org/10.1111/1556-4029.70270","url":null,"abstract":"<p><p>Multifactorial age estimation is preferred over methods based on a single anatomical site. The main challenge of the multifactorial methods lies in calculating the overall prediction interval. This study compared the performance of two approaches to achieve this: the minimal age principle versus a Bayesian approach. MRI of the third molars, left hand/wrist, and sternal extremity of both clavicles were prospectively conducted in 335 healthy Austrian Caucasian males aged 13-24 years. Development was staged according to De Tobel et al. Multi-factorial age estimation: A Bayesian approach combining dental and skeletal magnetic resonance imaging. Forensic Sci Int. 2020;306:110054. Applying the minimal age principle rendered a mean absolute error of 1.47 years, root mean square error of 1.81 years, mean width of the 95% prediction interval (PI) of 4.44 ± 2.49 years, and coverage of 68.7%. For the Bayesian approach, the results were 1.41, 1.80, 5.15 ± 1.94 years, and 81.5%, respectively. Higher inconsistency between the different age indicators was linked to a lower coverage probability in the minimal age principle, but not in the Bayesian approach. Moreover, higher inconsistency between age indicators was also linked to a higher probability of obtaining an impossible PI with the minimal age principle. Furthermore, applying the minimal age principle rendered 97.9%/81.0% correctly categorized adults (based on the point prediction of age/based on the PI) and 69.2%/85.6% correctly categorized minors. For the Bayesian approach, the results were 95.2%/76.2% and 81.5%/95.9%, respectively. In conclusion, the Bayesian approach outperformed the minimal age principle for multifactorial forensic age estimation, allowing the construction of more appropriate PIs and more correctly categorized minors.</p>","PeriodicalId":94080,"journal":{"name":"Journal of forensic sciences","volume":" ","pages":""},"PeriodicalIF":1.8,"publicationDate":"2026-02-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146151563","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Kale Taib Karim, Dler Abdulrahman Mohammad, Mohammed Taha Ahmed Baban
Estimating sex is a critical step in the identification of unknown human remains, reducing the pool of potential matches by approximately 50%. Among the anatomical structures used for this purpose, human teeth hold particular value due to their structural durability and pronounced sexual dimorphism, making them a preferred focus of forensic research. In recent years, intraoral scanners and machine learning algorithms have emerged as powerful tools in forensic investigations, offering high accuracy and efficiency. This study integrates these technologies to estimate sex from measurements of the upper six anterior teeth and maxillary arch dimensions. Linear measurements of maxillary anterior teeth and upper arch dimensions were obtained from digital impressions of 100 male and 100 female subjects using 3D Slicer software. These features were analyzed using six different machine learning models to predict sex. The cervico-to-cusp tip linear measurements of the right and left canines demonstrated the highest discriminative power, with area under the curve values of 0.968 and 0.947, respectively. Among the machine learning models tested, the Support Vector Classifier achieved the highest mean prediction accuracy of 94.5% as estimated by nested cross-validation. This methodology shows strong potential for accurate sex estimation in forensic contexts. Further research with larger, more diverse samples is recommended to validate and enhance the generalizability of these findings.
{"title":"Sex estimation based on odontometry of maxillary anterior teeth and arch dimensions predicted by machine learning algorithms.","authors":"Kale Taib Karim, Dler Abdulrahman Mohammad, Mohammed Taha Ahmed Baban","doi":"10.1111/1556-4029.70282","DOIUrl":"https://doi.org/10.1111/1556-4029.70282","url":null,"abstract":"<p><p>Estimating sex is a critical step in the identification of unknown human remains, reducing the pool of potential matches by approximately 50%. Among the anatomical structures used for this purpose, human teeth hold particular value due to their structural durability and pronounced sexual dimorphism, making them a preferred focus of forensic research. In recent years, intraoral scanners and machine learning algorithms have emerged as powerful tools in forensic investigations, offering high accuracy and efficiency. This study integrates these technologies to estimate sex from measurements of the upper six anterior teeth and maxillary arch dimensions. Linear measurements of maxillary anterior teeth and upper arch dimensions were obtained from digital impressions of 100 male and 100 female subjects using 3D Slicer software. These features were analyzed using six different machine learning models to predict sex. The cervico-to-cusp tip linear measurements of the right and left canines demonstrated the highest discriminative power, with area under the curve values of 0.968 and 0.947, respectively. Among the machine learning models tested, the Support Vector Classifier achieved the highest mean prediction accuracy of 94.5% as estimated by nested cross-validation. This methodology shows strong potential for accurate sex estimation in forensic contexts. Further research with larger, more diverse samples is recommended to validate and enhance the generalizability of these findings.</p>","PeriodicalId":94080,"journal":{"name":"Journal of forensic sciences","volume":" ","pages":""},"PeriodicalIF":1.8,"publicationDate":"2026-02-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146145326","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Arthur J Funnell, Panayiotis Petousis, Fabrice Harel-Canada, Ruby Romero, Alex A T Bui, Adam Koncsol, Hritika Chaturvedi, Chelsea Shover, David Goodman-Meza
The rising rate of drug-related deaths in the United States, largely driven by fentanyl, requires timely and accurate surveillance. However, critical overdose data are often buried in free-text coroner reports, leading to delays and information loss when coded into International Classification of Diseases, Tenth Revision (ICD-10) categories. Natural language processing (NLP) models may automate and enhance overdose surveillance, but prior applications have been limited. A dataset of 35,433 death records from multiple US jurisdictions in 2020 was used for model training and internal testing. External validation was conducted using a separate dataset of 3335 records from 2023 to 2024. Multiple NLP approaches were evaluated for classifying specific drug involvement from unstructured death certificate text. These included traditional single- and multi-label classifiers, as well as fine-tuned encoder-only language models such as Bidirectional Encoder Representations from Transformers (BERT) and BioClinicalBERT, and contemporary decoder-only large language models (LLMs) such as Qwen 3 and Llama 3. Model performance was assessed using macro-averaged F1 scores, and 95% confidence intervals were calculated to quantify uncertainty. Fine-tuned BioClinicalBERT models achieved near-perfect performance, with macro F1 scores ≥0.998 on the internal test set. External validation confirmed robustness (macro F1 = 0.966), outperforming conventional machine learning, general-domain BERT models, and various decoder-only LLMs. NLP models, particularly fine-tuned clinical variants such as BioClinicalBERT, offer a highly accurate and scalable solution for overdose death classification from free-text reports. These methods can significantly accelerate surveillance workflows, overcoming the limitations of manual ICD-10 coding and supporting near real-time detection of emerging substance use trends.
{"title":"Improving drug identification in overdose death surveillance by using clinical natural language processing models.","authors":"Arthur J Funnell, Panayiotis Petousis, Fabrice Harel-Canada, Ruby Romero, Alex A T Bui, Adam Koncsol, Hritika Chaturvedi, Chelsea Shover, David Goodman-Meza","doi":"10.1111/1556-4029.70281","DOIUrl":"https://doi.org/10.1111/1556-4029.70281","url":null,"abstract":"<p><p>The rising rate of drug-related deaths in the United States, largely driven by fentanyl, requires timely and accurate surveillance. However, critical overdose data are often buried in free-text coroner reports, leading to delays and information loss when coded into ICD (International Classification of Disease)-10 classifications. Natural language processing (NLP) models may automate and enhance overdose surveillance, but prior applications have been limited. A dataset of 35,433 death records from multiple US jurisdictions in 2020 was used for model training and internal testing. External validation was conducted using a novel separate dataset of 3335 records from 2023 to 2024. Multiple NLP approaches were evaluated for classifying specific drug involvement from unstructured death certificate text. These included traditional single- and multi-label classifiers, as well as fine-tuned encoder-only language models such as Bidirectional Encoder Representations from Transformers (BERT) and BioClinicalBERT, and contemporary decoder-only large language models (LLMs) such as Qwen 3 and Llama 3. Model performance was assessed using macro-averaged F<sub>1</sub> scores, and 95% confidence intervals were calculated to quantify uncertainty. Fine-tuned BioClinicalBERT models achieved near-perfect performance, with macro F<sub>1</sub> scores ≥0.998 on the internal test set. External validation confirmed robustness (macro F<sub>1</sub> = 0.966), outperforming conventional machine learning, general-domain BERT models, and various decoder-only LLMs. NLP models, particularly fine-tuned clinical variants like BioClinicalBERT, offer a highly accurate and scalable solution for overdose death classification from free-text reports. These methods can significantly accelerate surveillance workflows, overcoming the limitations of manual ICD-10 coding and supporting near real-time detection of emerging substance use trends.</p>","PeriodicalId":94080,"journal":{"name":"Journal of forensic sciences","volume":" ","pages":""},"PeriodicalIF":1.8,"publicationDate":"2026-02-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146145348","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Radiation-induced hemorrhagic cystitis (RHC) is a severe complication of pelvic radiotherapy, often used to treat various pelvic malignancies. Despite multiple therapeutic options, including conservative and invasive interventions, the optimal management remains uncertain. We report the case of a 76-year-old male with pulmonary emphysema and a history of prostate cancer treated with radiotherapy, who developed refractory RHC. During a hemostatic transurethral resection of a bladder tumor, autologous fibrin glue was applied via aerosol. Shortly after, the patient experienced sudden cardiorespiratory arrest and died. Post-mortem computed tomography (PMCT) revealed extensive intravascular gas in the heart and cerebral vessels, confirming fatal air embolism. No gas was identified in the pulmonary arteries, and autopsy findings excluded structural cardiac anomalies such as a patent foramen ovale. These results support the hypothesis of a right-to-left functional pulmonary shunt, a mechanism in which venous gas bypasses the pulmonary filter through intrapulmonary arteriovenous anastomoses. Pulmonary emphysema, present in this case, may have contributed by impairing alveolar-capillary integrity and reducing vascular filtration capacity. Additionally, bladder adhesions observed at autopsy likely reduced bladder compliance, facilitating air entry during glue application. This is, to our knowledge, the first documented case of fatal air embolism following aerosolized fibrin glue use for RHC, confirmed by both PMCT and autopsy. The case highlights the need for caution when using aerosolized hemostatic agents in patients with predisposing factors such as bladder adhesions and obstructive pulmonary diseases. Furthermore, it demonstrates the essential role of PMCT in identifying embolic complications and determining the cause of death in forensic settings.
{"title":"Post-mortem CT detection of fatal air embolism after aerosolized fibrin glue for bladder bleeding.","authors":"Beatrice Benedetti, Nazario Foschi, Caterina Pesaresi, Tommaso Tartaglione, Matteo Mancino, Alberto Chighine, Fabio De-Giorgio","doi":"10.1111/1556-4029.70278","DOIUrl":"https://doi.org/10.1111/1556-4029.70278","url":null,"abstract":"<p><p>Radiation-induced hemorrhagic cystitis (RHC) is a severe complication of pelvic radiotherapy, often used to treat various pelvic malignancies. Despite multiple therapeutic options, including conservative and invasive interventions, the optimal management remains uncertain. We report the case of a 76-year-old male with pulmonary emphysema and a history of prostate cancer treated with radiotherapy, who developed refractory RHC. During a hemostatic transurethral resection of a bladder tumor, autologous fibrin glue was applied via aerosol. Shortly after, the patient experienced sudden cardiorespiratory arrest and died. Post-mortem computed tomography (PMCT) revealed extensive intravascular gas in the heart and cerebral vessels, confirming fatal air embolism. No gas was identified in the pulmonary arteries, and autopsy findings excluded structural cardiac anomalies such as a patent foramen ovale. These results support the hypothesis of a right-to-left functional pulmonary shunt, a mechanism in which venous gas bypasses the pulmonary filter through intrapulmonary arteriovenous anastomoses. Pulmonary emphysema, present in this case, may have contributed by impairing alveolar-capillary integrity and reducing vascular filtration capacity. Additionally, bladder adhesions observed at autopsy likely reduced bladder compliance, facilitating air entry during glue application. This is, to our knowledge, the first documented case of fatal air embolism following aerosolized fibrin glue use for RHC, confirmed by both PMCT and autopsy. The case highlights the need for caution when using aerosolized hemostatic agents in patients with predisposing factors such as bladder adhesions and obstructive pulmonary diseases. Furthermore, it demonstrates the essential role of PMCT in identifying embolic complications and determining the cause of death in forensic settings.</p>","PeriodicalId":94080,"journal":{"name":"Journal of forensic sciences","volume":" ","pages":""},"PeriodicalIF":1.8,"publicationDate":"2026-02-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146128022","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Body-worn cameras document crime scenes during the initial law enforcement response, yet their potential for forensic reconstruction has not been empirically validated. Despite expanding global adoption, recorded video primarily serves qualitative documentation rather than quantitative measurement applications. This study empirically evaluated three-dimensional (3D) reconstruction accuracy from body-worn camera video to assess its feasibility for feature measurement. Three Axon camera models, the Body 2 (AB2), Flex 2 (AF2), and Body 3 (AB3), were tested in an outdoor parking lot, with each model recording five videos at both 720p and 1080p resolutions (n = 30). Videos were recorded under controlled experimental conditions to achieve optimized documentation scenarios. Videos were processed using 3DF Zephyr photogrammetry software to create 3D reconstructions, then compared against Faro Focus S350 laser scanner ground truth at three distances: long (12.48 m), medium (2.42 m), and short (0.24 m). One-sample t-tests revealed significant differences between AF2 measurements and the ground truth (p < 0.05), with a maximum mean error of 14.42 cm at 720p for long distances. AB2 and AB3 showed no significant differences from the ground truth at either resolution across all validation distances (p ≥ 0.05). Two-sample t-tests demonstrated no significant differences between resolutions (p ≥ 0.05). Single-factor ANOVAs indicated significant differences between camera models (p < 0.05). Resolution did not affect measurement accuracy under the controlled methodology and internal software interpolation used here. These best-case results demonstrate that, with deliberate documentation protocols, accurate 3D reconstruction from body-worn camera video is achievable for forensic applications.
{"title":"3D scene reconstruction from body-worn camera video using 3DF Zephyr.","authors":"Yuening Chen, Eugene Liscio","doi":"10.1111/1556-4029.70283","DOIUrl":"https://doi.org/10.1111/1556-4029.70283","url":null,"abstract":"<p><p>Body-worn cameras document crime scenes during initial law enforcement response, yet their potential for forensic reconstruction has not been empirically validated. Despite expanding global adoption, recorded video primarily serves qualitative documentation rather than quantitative measurement applications. This study empirically evaluated three-dimensional (3D) reconstruction accuracy from body-worn camera video to assess its feasibility for feature measurement. Three Axon camera models-Body 2 (AB2), Flex 2 (AF2), and Body 3 (AB3)-were tested in an outdoor parking lot, with each model recording five videos at both 720P and 1080P resolutions (n = 30). Videos were recorded under controlled experimental conditions to achieve optimized documentation scenarios. Videos were processed using 3DF Zephyr photogrammetry software to create 3D reconstructions, then compared against Faro Focus S350 laser scanner ground truth at three distances: long (12.48 m), medium (2.42 m), and short (0.24 m). One-sample t-tests revealed significant differences between AF2 measurements and ground truth (p < 0.05), with a maximum mean error of 14.42 cm at 720P for long distances. AB2 and AB3 showed no significant differences from the ground truth at both resolutions across all validation distances (p ≥ 0.05). Two-sample t-tests demonstrated no significant differences between resolutions (p ≥ 0.05). Single-factor ANOVAs indicated significant differences between camera models (p < 0.05). Resolution did not affect measurement accuracy under the conditions of the controlled methodology and internal software interpolation. These best-case results demonstrate that with deliberate documentation protocols, accurate 3D reconstruction from body-worn camera video is achievable for forensic applications.</p>","PeriodicalId":94080,"journal":{"name":"Journal of forensic sciences","volume":" ","pages":""},"PeriodicalIF":1.8,"publicationDate":"2026-02-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146128067","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sex crime investigations often rely on evidence containing minimal amounts of seminal material, making it necessary to use sensitive biomarkers to detect semen. Owing to its high concentration in seminal fluid, prostate-specific antigen (PSA) has been extensively utilized as a forensic marker, but there remains a lack of consensus regarding its diagnostic cut-off value. The technique proposed in this study applies a tiered diagnostic algorithm that combines a highly sensitive screening assay with a highly specific confirmatory assay. The objective was to validate PSA quantification as a screening tool and to establish an optimal cut-off value based on the receiver operating characteristic curve. A total of 460 forensic samples from sex crime investigations were analyzed for PSA quantification using an electrochemiluminescence immunoassay (ECLIA). Optical microscopy was used as the reference standard to detect spermatozoa. Receiver operating characteristic curve analysis established a cut-off value of 0.085 ng/mL, with an area under the curve (AUC) of 0.848, a sensitivity of 82.8%, and a negative predictive value of 92.8%, showing diagnostic performance in line with international standards. The established cut-off value was lower than those previously documented and increased the detection of potential semen, doubling the number of positive identifications. PSA detection is particularly relevant in child victims, given that endogenous secretion begins around the age of 9; its presence in children, even at minimal levels, may be indicative of adult male semen. These findings confirm the role of PSA as a sensitive and reliable screening test in forensic diagnostics.
{"title":"PSA and ROC curve: Validation and cutoff for forensic sexual assault cases through sequential testing.","authors":"Francesca Jimeno Ruff","doi":"10.1111/1556-4029.70268","DOIUrl":"https://doi.org/10.1111/1556-4029.70268","url":null,"abstract":"<p><p>Sex crime investigations often rely on evidence involving minimal amounts of seminal material, making it necessary to use sensitive biomarkers to detect semen. Thanks to its high concentration, prostate-specific antigen (PSA) has been extensively utilized as a forensic marker, but there remains a lack of consensus regarding its diagnostic cut-off value. The technique proposed in this study applies a tiered diagnostic algorithm that combines a highly sensitive screening assay with a highly specific assay. The objective was to validate PSA quantification as a screening tool and establish an optimal cut-off value based on the receiver operating characteristic curve. A total of 460 forensic samples from sex crime investigations were analyzed for PSA quantification using electrochemiluminescence immunoassay (ECLIA). Optical microscopy was used as the reference standard to detect spermatozoa. Receiver operating characteristic curve analysis established a cut-off value of 0.085 ng/mL, with an area under the curve (AUC) of 0.848, a sensitivity of 82.8%, and a negative predictive value of 92.8%, showing diagnostic performance in line with international standards. The established cut-off value was lower than those previously documented and made it possible to increase the detection of potential semen in samples, doubling the number of positive identifications. In child victims, PSA detection is particularly relevant, given that endogenous secretion begins around the age of 9. Presence in children, even at minimal levels, may be indicative of adult male semen. These findings confirm the role of PSA as a sensitive and reliable screening test in forensic diagnostics.</p>","PeriodicalId":94080,"journal":{"name":"Journal of forensic sciences","volume":" ","pages":""},"PeriodicalIF":1.8,"publicationDate":"2026-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146115043","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ji-Woo Lee, Hye-Seon Cho, Ha-Eun Cha, Jooree Seo, Si-Keun Lim
Forensic evidence recovered from crime scenes often contains a mixture of human and bacterial DNA. Although short tandem repeat (STR) profiling of genomic DNA (gDNA) is widely used for human identification, its effectiveness can be limited in cases involving highly degraded DNA. In such cases, human mitochondrial DNA (mtDNA) and microbiome analysis may serve as alternative methods. In this study, we developed a multiplex quantification assay targeting the bacterial 16S rRNA V7 region and the human mitochondrial NADH dehydrogenase subunit 5 (ND5) gene. Quantification was performed using TaqMan-based real-time PCR (Human-Bacteria qPCR; HBQ) and droplet digital PCR (Human-Bacteria ddPCR; HBD). Optimal primer and probe concentrations were 7 μM for the HBQ assay; for the HBD assay, they were 5 μM for the bacterial primer set, 7 μM for the human mtDNA primer set, and 700 nM for the probes. Sensitivity testing showed that the HBQ assay detected all DNA samples except G147A down to 20 fg, while the HBD assay detected both bacterial and human DNA at 20 fg, demonstrating higher analytical sensitivity than the real-time PCR method. Moreover, mock forensic samples were analyzed to confirm the applicability of the assays, and PCR inhibitor tolerance tests using humic acid and tannic acid were conducted to further validate their performance. The HBQ and HBD assays may also be used in quality control processes for samples potentially affected by bacterial DNA or human mtDNA contamination and could be applied to other fields such as food safety, environmental science, and biological research involving microbial DNA and human mtDNA.
{"title":"Development of real-time PCR and droplet digital PCR assays for the simultaneous quantification of bacterial and human mitochondrial DNA for forensic analysis.","authors":"Ji-Woo Lee, Hye-Seon Cho, Ha-Eun Cha, Jooree Seo, Si-Keun Lim","doi":"10.1111/1556-4029.70276","DOIUrl":"https://doi.org/10.1111/1556-4029.70276","url":null,"abstract":"<p><p>Forensic evidence recovered from crime scenes often contains a mixture of human and bacterial DNA. Although short tandem repeat (STR) profiling of genomic DNA (gDNA) is widely used for human identification, its effectiveness can be limited in cases involving highly degraded DNA. In such cases, human mitochondrial DNA (mtDNA) and microbiome analysis may serve as alternative methods. In this study, we developed a multiplex quantification assay targeting the bacterial 16S rRNA V7 region and the human mitochondrial NADH-dehydrogenase subunit 5 (ND5) gene. Quantification was performed using TaqMan-based real-time PCR (Human-Bacteria qPCR; HBQ) and droplet digital PCR (Human-Bacteria ddPCR; HBD). Optimal primer and probe concentrations were at 7 μM for the HBQ assay, and 5 μM bacterial primer set, 7 μM human mtDNA primer set, and 700 nM probes for the HBD assay. Sensitivity testing showed that the HBQ assay detected all DNA samples-except G147A-down to 20 fg, while the HBD assay detected both bacterial and human DNA at 20 fg, demonstrating higher analytical sensitivity than the real-time PCR method. Moreover, mock forensic samples were analyzed to confirm the assay applicability, and PCR inhibitor tolerance tests using humic acid and tannic acid were conducted to further validate their performance. Furthermore, the HBQ and HBD assays may be used in quality control processes for samples potentially affected by bacterial DNA or human mtDNA contamination and could also be applied to other fields such as food safety, environmental science, and biological research involving microbial DNA and human mtDNA.</p>","PeriodicalId":94080,"journal":{"name":"Journal of forensic sciences","volume":" ","pages":""},"PeriodicalIF":1.8,"publicationDate":"2026-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146115107","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sex estimation methods from the pelvis have been well studied in research settings to estimate accuracy, error, and bias. However, patterns in casework are minimally described. We offer a unique retrospective examination of forensic anthropology casework in the United States for the Phenice and Klales et al. sex estimation methods. Our hypothesis is that casework patterns will reflect the broader literature derived from research settings, which shows that Phenice's method is more accurate and has lower error and sex bias. We use the publicly available Forensic Anthropology Database for Assessing Methods Accuracy. A sample of 229 cases from the United States reported the outcomes of applying these methods. McNemar's tests evaluated whether estimated sex was consistent with documented sex, and a Fisher's exact test compared the performance of the two methods. We further calculated the accuracy, error, and sex bias of each method. Neither the McNemar's tests nor the Fisher's exact test was statistically significant, indicating that both methods estimated sex at a rate close to the documented sex and performed similarly to each other. Phenice's method displayed an accuracy of 99.4%, an error of 0.6%, and a sex bias of -2.4%. The Klales et al. method performed slightly lower, with 97.5% accuracy, 2.5% error, and a 3.5% sex bias. Forensic anthropology casework in the United States thus reflects the broader patterns in accuracy, error, and bias reported from research settings, with Phenice outperforming the Klales et al. method, although the casework values probably reflect practitioners drawing on information beyond the reported method to reach a final sex estimate.
{"title":"Patterns in the Phenice (1969) and Klales et al. (2012) methods of sex estimation using forensic casework from the United States.","authors":"Nayeli A Zermeño, K Godde","doi":"10.1111/1556-4029.70279","DOIUrl":"https://doi.org/10.1111/1556-4029.70279","url":null,"abstract":"<p><p>Sex estimation methods from the pelvis have been well-studied in research settings to estimate accuracy, error, and bias. However, patterns in casework are minimally described. We uniquely examine forensic anthropology casework in the United States retrospectively for the Phenice and Klales et al.'s sex estimation methods. Our hypothesis is that casework patterns will reflect the greater literature derived from research settings that show Phenice's method is more accurate and has lower error and sex bias. We use the publicly available Forensic Anthropology Database for Assessing Methods Accuracy. A sample of 229 cases from the United States reported the outcomes of applying these methods. McNemar's tests evaluate whether estimated sex is consistent with documented sex, and a Fisher's exact test compared the performance of the two methods. We further calculated accuracy, error, and sex biases of the methods. The McNemar's and Fisher's exact tests were not statistically significant, which indicates that both methods estimated sex at a rate close to the documented sex and to each other. Phenice's method displayed an accuracy of 99.4%, an error of 0.6%, and a sex bias of -2.4%. Alternatively, the Klales et al.'s method performed slightly lower with a 97.5% accuracy, 2.5% error, and 3.5% sex bias. Forensic anthropology casework in the United States reflects broader patterns in accuracy, error, and bias in the research setting literature, where Phenice outperforms the Klales et al.'s method, despite the values from casework probably reflecting practitioners using information beyond the method reported to make a final sex estimate.</p>","PeriodicalId":94080,"journal":{"name":"Journal of forensic sciences","volume":" ","pages":""},"PeriodicalIF":1.8,"publicationDate":"2026-02-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146109242","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The demand for analyzing images from sources such as closed-circuit television cameras has increased significantly. Conventional analyses, including gait and soft biometrics, typically require two video clips for comparison, as these methods are predicated on video-to-video comparisons. Moreover, numerous prerequisites often limit their applicability, particularly in the field of gait biometrics. To address these limitations, this paper introduces a simple yet effective image-to-person comparison method that leverages image reproduction from a structure-from-motion (SfM)/photogrammetry-based three-dimensional (3D) computer graphics reference virtual avatar generated from a reference real person. It is demonstrated that the proposed method, by applying 3D joint manipulations to the reference virtual avatar, qualitatively reproduces a person captured in a target image with high fidelity. Furthermore, quantitative silhouette comparisons successfully establish score distributions for forensic image-to-person comparison. The proposed method holds promise as a body-shape-based forensic image-to-person comparison tool in scenarios where a real person can be used as a reference.
{"title":"Comparing a single target image with a reference three-dimensional (3D) virtual avatar of a real person.","authors":"Daisuke Imoto, Masakatsu Honma, Daiki Kato, Masato Asano, Wataru Sakurai","doi":"10.1111/1556-4029.70272","DOIUrl":"https://doi.org/10.1111/1556-4029.70272","url":null,"abstract":"<p><p>The demand for analyzing images from sources such as closed-circuit television cameras has increased significantly. Conventional analyses, including gait and soft biometrics, typically require the comparison of two video footage clips, as these methods are predicated on video-to-video comparisons. Moreover, numerous prerequisites often limit their applicability, particularly in the field of gait biometrics. To address these limitations, this paper introduces a simple yet effective image-to-person comparison method, leveraging image reproduction from a structure from motion (SfM)/photogrammetry-based three-dimensional (3D) computer graphics reference virtual avatar. This avatar is generated from a reference real person. It is demonstrated that the proposed method, by applying 3D joint manipulations to the reference virtual avatar, qualitatively reproduces a person captured in a target image with high fidelity. Furthermore, quantitative silhouette comparisons successfully confirm distributions for forensic image-to-person comparison. The proposed method holds promise as a body shape-based forensic image-to-person comparison tool in scenarios where a real person can be used as a reference.</p>","PeriodicalId":94080,"journal":{"name":"Journal of forensic sciences","volume":" ","pages":""},"PeriodicalIF":1.8,"publicationDate":"2026-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146145321","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The rapid advancement of deepfake technology poses a significant threat to digital content authenticity and public trust. Deepfakes leverage artificial intelligence to generate realistic yet manipulated images and videos, often for deceptive purposes. This study introduces an enhanced version of the MesoNet convolutional neural network tailored for deepfake detection. The model incorporates two additional convolutional layers, resulting in substantial performance gains across various metrics: it achieved a precision of 96.60%, a recall of 95.33%, an F1-score of 95.96%, an accuracy of 95.59%, and a Matthews correlation coefficient (MCC) of 91.11%, outperforming baseline models such as ResNet-50, VGG variants, and AlexNet. Additionally, a real-time detection system was developed using a React frontend and a Flask backend, demonstrating the model's potential for practical deployment. This research contributes a robust and scalable approach to deepfake detection and lays the groundwork for real-world applications in digital forensics and content authenticity verification.
{"title":"Enhanced MesoNet-based deepfake detection using deep learning: A robust framework for multimedia forensics.","authors":"Deepak Joshi, Abhishek Kashyap, Parul Arora","doi":"10.1111/1556-4029.70275","DOIUrl":"https://doi.org/10.1111/1556-4029.70275","url":null,"abstract":"<p><p>The rapid advancement of deepfake technology poses a significant threat to digital content authenticity and public trust. Deepfakes leverage artificial intelligence to generate realistic yet manipulated images and videos, often for deceptive purposes. This study introduced an enhanced version of the MesoNet convolutional neural network tailored for deepfake detection. The model incorporates two additional convolutional layers, resulting in substantial performance gains across various metrics. It achieved a precision of 96.60%, recall of 95.33%, F<sub>1</sub>-score of 95.96%, accuracy of 95.59%, and a Matthews Correlation Coefficient (MCC) of 91.11%, outperforming baseline models such as ResNet-50, VGG variants, and AlexNet. Additionally, a real-time detection system was developed using a React frontend and Flask backend, demonstrating the model's potential for practical deployment. This research contributed a robust and scalable approach to deepfake detection and lays the groundwork for real-world applications in digital forensics and content authenticity verification.</p>","PeriodicalId":94080,"journal":{"name":"Journal of forensic sciences","volume":" ","pages":""},"PeriodicalIF":1.8,"publicationDate":"2026-01-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146055580","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}