Genevieve Chyrmang, Barun Barua, Kangkana Bora, R. Suresh
{"title":"探索从枪支子弹图像中分割条纹痕迹的轻量级卷积神经网络","authors":"Genevieve Chyrmang , Barun Barua , Kangkana Bora , R. Suresh","doi":"10.1016/j.fri.2024.200611","DOIUrl":null,"url":null,"abstract":"<div><div>In the field of forensic ballistic science, the identification of firearms is accomplished by examining the class and individual distinctive marks of fired bullets or cartridge casings instances discovered at the crime scene. The distinctive striation mark, which is engraved on bullets by gun barrels while firing owing to rifling, is one of the important characteristics examined. These striation marks serve as “fingerprints” of firearms. However, manual identification is time-consuming and arduous, necessitating the need for automation. This study focuses on automating striation mark segmentation using novel lightweight deep-learning segmentation models. The motivation behind this study is two-fold: first to assess if lightweight models can replace larger models without sacrificing accuracy and secondly to leverage their efficiency for resource-limited hardware, paving the way for real-time solutions. Proposed models include Mobile Striation-Net (MSN), Attention Gatted MobileStriation-Net (AMSN), Depthwise Attention Gatted Mobile Striation-Net (D-AMSN), and two derivatives of it, which are a pruned version and a quantized variant named PD-AMSN and QD-AMSN respectively. Extensive evaluation of models includes metrics Accuracy, Dice coefficient, Intersection over Union (IOU), Precision, and Recall. A thorough comparative analysis takes into account all models based on parameter counts, frames per second, inference time, and size. Findings shows, all models attains above 95% Accuracy. Dice coefficient and IOU ranges from 0.48 to 0.54 & 0.59 to 0.6 respectively. Precision and Recall consistently range between 63.42% to 64.26% and 73.6% to 77.68% respectively. The “Pruned” variant PD-AMSN performs notably better across metrics than the D-AMSN model demonstrating effective pruning. On the other hand, quantized QD-AMSN have only 6 MB size and 95.42% accuracy. Our models are positioned as forerunners in terms of lightweight design, attention gate integration, decreased parameter counts, and improved accuracy when compared to other previous models. The effectiveness of our models for segmentation tasks and their potential for developing into a portable, real-time automated striation mark segmentation systems in the future, are highlighted through the in-depth analysis.</div></div>","PeriodicalId":40763,"journal":{"name":"Forensic Imaging","volume":"39 ","pages":"Article 200611"},"PeriodicalIF":0.8000,"publicationDate":"2024-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Exploring lightweight convolution neural networks for segmenting striation marks from firearm bullet images\",\"authors\":\"Genevieve Chyrmang , Barun Barua , Kangkana Bora , R. Suresh\",\"doi\":\"10.1016/j.fri.2024.200611\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>In the field of forensic ballistic science, the identification of firearms is accomplished by examining the class and individual distinctive marks of fired bullets or cartridge casings instances discovered at the crime scene. The distinctive striation mark, which is engraved on bullets by gun barrels while firing owing to rifling, is one of the important characteristics examined. These striation marks serve as “fingerprints” of firearms. 
However, manual identification is time-consuming and arduous, necessitating the need for automation. This study focuses on automating striation mark segmentation using novel lightweight deep-learning segmentation models. The motivation behind this study is two-fold: first to assess if lightweight models can replace larger models without sacrificing accuracy and secondly to leverage their efficiency for resource-limited hardware, paving the way for real-time solutions. Proposed models include Mobile Striation-Net (MSN), Attention Gatted MobileStriation-Net (AMSN), Depthwise Attention Gatted Mobile Striation-Net (D-AMSN), and two derivatives of it, which are a pruned version and a quantized variant named PD-AMSN and QD-AMSN respectively. Extensive evaluation of models includes metrics Accuracy, Dice coefficient, Intersection over Union (IOU), Precision, and Recall. A thorough comparative analysis takes into account all models based on parameter counts, frames per second, inference time, and size. Findings shows, all models attains above 95% Accuracy. Dice coefficient and IOU ranges from 0.48 to 0.54 & 0.59 to 0.6 respectively. Precision and Recall consistently range between 63.42% to 64.26% and 73.6% to 77.68% respectively. The “Pruned” variant PD-AMSN performs notably better across metrics than the D-AMSN model demonstrating effective pruning. On the other hand, quantized QD-AMSN have only 6 MB size and 95.42% accuracy. Our models are positioned as forerunners in terms of lightweight design, attention gate integration, decreased parameter counts, and improved accuracy when compared to other previous models. The effectiveness of our models for segmentation tasks and their potential for developing into a portable, real-time automated striation mark segmentation systems in the future, are highlighted through the in-depth analysis.</div></div>\",\"PeriodicalId\":40763,\"journal\":{\"name\":\"Forensic Imaging\",\"volume\":\"39 \",\"pages\":\"Article 200611\"},\"PeriodicalIF\":0.8000,\"publicationDate\":\"2024-11-13\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Forensic Imaging\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2666225624000344\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q4\",\"JCRName\":\"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Forensic Imaging","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2666225624000344","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING","Score":null,"Total":0}
Exploring lightweight convolution neural networks for segmenting striation marks from firearm bullet images
In the field of forensic ballistic science, firearms are identified by examining the class and individual characteristic marks on fired bullets or cartridge casings recovered from the crime scene. The distinctive striation marks, engraved on bullets by the barrel's rifling during firing, are among the most important characteristics examined. These striation marks serve as "fingerprints" of firearms. However, manual identification is time-consuming and arduous, motivating automation. This study focuses on automating striation mark segmentation using novel lightweight deep-learning segmentation models. The motivation behind this study is two-fold: first, to assess whether lightweight models can replace larger models without sacrificing accuracy, and second, to leverage their efficiency on resource-limited hardware, paving the way for real-time solutions. The proposed models include Mobile Striation-Net (MSN), Attention Gated Mobile Striation-Net (AMSN), Depthwise Attention Gated Mobile Striation-Net (D-AMSN), and two derivatives of the latter: a pruned version (PD-AMSN) and a quantized variant (QD-AMSN). The models are evaluated extensively using Accuracy, Dice coefficient, Intersection over Union (IoU), Precision, and Recall, and a thorough comparative analysis considers parameter counts, frames per second, inference time, and model size. Findings show that all models attain above 95% Accuracy. The Dice coefficient and IoU range from 0.48 to 0.54 and 0.59 to 0.60, respectively. Precision and Recall consistently range between 63.42% and 64.26% and between 73.6% and 77.68%, respectively. The pruned variant PD-AMSN performs notably better across metrics than the D-AMSN model, demonstrating effective pruning. The quantized QD-AMSN, meanwhile, occupies only 6 MB while retaining 95.42% accuracy. Compared with previous models, our models are positioned as forerunners in terms of lightweight design, attention gate integration, reduced parameter counts, and improved accuracy. The in-depth analysis highlights the effectiveness of our models for segmentation tasks and their potential to develop into portable, real-time automated striation mark segmentation systems in the future.
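As a point of reference for the evaluation metrics reported above, the sketch below shows how the Dice coefficient, IoU, Precision, and Recall are conventionally computed for binary segmentation masks. This is a generic NumPy illustration rather than the authors' evaluation code, and the example masks are hypothetical.

```python
import numpy as np

def segmentation_metrics(pred, target, eps=1e-7):
    """Dice, IoU, Precision, and Recall for a pair of binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    tp = np.logical_and(pred, target).sum()    # true positive pixels
    fp = np.logical_and(pred, ~target).sum()   # false positive pixels
    fn = np.logical_and(~pred, target).sum()   # false negative pixels
    return {
        "dice": (2 * tp + eps) / (2 * tp + fp + fn + eps),
        "iou": (tp + eps) / (tp + fp + fn + eps),
        "precision": (tp + eps) / (tp + fp + eps),
        "recall": (tp + eps) / (tp + fn + eps),
    }

# Hypothetical 4x4 predicted mask vs. ground-truth mask
pred = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 1, 0],
                 [0, 0, 0, 0]])
gt = np.array([[0, 1, 1, 0],
               [0, 1, 0, 0],
               [0, 1, 1, 0],
               [0, 0, 0, 0]])
print(segmentation_metrics(pred, gt))  # dice = 0.8, iou ≈ 0.67, precision = recall = 0.8
```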
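To illustrate the two building blocks named in the model titles, depthwise convolution and attention gating, the following PyTorch sketch shows one conventional way such blocks are implemented. It is an illustration of the general techniques under stated assumptions, not a reconstruction of MSN, AMSN, or D-AMSN; the layer names, channel sizes, and the additive gating scheme (in the style of Attention U-Net) are chosen purely for clarity.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise KxK convolution followed by a pointwise 1x1 convolution.

    Relative to a standard KxK convolution (~C_in*C_out*K*K weights), this
    factorization needs ~C_in*K*K + C_in*C_out weights, which is the main
    source of parameter savings in lightweight encoders.
    """
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

class AttentionGate(nn.Module):
    """Additive attention gate: a decoder gating signal re-weights encoder
    skip features so that only salient regions pass through the skip path.
    """
    def __init__(self, skip_ch, gate_ch, inter_ch):
        super().__init__()
        self.theta = nn.Conv2d(skip_ch, inter_ch, kernel_size=1)
        self.phi = nn.Conv2d(gate_ch, inter_ch, kernel_size=1)
        self.psi = nn.Conv2d(inter_ch, 1, kernel_size=1)

    def forward(self, skip, gate):
        # skip and gate are assumed to share the same spatial resolution here
        attn = torch.sigmoid(self.psi(torch.relu(self.theta(skip) + self.phi(gate))))
        return skip * attn  # per-pixel re-weighting of the skip features

# Hypothetical feature maps: a 32-channel skip connection gated by a 64-channel decoder signal.
skip = torch.randn(1, 32, 64, 64)
gate_signal = torch.randn(1, 64, 64, 64)
gated = AttentionGate(skip_ch=32, gate_ch=64, inter_ch=16)(skip, gate_signal)
out = DepthwiseSeparableConv(32, 64)(gated)
print(gated.shape, out.shape)  # torch.Size([1, 32, 64, 64]) torch.Size([1, 64, 64, 64])
```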
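The pruned (PD-AMSN) and quantized (QD-AMSN) variants rely on standard model-compression steps. The snippet below sketches magnitude-based unstructured pruning with PyTorch's torch.nn.utils.prune on a stand-in convolution layer; the layer, the pruning amount, and the sparsity check are illustrative assumptions rather than the paper's actual compression pipeline. Post-training quantization of 32-bit float weights to 8-bit integers, which roughly quarters on-disk model size, is the usual counterpart step behind a deployment artifact of a few megabytes.

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

# Stand-in layer; in practice pruning is applied across a trained model's layers.
conv = nn.Conv2d(16, 32, kernel_size=3)

# Zero out the 30% smallest-magnitude weights (unstructured L1 pruning).
prune.l1_unstructured(conv, name="weight", amount=0.3)
prune.remove(conv, "weight")  # bake the pruning mask into the weight tensor

sparsity = (conv.weight == 0).float().mean().item()
print(f"weight sparsity after pruning: {sparsity:.2f}")  # ~0.30
```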