{"title":"An explainable deep learning model for automated classification and localization of microrobots by functionality using ultrasound images","authors":"","doi":"10.1016/j.robot.2024.104841","DOIUrl":null,"url":null,"abstract":"<div><div>The rapid advancements of untethered microrobots offer exciting opportunities in fields such as targeted drug delivery and minimally invasive surgical procedures. However, several challenges remain, especially in achieving precise localization and classification of microrobots within living organisms using ultrasound (US) imaging. Current US-based detection algorithms often suffer from inaccurate visual feedback, causing positioning errors. This paper presents a novel explainable deep learning model for the localization and classification of eight different types of microrobots using US images. We introduce the Attention-Fused Bottleneck Module (AFBM), which enhances feature extraction and improves the performance of microrobot classification and localization tasks. Our model consistently outperforms baseline models such as YOLOR, YOLOv5-C3HB, YOLOv5-TBH, YOLOv5 m, and YOLOv7. The proposed model achieved mean Average Precision (mAP) of 0.861 and 0.909 at an IoU threshold of 0.95 which is 2% and 1.5% higher than the YOLOv5 m model in training and testing, respectively. Multi-thresh IoU analysis was performed at IoU thresholds of 0.6, 0.75, and 0.95, and demonstrated that the microrobot localization accuracy of our model is superior. A robustness analysis was performed based on high and low frequencies, gain, and speckle in our test data set, and our model demonstrated higher overall accuracy. UsingScore-CAM in our framework enhances interpretability, allowing for transparent insights into the model’s decision-making process. Our work signifies a notable advancement in microrobot classification and detection, with potential applications in real-world scenarios using the newly available USMicroMagset dataset for benchmarking.</div></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":null,"pages":null},"PeriodicalIF":4.3000,"publicationDate":"2024-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Robotics and Autonomous Systems","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0921889024002252","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
Abstract
The rapid advancement of untethered microrobots offers exciting opportunities in fields such as targeted drug delivery and minimally invasive surgical procedures. However, several challenges remain, especially in achieving precise localization and classification of microrobots within living organisms using ultrasound (US) imaging. Current US-based detection algorithms often suffer from inaccurate visual feedback, causing positioning errors. This paper presents a novel explainable deep learning model for the localization and classification of eight different types of microrobots in US images. We introduce the Attention-Fused Bottleneck Module (AFBM), which enhances feature extraction and improves performance on microrobot classification and localization tasks. Our model consistently outperforms baseline models such as YOLOR, YOLOv5-C3HB, YOLOv5-TBH, YOLOv5m, and YOLOv7. The proposed model achieves a mean Average Precision (mAP) of 0.861 and 0.909 at an IoU threshold of 0.95, which is 2% and 1.5% higher than the YOLOv5m model in training and testing, respectively. A multi-threshold IoU analysis at thresholds of 0.6, 0.75, and 0.95 demonstrates that our model localizes microrobots more accurately than the baselines. A robustness analysis based on high and low frequencies, gain, and speckle in our test data set shows that our model also achieves higher overall accuracy under these perturbations. Using Score-CAM in our framework enhances interpretability, providing transparent insights into the model's decision-making process. Our work marks a notable advancement in microrobot classification and detection, with potential applications in real-world scenarios, and uses the newly available USMicroMagset dataset for benchmarking.
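The localization results above are reported as mAP at fixed IoU thresholds. As a point of reference, the minimal sketch below shows how a single predicted box is scored against a ground-truth box at the thresholds mentioned in the abstract (0.6, 0.75, and 0.95); it is not the authors' code, and the box coordinates, function name, and box format are illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of the IoU computation underlying the
# multi-threshold localization analysis described in the abstract.
# Box format (x1, y1, x2, y2) and the example coordinates are assumptions.

def iou(box_a, box_b):
    """Intersection over Union for two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A prediction counts as a correct localization only if its IoU with the
# ground-truth box clears the chosen threshold; stricter thresholds demand
# tighter boxes, which is why mAP at IoU 0.95 is the hardest setting.
pred_box = (30.0, 40.0, 90.0, 110.0)   # hypothetical predicted microrobot box
gt_box   = (32.0, 38.0, 92.0, 112.0)   # hypothetical ground-truth box

for thr in (0.6, 0.75, 0.95):          # thresholds used in the paper's analysis
    hit = iou(pred_box, gt_box) >= thr
    print(f"IoU threshold {thr}: {'correct' if hit else 'missed'} localization")
```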
Journal description
Robotics and Autonomous Systems will carry articles describing fundamental developments in the field of robotics, with special emphasis on autonomous systems. An important goal of this journal is to extend the state of the art in both symbolic and sensory-based robot control and learning in the context of autonomous systems.
Robotics and Autonomous Systems will carry articles on the theoretical, computational and experimental aspects of autonomous systems, or modules of such systems.