Derek Allman, Fabrizio R. Assis, J. Chrispin, M. Bell
{"title":"Deep learning to detect catheter tips in vivo during photoacoustic-guided catheter interventions : Invited Presentation","authors":"Derek Allman, Fabrizio R. Assis, J. Chrispin, M. Bell","doi":"10.1109/CISS.2019.8692864","DOIUrl":null,"url":null,"abstract":"Catheter guidance is typically performed with fluoroscopy, which requires patient and operator exposure to ionizing radiation. Our group is exploring robotic photoacoustic imaging as an alternative to fluoroscopy to track catheter tips. However, the catheter tip segmentation step in the photoacoustic-based robotic visual servoing algorithm is limited by the presence of confusing photoacoustic artifacts. We previously demonstrated that a deep neural network is capable of detecting photoacoustic sources in the presence of artifacts in simulated, phantom, and in vivo data. This paper directly compares the in vivo results obtained with linear and phased ultrasound receiver arrays. Two convolutional neural networks (CNNs) were trained to detect point sources in simulated photoacoustic channel data and tested with in vivo images from a swine catheterization procedure. The CNN trained with a linear array receiver model correctly classified 88.8% of sources, and the CNN trained with a phased array receiver model correctly classified 91.4% of sources. These results demonstrate that a deep learning approach to photoacoustic image formation is capable of detecting catheter tips during interventional procedures. 
Therefore, the proposed approach is a promising replacement to the segmentation step in photoacoustic-based robotic visual servoing algorithms.","PeriodicalId":123696,"journal":{"name":"2019 53rd Annual Conference on Information Sciences and Systems (CISS)","volume":"110 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 53rd Annual Conference on Information Sciences and Systems (CISS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CISS.2019.8692864","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 8
Abstract
Catheter guidance is typically performed with fluoroscopy, which exposes both patient and operator to ionizing radiation. Our group is exploring robotic photoacoustic imaging as an alternative to fluoroscopy for tracking catheter tips. However, the catheter tip segmentation step in the photoacoustic-based robotic visual servoing algorithm is limited by the presence of confusing photoacoustic artifacts. We previously demonstrated that a deep neural network is capable of detecting photoacoustic sources in the presence of artifacts in simulated, phantom, and in vivo data. This paper directly compares the in vivo results obtained with linear and phased ultrasound receiver arrays. Two convolutional neural networks (CNNs) were trained to detect point sources in simulated photoacoustic channel data and tested with in vivo images from a swine catheterization procedure. The CNN trained with a linear array receiver model correctly classified 88.8% of sources, and the CNN trained with a phased array receiver model correctly classified 91.4% of sources. These results demonstrate that a deep learning approach to photoacoustic image formation is capable of detecting catheter tips during interventional procedures. Therefore, the proposed approach is a promising replacement for the segmentation step in photoacoustic-based robotic visual servoing algorithms.
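The CNNs described above are trained on simulated photoacoustic channel data, in which a point source such as a catheter tip appears as a hyperbolic wavefront across the receiver elements. The sketch below illustrates only that arrival-time geometry for a linear array; all parameter values (element count, pitch, sound speed, source position) are illustrative assumptions, not the paper's simulation settings.

```python
import numpy as np

def point_source_arrival_times(x_src, z_src, n_elements=128,
                               pitch=3e-4, c=1540.0):
    """Time of flight [s] from a point source at (x_src, z_src) in meters
    to each element of a linear array centered at x = 0 on the z = 0 line.

    Illustrative geometry only; parameters are assumed, not from the paper.
    """
    # Lateral positions of the array elements, centered about x = 0.
    x_elem = (np.arange(n_elements) - (n_elements - 1) / 2) * pitch
    # Straight-ray time of flight traces a hyperbola across the array.
    return np.sqrt((x_elem - x_src) ** 2 + z_src ** 2) / c

# A source directly above the array center: arrival times are symmetric,
# earliest at the central elements (the apex of the hyperbola).
t = point_source_arrival_times(x_src=0.0, z_src=0.02)
```

It is this characteristic hyperbolic signature, corrupted by reflection and clutter artifacts in vivo, that the networks must distinguish from confounding wavefronts.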