Simulation and synthesis in medical imaging: ... International Workshop, SASHIMI ..., held in conjunction with MICCAI ..., proceedings. SASHIMI (Workshop): Latest Publications
Super-resolution segmentation network for inner-ear tissue segmentation
Pub Date: 2023-10-01 | Epub Date: 2023-10-07 | DOI: 10.1007/978-3-031-44689-4_2
Ziteng Liu, Yubo Fan, Ange Lou, Jack H Noble
Cochlear implants (CIs) are considered the standard-of-care treatment for profound sensory-based hearing loss. Several groups have proposed computational models of the cochlea in order to study the neural activation patterns in response to CI stimulation. However, most current implementations rely either on high-resolution histological images that cannot be customized for CI users or on CT images that lack the spatial resolution to show cochlear structures. In this work, we propose a deep learning-based method to obtain μCT-level tissue labels from patient CT images. Experiments showed that the proposed super-resolution segmentation architecture performs very well on inner-ear tissue segmentation: our best-performing model (0.871) outperformed UNet (0.746), VNet (0.853), nnUNet (0.861), TransUNet (0.848), and SRGAN (0.780) in terms of mean Dice score.
{"title":"Super-resolution segmentation network for inner-ear tissue segmentation.","authors":"Ziteng Liu, Yubo Fan, Ange Lou, Jack H Noble","doi":"10.1007/978-3-031-44689-4_2","DOIUrl":"10.1007/978-3-031-44689-4_2","url":null,"abstract":"<p><p>Cochlear implants (CIs) are considered the standard-of-care treatment for profound sensory-based hearing loss. Several groups have proposed computational models of the cochlea in order to study the neural activation patterns in response to CI stimulation. However, most of the current implementations either rely on high-resolution histological images that cannot be customized for CI users or CT images that lack the spatial resolution to show cochlear structures. In this work, we propose to use a deep learning-based method to obtain μCT level tissue labels using patient CT images. Experiments showed that the proposed super-resolution segmentation architecture achieved very good performance on the inner-ear tissue segmentation. Our best-performing model (0.871) outperformed the UNet (0.746), VNet (0.853), nnUNet (0.861), TransUNet (0.848), and SRGAN (0.780) in terms of mean dice score.</p>","PeriodicalId":91967,"journal":{"name":"Simulation and synthesis in medical imaging : ... International Workshop, SASHIMI ..., held in conjunction with MICCAI ..., proceedings. SASHIMI (Workshop)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10979466/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140338011","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
TAI-GAN: Temporally and Anatomically Informed GAN for Early-to-Late Frame Conversion in Dynamic Cardiac PET Motion Correction
Pub Date: 2023-10-01 | Epub Date: 2023-10-07 | DOI: 10.1007/978-3-031-44689-4_7
Xueqi Guo, Luyao Shi, Xiongchao Chen, Bo Zhou, Qiong Liu, Huidong Xie, Yi-Hwa Liu, Richard Palyo, Edward J Miller, Albert J Sinusas, Bruce Spottiswoode, Chi Liu, Nicha C Dvornek
The rapid tracer kinetics of rubidium-82 (82Rb) and high variation of cross-frame distribution in dynamic cardiac positron emission tomography (PET) raise significant challenges for inter-frame motion correction, particularly for the early frames where conventional intensity-based image registration techniques are not applicable. Alternatively, a promising approach utilizes generative methods to handle the tracer distribution changes to assist existing registration methods. To improve frame-wise registration and parametric quantification, we propose a Temporally and Anatomically Informed Generative Adversarial Network (TAI-GAN) to transform the early frames into the late reference frame using an all-to-one mapping. Specifically, a feature-wise linear modulation layer encodes channel-wise parameters generated from temporal tracer kinetics information, and rough cardiac segmentations with local shifts serve as the anatomical information. We validated our proposed method on a clinical 82Rb PET dataset and found that our TAI-GAN can produce converted early frames with high image quality, comparable to the real reference frames. After TAI-GAN conversion, motion estimation accuracy and clinical myocardial blood flow (MBF) quantification were improved compared to using the original frames. Our code is published at https://github.com/gxq1998/TAI-GAN.
{"title":"TAI-GAN: Temporally and Anatomically Informed GAN for Early-to-Late Frame Conversion in Dynamic Cardiac PET Motion Correction.","authors":"Xueqi Guo, Luyao Shi, Xiongchao Chen, Bo Zhou, Qiong Liu, Huidong Xie, Yi-Hwa Liu, Richard Palyo, Edward J Miller, Albert J Sinusas, Bruce Spottiswoode, Chi Liu, Nicha C Dvornek","doi":"10.1007/978-3-031-44689-4_7","DOIUrl":"10.1007/978-3-031-44689-4_7","url":null,"abstract":"<p><p>The rapid tracer kinetics of rubidium-82 (<sup>82</sup>Rb) and high variation of cross-frame distribution in dynamic cardiac positron emission tomography (PET) raise significant challenges for inter-frame motion correction, particularly for the early frames where conventional intensity-based image registration techniques are not applicable. Alternatively, a promising approach utilizes generative methods to handle the tracer distribution changes to assist existing registration methods. To improve frame-wise registration and parametric quantification, we propose a Temporally and Anatomically Informed Generative Adversarial Network (TAI-GAN) to transform the early frames into the late reference frame using an all-to-one mapping. Specifically, a feature-wise linear modulation layer encodes channel-wise parameters generated from temporal tracer kinetics information, and rough cardiac segmentations with local shifts serve as the anatomical information. We validated our proposed method on a clinical <sup>82</sup>Rb PET dataset and found that our TAI-GAN can produce converted early frames with high image quality, comparable to the real reference frames. After TAI-GAN conversion, motion estimation accuracy and clinical myocardial blood flow (MBF) quantification were improved compared to using the original frames. Our code is published at https://github.com/gxq1998/TAI-GAN.</p>","PeriodicalId":91967,"journal":{"name":"Simulation and synthesis in medical imaging : ... International Workshop, SASHIMI ..., held in conjunction with MICCAI ..., proceedings. SASHIMI (Workshop)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10923183/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140095313","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Brain Lesion Synthesis via Progressive Adversarial Variational Auto-Encoder
Pub Date: 2022-09-21 | DOI: 10.1007/978-3-031-16980-9_10
Jiayu Huo, Vejay Vakharia, Chengyuan Wu, Ashwini Sharan, Andrew Ko, Sébastien Ourselin, Rachel Sparks
Laser interstitial thermal therapy (LITT) is a novel minimally invasive treatment used to ablate intracranial structures to treat mesial temporal lobe epilepsy (MTLE). Region of interest (ROI) segmentation before and after LITT would enable automated lesion quantification to objectively assess treatment efficacy. Deep learning techniques such as convolutional neural networks (CNNs) are state-of-the-art solutions for ROI segmentation, but they require large amounts of annotated data during training. However, collecting large datasets from emerging treatments such as LITT is impractical. In this paper, we propose a progressive adversarial variational auto-encoder (PAVAE) framework for brain lesion synthesis to expand both the quantity and diversity of the training dataset. Concretely, our framework consists of two sequential networks: a mask synthesis network and a mask-guided lesion synthesis network. To better employ extrinsic information as additional supervision during network training, we design a condition embedding block (CEB) and a mask embedding block (MEB) to encode the inherent conditions of masks into the feature space. Finally, a segmentation network is trained on raw and synthetic lesion images to evaluate the effectiveness of the proposed framework. Experimental results show that our method achieves realistic synthetic results and boosts downstream segmentation performance beyond traditional data augmentation techniques.
{"title":"Brain Lesion Synthesis via Progressive Adversarial Variational Auto-Encoder.","authors":"Jiayu Huo, Vejay Vakharia, Chengyuan Wu, Ashwini Sharan, Andrew Ko, Sébastien Ourselin, Rachel Sparks","doi":"10.1007/978-3-031-16980-9_10","DOIUrl":"10.1007/978-3-031-16980-9_10","url":null,"abstract":"<p><p>Laser interstitial thermal therapy (LITT) is a novel minimally invasive treatment that is used to ablate intracranial structures to treat mesial temporal lobe epilepsy (MTLE). Region of interest (ROI) segmentation before and after LITT would enable automated lesion quantification to objectively assess treatment efficacy. Deep learning techniques, such as convolutional neural networks (CNNs) are state-of-the-art solutions for ROI segmentation, but require large amounts of annotated data during the training. However, collecting large datasets from emerging treatments such as LITT is impractical. In this paper, we propose a progressive brain lesion synthesis framework (PAVAE) to expand both the quantity and diversity of the training dataset. Concretely, our framework consists of two sequential networks: a mask synthesis network and a mask-guided lesion synthesis network. To better employ extrinsic information to provide additional supervision during network training, we design a condition embedding block (CEB) and a mask embedding block (MEB) to encode inherent conditions of masks to the feature space. Finally, a segmentation network is trained using raw and synthetic lesion images to evaluate the effectiveness of the proposed framework. Experimental results show that our method can achieve realistic synthetic results and boost the performance of down-stream segmentation tasks above traditional data augmentation techniques.</p>","PeriodicalId":91967,"journal":{"name":"Simulation and synthesis in medical imaging : ... International Workshop, SASHIMI ..., held in conjunction with MICCAI ..., proceedings. SASHIMI (Workshop)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7616255/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141725236","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bi-directional Synthesis of Pre- and Post-contrast MRI via Guided Feature Disentanglement
Pub Date: 2022-09-01 | Epub Date: 2022-09-21 | DOI: 10.1007/978-3-031-16980-9_6
Yuan Xue, Blake E Dewey, Lianrui Zuo, Shuo Han, Aaron Carass, Peiyu Duan, Samuel W Remedios, Dzung L Pham, Shiv Saidha, Peter A Calabresi, Jerry L Prince
Magnetic resonance imaging (MRI) with gadolinium contrast is widely used for tissue enhancement and better identification of active lesions and tumors. Recent studies have shown that gadolinium deposition can accumulate in tissues including the brain, which raises safety concerns. Prior works have tried to synthesize post-contrast T1-weighted MRIs from pre-contrast MRIs to avoid the use of gadolinium. However, contrast and image representations are often entangled during the synthesis process, resulting in synthetic post-contrast MRIs with undesirable contrast enhancements. Moreover, the synthesis of pre-contrast MRIs from post-contrast MRIs, which can be useful for volumetric analysis, is rarely investigated in the literature. To tackle pre- and post-contrast MRI synthesis, we propose a BI-directional Contrast Enhancement Prediction and Synthesis (BICEPS) network that enables disentanglement of contrast and image representations via a bi-directional image-to-image translation (I2I) model. Our proposed model can perform both pre-to-post and post-to-pre contrast synthesis, and provides an interpretable synthesis process by predicting contrast enhancement maps from the learned contrast embedding. Extensive experiments on a multiple sclerosis dataset demonstrate the feasibility of our bidirectional synthesis and show that BICEPS outperforms current methods.
{"title":"Bi-directional Synthesis of Pre- and Post-contrast MRI via Guided Feature Disentanglement.","authors":"Yuan Xue, Blake E Dewey, Lianrui Zuo, Shuo Han, Aaron Carass, Peiyu Duan, Samuel W Remedios, Dzung L Pham, Shiv Saidha, Peter A Calabresi, Jerry L Prince","doi":"10.1007/978-3-031-16980-9_6","DOIUrl":"10.1007/978-3-031-16980-9_6","url":null,"abstract":"<p><p>Magnetic resonance imaging (MRI) with gadolinium contrast is widely used for tissue enhancement and better identification of active lesions and tumors. Recent studies have shown that gadolinium deposition can accumulate in tissues including the brain, which raises safety concerns. Prior works have tried to synthesize post-contrast T1-weighted MRIs from pre-contrast MRIs to avoid the use of gadolinium. However, contrast and image representations are often entangled during the synthesis process, resulting in synthetic post-contrast MRIs with undesirable contrast enhancements. Moreover, the synthesis of pre-contrast MRIs from post-contrast MRIs which can be useful for volumetric analysis is rarely investigated in the literature. To tackle pre- and post- contrast MRI synthesis, we propose a BI-directional Contrast Enhancement Prediction and Synthesis (BICEPS) network that enables disentanglement of contrast and image representations via a bi-directional image-to-image translation(I2I)model. Our proposed model can perform both pre-to-post and post-to-pre contrast synthesis, and provides an interpretable synthesis process by predicting contrast enhancement maps from the learned contrast embedding. Extensive experiments on a multiple sclerosis dataset demonstrate the feasibility of applying our bidirectional synthesis and show that BICEPS outperforms current methods.</p>","PeriodicalId":91967,"journal":{"name":"Simulation and synthesis in medical imaging : ... International Workshop, SASHIMI ..., held in conjunction with MICCAI ..., proceedings. SASHIMI (Workshop)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9623769/pdf/nihms-1845155.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"40444210","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Simulation and Synthesis in Medical Imaging: 7th International Workshop, SASHIMI 2022, Held in Conjunction with MICCAI 2022, Singapore, September 18, 2022, Proceedings
Pub Date: 2022-01-01 | DOI: 10.1007/978-3-031-16980-9
Joint Image and Label Self-super-Resolution
Pub Date: 2021-09-01 | DOI: 10.1007/978-3-030-87592-3_2
Samuel W. Remedios, Shuo Han, B. Dewey, D. Pham, Jerry L Prince, A. Carass
Synth-by-Reg (SbR): Contrastive learning for synthesis-based registration of paired images
Pub Date: 2021-09-01 | Epub Date: 2021-09-21 | DOI: 10.1007/978-3-030-87592-3_5
Adrià Casamitjana, Matteo Mancini, Juan Eugenio Iglesias
Nonlinear inter-modality registration is often challenging due to the lack of objective functions that are good proxies for alignment. Here we propose a synthesis-by-registration method to convert this problem into an easier intra-modality task. We introduce a registration loss for weakly supervised image translation between domains that does not require perfectly aligned training data. This loss capitalises on a registration U-Net with frozen weights to drive a synthesis CNN towards the desired translation. We complement this loss with a structure-preserving constraint based on contrastive learning, which prevents blurring and content shifts due to overfitting. We apply this method to the registration of histological sections to MRI slices, a key step in 3D histology reconstruction. Results on two public datasets show improvements over registration based on mutual information (13% reduction in landmark error) and synthesis-based algorithms such as CycleGAN (11% reduction), and are comparable to registration with label supervision. Code and data are publicly available at https://github.com/acasamitjana/SynthByReg.
{"title":"Synth-by-Reg (SbR): Contrastive learning for synthesis-based registration of paired images.","authors":"Adrià Casamitjana, Matteo Mancini, Juan Eugenio Iglesias","doi":"10.1007/978-3-030-87592-3_5","DOIUrl":"10.1007/978-3-030-87592-3_5","url":null,"abstract":"<p><p>Nonlinear inter-modality registration is often challenging due to the lack of objective functions that are good proxies for alignment. Here we propose a synthesis-by-registration method to convert this problem into an easier intra-modality task. We introduce a registration loss for weakly supervised image translation between domains that does not require perfectly aligned training data. This loss capitalises on a registration U-Net with frozen weights, to drive a synthesis CNN towards the desired translation. We complement this loss with a structure preserving constraint based on contrastive learning, which prevents blurring and content shifts due to overfitting. We apply this method to the registration of histological sections to MRI slices, a key step in 3D histology reconstruction. Results on two public datasets show improvements over registration based on mutual information (13% reduction in landmark error) and synthesis-based algorithms such as CycleGAN (11% reduction), and are comparable to registration with label supervision. Code and data are publicly available at https://github.com/acasamitjana/SynthByReg.</p>","PeriodicalId":91967,"journal":{"name":"Simulation and synthesis in medical imaging : ... International Workshop, SASHIMI ..., held in conjunction with MICCAI ..., proceedings. SASHIMI (Workshop)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8582976/pdf/nihms-1753298.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39733092","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Simulation and Synthesis in Medical Imaging: 6th International Workshop, SASHIMI 2021, Held in Conjunction with MICCAI 2021, Strasbourg, France, September 27, 2021, Proceedings
Pub Date: 2021-01-01 | DOI: 10.1007/978-3-030-87592-3
Auditory Nerve Fiber Health Estimation Using Patient Specific Cochlear Implant Stimulation Models
Pub Date: 2020-10-01 | Epub Date: 2020-09-23 | DOI: 10.1007/978-3-030-59520-3_19
Ziteng Liu, Ahmet Cakir, Jack H Noble
Cochlear implants (CIs) restore hearing using an array of electrodes implanted in the cochlea to directly stimulate auditory nerve fibers (ANFs). Hearing outcomes with CIs depend on the health of the ANFs. In this research, we developed an approach to estimate the health of ANFs using patient-customized, image-based computational models of CI stimulation. Our stimulation models build on a previous model-based solution for estimating the intra-cochlear electric field (EF) created by the CI. Herein, we propose to use the estimated EF to drive ANF models representing 75 nerve bundles along the length of the cochlea. We estimate the neural health of the ANF models by optimizing health parameters to minimize the sum of squared differences between simulated responses and the physiological measurements available via patients' CIs. The resulting parameters provide an estimate of the health of the ANF bundles. Experiments with 8 subjects show promising model prediction accuracy, with excellent agreement between clinically measured neural stimulation responses and those predicted by our parameter-optimized models. These results suggest our modeling approach may provide an accurate estimation of ANF health for CI users.
{"title":"Auditory Nerve Fiber Health Estimation Using Patient Specific Cochlear Implant Stimulation Models.","authors":"Ziteng Liu, Ahmet Cakir, Jack H Noble","doi":"10.1007/978-3-030-59520-3_19","DOIUrl":"https://doi.org/10.1007/978-3-030-59520-3_19","url":null,"abstract":"<p><p>Cochlear implants (CIs) restore hearing using an array of electrodes implanted in the cochlea to directly stimulate auditory nerve fibers (ANFs). Hearing outcomes with CIs are dependent on the health of the ANFs. In this research, we developed an approach to estimate the health of ANFs using patient-customized, image-based computational models of CI stimulation. Our stimulation models build on a previous model-based solution to estimate the intra-cochlear electric field (EF) created by the CI. Herein, we propose to use the estimated EF to drive ANF models representing 75 nerve bundles along the length of the cochlea. We propose a method to detect the neural health of the ANF models by optimizing neural health parameters to minimize the sum of squared differences between simulated and the physiological measurements available via patients' CIs. The resulting health parameters provide an estimate of the health of ANF bundles. Experiments with 8 subjects show promising model prediction accuracy, with excellent agreement between neural stimulation responses that are clinically measured and those that are predicted by our parameter optimized models. These results suggest our modeling approach may provide an accurate estimation of ANF health for CI users.</p>","PeriodicalId":91967,"journal":{"name":"Simulation and synthesis in medical imaging : ... International Workshop, SASHIMI ..., held in conjunction with MICCAI ..., proceedings. SASHIMI (Workshop)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8054972/pdf/nihms-1683800.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38897049","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Simulation and Synthesis in Medical Imaging: 5th International Workshop, SASHIMI 2020, Held in Conjunction with MICCAI 2020, Lima, Peru, October 4, 2020, Proceedings
Pub Date: 2020-01-01 | DOI: 10.1007/978-3-030-59520-3
Ninon Burgos, D. Svoboda, J. Wolterink, Can Zhao