
Simulation and synthesis in medical imaging : ... International Workshop, SASHIMI ..., held in conjunction with MICCAI ..., proceedings. SASHIMI (Workshop): Latest Publications

Super-resolution segmentation network for inner-ear tissue segmentation.
Ziteng Liu, Yubo Fan, Ange Lou, Jack H Noble

Cochlear implants (CIs) are considered the standard-of-care treatment for profound sensory-based hearing loss. Several groups have proposed computational models of the cochlea in order to study the neural activation patterns in response to CI stimulation. However, most current implementations rely either on high-resolution histological images that cannot be customized for CI users or on CT images that lack the spatial resolution to show cochlear structures. In this work, we propose a deep learning-based method to obtain μCT-level tissue labels from patient CT images. Experiments showed that the proposed super-resolution segmentation architecture performs well on inner-ear tissue segmentation: in terms of mean Dice score, our best-performing model (0.871) outperformed UNet (0.746), VNet (0.853), nnUNet (0.861), TransUNet (0.848), and SRGAN (0.780).
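
The model comparison above is based on the mean Dice score over tissue labels. Below is a minimal sketch of that metric, assuming integer label maps and a hypothetical three-class label set; the paper's exact evaluation protocol is not given in this listing.

```python
import numpy as np

def dice_score(pred: np.ndarray, gt: np.ndarray, label: int) -> float:
    """Dice coefficient for one tissue label in two integer label maps."""
    p, g = pred == label, gt == label
    denom = p.sum() + g.sum()
    if denom == 0:
        return 1.0  # label absent from both volumes: count as perfect
    return 2.0 * np.logical_and(p, g).sum() / denom

def mean_dice(pred: np.ndarray, gt: np.ndarray, labels) -> float:
    """Mean Dice over the evaluated tissue labels."""
    return float(np.mean([dice_score(pred, gt, lab) for lab in labels]))

# Toy usage: two small 3D label maps with three hypothetical tissue classes.
rng = np.random.default_rng(0)
gt = rng.integers(0, 4, size=(8, 8, 8))
pred = gt.copy()
pred[0] = rng.integers(0, 4, size=(8, 8))  # corrupt one slice
print(mean_dice(pred, gt, labels=[1, 2, 3]))
```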

Citations: 0
TAI-GAN: Temporally and Anatomically Informed GAN for Early-to-Late Frame Conversion in Dynamic Cardiac PET Motion Correction.
Xueqi Guo, Luyao Shi, Xiongchao Chen, Bo Zhou, Qiong Liu, Huidong Xie, Yi-Hwa Liu, Richard Palyo, Edward J Miller, Albert J Sinusas, Bruce Spottiswoode, Chi Liu, Nicha C Dvornek

The rapid tracer kinetics of rubidium-82 (82Rb) and the high variation of cross-frame distribution in dynamic cardiac positron emission tomography (PET) raise significant challenges for inter-frame motion correction, particularly for the early frames, where conventional intensity-based image registration techniques are not applicable. A promising alternative uses generative methods to handle tracer distribution changes and assist existing registration methods. To improve frame-wise registration and parametric quantification, we propose a Temporally and Anatomically Informed Generative Adversarial Network (TAI-GAN) that transforms early frames into the late reference frame using an all-to-one mapping. Specifically, a feature-wise linear modulation layer encodes channel-wise parameters generated from temporal tracer kinetics information, and rough cardiac segmentations with local shifts serve as the anatomical information. We validated the proposed method on a clinical 82Rb PET dataset and found that TAI-GAN produces converted early frames of high image quality, comparable to the real reference frames. After TAI-GAN conversion, motion estimation accuracy and clinical myocardial blood flow (MBF) quantification were improved compared to using the original frames. Our code is published at https://github.com/gxq1998/TAI-GAN.
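
The abstract's key conditioning mechanism is feature-wise linear modulation (FiLM): channel-wise scale and shift parameters generated from the tracer-kinetics code. A minimal sketch follows; the layer sizes, the 3D feature shapes, and the kinetics embedding are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class FiLM(nn.Module):
    """Feature-wise linear modulation conditioned on a kinetics embedding."""
    def __init__(self, cond_dim: int, num_channels: int):
        super().__init__()
        # Map the conditioning vector to per-channel gamma and beta.
        self.to_gamma_beta = nn.Linear(cond_dim, 2 * num_channels)

    def forward(self, feat: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, D, H, W) feature maps; cond: (B, cond_dim) kinetics code.
        gamma, beta = self.to_gamma_beta(cond).chunk(2, dim=1)
        shape = (feat.shape[0], feat.shape[1]) + (1,) * (feat.dim() - 2)
        return gamma.reshape(shape) * feat + beta.reshape(shape)

# Toy usage: modulate a 3D feature map with a 16-dim tracer-kinetics embedding.
film = FiLM(cond_dim=16, num_channels=32)
feat = torch.randn(2, 32, 4, 8, 8)
cond = torch.randn(2, 16)
print(film(feat, cond).shape)  # torch.Size([2, 32, 4, 8, 8])
```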

Citations: 0
Brain Lesion Synthesis via Progressive Adversarial Variational Auto-Encoder.
Jiayu Huo, Vejay Vakharia, Chengyuan Wu, Ashwini Sharan, Andrew Ko, Sébastien Ourselin, Rachel Sparks

Laser interstitial thermal therapy (LITT) is a novel minimally invasive treatment used to ablate intracranial structures to treat mesial temporal lobe epilepsy (MTLE). Region of interest (ROI) segmentation before and after LITT would enable automated lesion quantification to objectively assess treatment efficacy. Deep learning techniques such as convolutional neural networks (CNNs) are state-of-the-art solutions for ROI segmentation but require large amounts of annotated data during training. However, collecting large datasets from emerging treatments such as LITT is impractical. In this paper, we propose a progressive brain lesion synthesis framework (PAVAE) to expand both the quantity and diversity of the training dataset. Concretely, our framework consists of two sequential networks: a mask synthesis network and a mask-guided lesion synthesis network. To better employ extrinsic information to provide additional supervision during network training, we design a condition embedding block (CEB) and a mask embedding block (MEB) to encode the inherent conditions of masks into the feature space. Finally, a segmentation network is trained on raw and synthetic lesion images to evaluate the effectiveness of the proposed framework. Experimental results show that our method achieves realistic synthetic results and boosts the performance of downstream segmentation tasks beyond traditional data augmentation techniques.
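
To make the two-stage design concrete, here is a minimal sketch of the pipeline shape: a mask-synthesis stage followed by a mask-guided lesion-synthesis stage, with a condition embedding injected into the feature space in the spirit of the CEB. The tiny networks, the condition vector, and the additive fusion are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ConditionEmbeddingBlock(nn.Module):
    """Encode a low-dimensional condition (e.g., lesion size) as a
    feature-space bias, loosely following the CEB idea above."""
    def __init__(self, cond_dim: int, channels: int):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(cond_dim, channels), nn.ReLU(),
                                 nn.Linear(channels, channels))

    def forward(self, feat: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        return feat + self.mlp(cond)[:, :, None, None]  # broadcast over H, W

# Stage 1: synthesize a lesion mask from a latent seed.
mask_net = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(8, 1, 3, padding=1), nn.Sigmoid())
# Stage 2: synthesize lesion intensities guided by the mask.
lesion_net = nn.Sequential(nn.Conv2d(2, 8, 3, padding=1), nn.ReLU(),
                           nn.Conv2d(8, 1, 3, padding=1))
ceb = ConditionEmbeddingBlock(cond_dim=2, channels=1)

z = torch.randn(1, 1, 64, 64)        # latent seed for the mask stage
cond = torch.tensor([[0.3, 0.7]])    # hypothetical condition vector
mask = ceb(mask_net(z), cond)        # conditioned mask synthesis
brain = torch.randn(1, 1, 64, 64)    # background image to plant the lesion in
lesion_img = lesion_net(torch.cat([brain, mask], dim=1))
print(lesion_img.shape)              # torch.Size([1, 1, 64, 64])
```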

Citations: 0
Bi-directional Synthesis of Pre- and Post-contrast MRI via Guided Feature Disentanglement.
Yuan Xue, Blake E Dewey, Lianrui Zuo, Shuo Han, Aaron Carass, Peiyu Duan, Samuel W Remedios, Dzung L Pham, Shiv Saidha, Peter A Calabresi, Jerry L Prince

Magnetic resonance imaging (MRI) with gadolinium contrast is widely used for tissue enhancement and better identification of active lesions and tumors. Recent studies have shown that gadolinium deposition can accumulate in tissues including the brain, which raises safety concerns. Prior works have tried to synthesize post-contrast T1-weighted MRIs from pre-contrast MRIs to avoid the use of gadolinium. However, contrast and image representations are often entangled during the synthesis process, resulting in synthetic post-contrast MRIs with undesirable contrast enhancements. Moreover, the synthesis of pre-contrast MRIs from post-contrast MRIs, which can be useful for volumetric analysis, is rarely investigated in the literature. To tackle pre- and post-contrast MRI synthesis, we propose a BI-directional Contrast Enhancement Prediction and Synthesis (BICEPS) network that disentangles contrast and image representations via a bi-directional image-to-image translation (I2I) model. Our proposed model performs both pre-to-post and post-to-pre contrast synthesis and provides an interpretable synthesis process by predicting contrast enhancement maps from the learned contrast embedding. Extensive experiments on a multiple sclerosis dataset demonstrate the feasibility of our bidirectional synthesis and show that BICEPS outperforms current methods.
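
The interpretability claim rests on predicting a contrast-enhancement map rather than the post-contrast image directly. A minimal sketch of that residual formulation follows, assuming an additive composition and a toy stand-in network; BICEPS itself is a bi-directional I2I model with learned contrast embeddings.

```python
import torch
import torch.nn as nn

# Toy stand-in for the synthesis CNN: predicts an enhancement map from the
# pre-contrast image instead of synthesizing the post-contrast image directly.
enh_net = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(16, 1, 3, padding=1))

pre = torch.rand(1, 1, 128, 128)   # pre-contrast T1-weighted slice
enh_map = enh_net(pre)             # predicted contrast-enhancement map
post = pre + enh_map               # pre-to-post composition
pre_recon = post - enh_map         # post-to-pre direction is the inverse
print(torch.allclose(pre, pre_recon))  # True
```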

Citations: 0
Simulation and Synthesis in Medical Imaging: 7th International Workshop, SASHIMI 2022, Held in Conjunction with MICCAI 2022, Singapore, September 18, 2022, Proceedings
{"title":"Simulation and Synthesis in Medical Imaging: 7th International Workshop, SASHIMI 2022, Held in Conjunction with MICCAI 2022, Singapore, September 18, 2022, Proceedings","authors":"","doi":"10.1007/978-3-031-16980-9","DOIUrl":"https://doi.org/10.1007/978-3-031-16980-9","url":null,"abstract":"","PeriodicalId":91967,"journal":{"name":"Simulation and synthesis in medical imaging : ... International Workshop, SASHIMI ..., held in conjunction with MICCAI ..., proceedings. SASHIMI (Workshop)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91331397","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
Joint Image and Label Self-super-Resolution
Samuel W. Remedios, Shuo Han, B. Dewey, D. Pham, Jerry L Prince, A. Carass
{"title":"Joint Image and Label Self-super-Resolution","authors":"Samuel W. Remedios, Shuo Han, B. Dewey, D. Pham, Jerry L Prince, A. Carass","doi":"10.1007/978-3-030-87592-3_2","DOIUrl":"https://doi.org/10.1007/978-3-030-87592-3_2","url":null,"abstract":"","PeriodicalId":91967,"journal":{"name":"Simulation and synthesis in medical imaging : ... International Workshop, SASHIMI ..., held in conjunction with MICCAI ..., proceedings. SASHIMI (Workshop)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77174170","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Synth-by-Reg (SbR): Contrastive learning for synthesis-based registration of paired images.
Adrià Casamitjana, Matteo Mancini, Juan Eugenio Iglesias

Nonlinear inter-modality registration is often challenging due to the lack of objective functions that are good proxies for alignment. Here we propose a synthesis-by-registration method to convert this problem into an easier intra-modality task. We introduce a registration loss for weakly supervised image translation between domains that does not require perfectly aligned training data. This loss capitalises on a registration U-Net with frozen weights to drive a synthesis CNN towards the desired translation. We complement this loss with a structure-preserving constraint based on contrastive learning, which prevents blurring and content shifts due to overfitting. We apply this method to the registration of histological sections to MRI slices, a key step in 3D histology reconstruction. Results on two public datasets show improvements over registration based on mutual information (13% reduction in landmark error) and synthesis-based algorithms such as CycleGAN (11% reduction), and are comparable to registration with label supervision. Code and data are publicly available at https://github.com/acasamitjana/SynthByReg.
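
A minimal sketch of the frozen-registration loss described above: a registration network with frozen weights scores how well the synthetic image aligns with the paired target-domain image, and gradients flow only into the synthesis CNN. The toy networks and the surrogate alignment term are assumptions; the repository linked above has the authors' actual formulation.

```python
import torch
import torch.nn as nn

synth = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(16, 1, 3, padding=1))      # trainable synthesis CNN
reg_net = nn.Sequential(nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(16, 2, 3, padding=1))    # stand-in for the registration U-Net
for p in reg_net.parameters():
    p.requires_grad_(False)  # frozen: it only passes gradients through to its input

histo = torch.rand(1, 1, 64, 64)   # source-domain image (histological section)
mri = torch.rand(1, 1, 64, 64)     # paired target-domain image (MRI slice)

fake_mri = synth(histo)                              # synthesis step
flow = reg_net(torch.cat([fake_mri, mri], dim=1))    # 2-channel displacement field
# Surrogate loss: penalize residual flow plus intra-modality dissimilarity.
loss = flow.abs().mean() + (fake_mri - mri).pow(2).mean()
loss.backward()  # gradients reach synth; reg_net's weights stay fixed
print(any(p.grad is not None for p in synth.parameters()))  # True
```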

Citations: 9
Simulation and Synthesis in Medical Imaging: 6th International Workshop, SASHIMI 2021, Held in Conjunction with MICCAI 2021, Strasbourg, France, September 27, 2021, Proceedings
{"title":"Simulation and Synthesis in Medical Imaging: 6th International Workshop, SASHIMI 2021, Held in Conjunction with MICCAI 2021, Strasbourg, France, September 27, 2021, Proceedings","authors":"","doi":"10.1007/978-3-030-87592-3","DOIUrl":"https://doi.org/10.1007/978-3-030-87592-3","url":null,"abstract":"","PeriodicalId":91967,"journal":{"name":"Simulation and synthesis in medical imaging : ... International Workshop, SASHIMI ..., held in conjunction with MICCAI ..., proceedings. SASHIMI (Workshop)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90322319","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Auditory Nerve Fiber Health Estimation Using Patient Specific Cochlear Implant Stimulation Models.
Ziteng Liu, Ahmet Cakir, Jack H Noble

Cochlear implants (CIs) restore hearing using an array of electrodes implanted in the cochlea to directly stimulate auditory nerve fibers (ANFs). Hearing outcomes with CIs depend on the health of the ANFs. In this research, we developed an approach to estimate the health of ANFs using patient-customized, image-based computational models of CI stimulation. Our stimulation models build on a previous model-based solution to estimate the intra-cochlear electric field (EF) created by the CI. Herein, we propose to use the estimated EF to drive ANF models representing 75 nerve bundles along the length of the cochlea. We propose a method to assess the neural health of the ANF models by optimizing neural health parameters to minimize the sum of squared differences between simulated responses and the physiological measurements available via patients' CIs. The resulting health parameters provide an estimate of the health of the ANF bundles. Experiments with 8 subjects show promising model prediction accuracy, with excellent agreement between neural stimulation responses that are clinically measured and those predicted by our parameter-optimized models. These results suggest our modeling approach may provide an accurate estimation of ANF health for CI users.
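
The fitting step reduces to a bounded least-squares problem over per-bundle health parameters. Here is a minimal sketch under a placeholder linear forward model; the actual paper drives biophysical ANF models with the estimated intra-cochlear EF, so `simulate` below is purely an assumption for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
n_bundles = 75                           # nerve bundles along the cochlea
A = rng.random((100, n_bundles))         # toy stimulus-to-bundle coupling matrix

def simulate(health: np.ndarray) -> np.ndarray:
    """Placeholder forward model: responses as bundle contributions
    weighted by each bundle's health in [0, 1]."""
    return A @ health

true_health = rng.random(n_bundles)
measured = simulate(true_health)         # stand-in for clinical measurements

# Minimize the sum of squared differences between simulated and measured
# responses, with health constrained to [0, 1].
fit = least_squares(lambda h: simulate(h) - measured,
                    x0=np.full(n_bundles, 0.5), bounds=(0.0, 1.0))
print(np.abs(fit.x - true_health).mean())  # small recovery error
```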

Citations: 5
Simulation and Synthesis in Medical Imaging: 5th International Workshop, SASHIMI 2020, Held in Conjunction with MICCAI 2020, Lima, Peru, October 4, 2020, Proceedings
Ninon Burgos, D. Svoboda, J. Wolterink, Can Zhao
{"title":"Simulation and Synthesis in Medical Imaging: 5th International Workshop, SASHIMI 2020, Held in Conjunction with MICCAI 2020, Lima, Peru, October 4, 2020, Proceedings","authors":"Ninon Burgos, D. Svoboda, J. Wolterink, Can Zhao","doi":"10.1007/978-3-030-59520-3","DOIUrl":"https://doi.org/10.1007/978-3-030-59520-3","url":null,"abstract":"","PeriodicalId":91967,"journal":{"name":"Simulation and synthesis in medical imaging : ... International Workshop, SASHIMI ..., held in conjunction with MICCAI ..., proceedings. SASHIMI (Workshop)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88209507","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4