
Latest publications — Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention

LSOR: Longitudinally-Consistent Self-Organized Representation Learning.
Jiahong Ouyang, Qingyu Zhao, Ehsan Adeli, Wei Peng, Greg Zaharchuk, Kilian M Pohl

Interpretability is a key issue when applying deep learning models to longitudinal brain MRIs. One way to address this issue is by visualizing the high-dimensional latent spaces generated by deep learning via self-organizing maps (SOM). SOM separates the latent space into clusters and then maps the cluster centers to a discrete (typically 2D) grid preserving the high-dimensional relationship between clusters. However, learning SOM in a high-dimensional latent space tends to be unstable, especially in a self-supervision setting. Furthermore, the learned SOM grid does not necessarily capture clinically interesting information, such as brain age. To resolve these issues, we propose the first self-supervised SOM approach that derives a high-dimensional, interpretable representation stratified by brain age solely based on longitudinal brain MRIs (i.e., without demographic or cognitive information). Called Longitudinally-consistent Self-Organized Representation learning (LSOR), the method is stable during training as it relies on soft clustering (vs. the hard cluster assignments used by existing SOM). Furthermore, our approach generates a latent space stratified according to brain age by aligning trajectories inferred from longitudinal MRIs to the reference vector associated with the corresponding SOM cluster. When applied to longitudinal MRIs of the Alzheimer's Disease Neuroimaging Initiative (ADNI, N=632), LSOR generates an interpretable latent space and achieves comparable or higher accuracy than the state-of-the-art representations with respect to the downstream tasks of classification (static vs. progressive mild cognitive impairment) and regression (determining ADAS-Cog score of all subjects). The code is available at https://github.com/ouyangjiahong/longitudinal-som-single-modality.
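As a rough illustration of the soft clustering that stabilizes LSOR's SOM training, a latent code can be assigned to all SOM nodes with softmax weights over negative squared distances, rather than to a single hard winner. The temperature parameter and numpy formulation below are illustrative assumptions, not the paper's exact loss:

```python
import numpy as np

def soft_som_assignment(z, som_nodes, temperature=1.0):
    """Soft cluster assignment: softmax over negative squared
    distances from a latent code z to each SOM node vector."""
    d2 = np.sum((som_nodes - z) ** 2, axis=1)   # (K,) squared distances
    logits = -d2 / temperature
    w = np.exp(logits - logits.max())           # numerically stable softmax
    return w / w.sum()

# toy latent space: 4 SOM nodes on a 2x2 grid, one 2-D latent code
nodes = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
w = soft_som_assignment(np.array([0.1, 0.0]), nodes)
assert np.isclose(w.sum(), 1.0) and w[0] == w.max()  # nearest node dominates
```

Because every node receives a nonzero gradient, updates are smoother than with the winner-take-all assignment of a classical SOM.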

DOI: 10.1007/978-3-031-43907-0_27 · Volume 14220, pp. 279-289 · Published 2023-10-01 · Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10642576/pdf/
Citations: 0
Pelphix: Surgical Phase Recognition from X-ray Images in Percutaneous Pelvic Fixation.
Benjamin D Killeen, Han Zhang, Jan Mangulabnan, Mehran Armand, Russell H Taylor, Greg Osgood, Mathias Unberath

Surgical phase recognition (SPR) is a crucial element in the digital transformation of the modern operating theater. While SPR based on video sources is well-established, incorporation of interventional X-ray sequences has not yet been explored. This paper presents Pelphix, a first approach to SPR for X-ray-guided percutaneous pelvic fracture fixation, which models the procedure at four levels of granularity - corridor, activity, view, and frame value - simulating the pelvic fracture fixation workflow as a Markov process to provide fully annotated training data. Using added supervision from detection of bony corridors, tools, and anatomy, we learn image representations that are fed into a transformer model to regress surgical phases at the four granularity levels. Our approach demonstrates the feasibility of X-ray-based SPR, achieving an average accuracy of 99.2% on simulated sequences and 71.7% in cadaver across all granularity levels, with up to 84% accuracy for the target corridor in real data. This work constitutes the first step toward SPR for the X-ray domain, establishing an approach to categorizing phases in X-ray-guided surgery, simulating realistic image sequences to enable machine learning model development, and demonstrating that this approach is feasible for the analysis of real procedures. As X-ray-based SPR continues to mature, it will benefit procedures in orthopedic surgery, angiography, and interventional radiology by equipping intelligent surgical systems with situational awareness in the operating room.
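The paper's idea of simulating the fixation workflow as a Markov process to generate annotated training sequences can be sketched, under toy assumptions, as sampling phase labels from a row-stochastic transition matrix. The phase names and probabilities here are invented for illustration, not Pelphix's actual corridor/activity/view labels:

```python
import numpy as np

# Hypothetical 3-phase workflow; upper-triangular rows mean the
# procedure only advances (the final phase is absorbing).
phases = ["position_wire", "insert_screw", "verify_view"]
T = np.array([[0.7, 0.3, 0.0],
              [0.0, 0.8, 0.2],
              [0.0, 0.0, 1.0]])

def sample_workflow(T, start=0, steps=20, seed=0):
    """Sample a phase-label sequence from the Markov chain T."""
    rng = np.random.default_rng(seed)
    seq, s = [start], start
    for _ in range(steps):
        s = rng.choice(len(T), p=T[s])
        seq.append(s)
    return seq

seq = sample_workflow(T)
# phases never move backwards because T is upper-triangular
assert all(a <= b for a, b in zip(seq, seq[1:]))
```

Each sampled sequence comes with its phase labels for free, which is what makes the simulation usable as fully annotated training data.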

DOI: 10.1007/978-3-031-43996-4_13 · Volume 14228, pp. 133-143 · Published 2023-10-01 · Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11016332/pdf/
Citations: 0
CTFlow: Mitigating Effects of Computed Tomography Acquisition and Reconstruction with Normalizing Flows.
Leihao Wei, Anil Yadav, William Hsu

Mitigating the effects of image appearance due to variations in computed tomography (CT) acquisition and reconstruction parameters is a challenging inverse problem. We present CTFlow, a normalizing flows-based method for harmonizing CT scans acquired and reconstructed using different doses and kernels to a target scan. Unlike existing state-of-the-art image harmonization approaches that only generate a single output, flow-based methods learn the explicit conditional density and output the entire spectrum of plausible reconstruction, reflecting the underlying uncertainty of the problem. We demonstrate how normalizing flows reduces variability in image quality and the performance of a machine learning algorithm for lung nodule detection. We evaluate the performance of CTFlow by 1) comparing it with other techniques on a denoising task using the AAPM-Mayo Clinical Low-Dose CT Grand Challenge dataset, and 2) demonstrating consistency in nodule detection performance across 186 real-world low-dose CT chest scans acquired at our institution. CTFlow performs better in the denoising task for both peak signal-to-noise ratio and perceptual quality metrics. Moreover, CTFlow produces more consistent predictions across all dose and kernel conditions than generative adversarial network (GAN)-based image harmonization on a lung nodule detection task. The code is available at https://github.com/hsu-lab/ctflow.
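A minimal sketch of the change-of-variables computation that underlies normalizing flows — here a 1-D affine flow pushing a standard normal base density forward; CTFlow's actual conditional architecture is far richer than this toy:

```python
import numpy as np

def affine_flow_logpdf(y, scale_log, shift):
    """Log-density of y under the flow y = exp(scale_log) * x + shift,
    where x ~ N(0, 1), via the change-of-variables formula."""
    x = (y - shift) * np.exp(-scale_log)          # inverse map
    log_base = -0.5 * (x**2 + np.log(2 * np.pi))  # N(0,1) log-pdf at x
    return log_base - scale_log                   # minus log |dy/dx|

# sanity check: with scale_log=0, shift=0 the flow is the identity,
# so the density at 0 must equal the standard-normal density at 0
assert np.isclose(affine_flow_logpdf(0.0, 0.0, 0.0),
                  -0.5 * np.log(2 * np.pi))
```

Having an explicit conditional log-density is what lets a flow output the whole spectrum of plausible reconstructions (by sampling the base distribution repeatedly) instead of a single point estimate.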

DOI: 10.1007/978-3-031-43990-2_39 · Volume 14226, pp. 413-422 · Published 2023-10-01 · Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11086056/pdf/
Citations: 0
Implicit Anatomical Rendering for Medical Image Segmentation with Stochastic Experts.
Chenyu You, Weicheng Dai, Yifei Min, Lawrence Staib, James S Duncan

Integrating high-level semantically correlated contents and low-level anatomical features is of central importance in medical image segmentation. Towards this end, recent deep learning-based medical segmentation methods have shown great promise in better modeling such information. However, convolution operators for medical segmentation typically operate on regular grids, which inherently blur the high-frequency regions, i.e., boundary regions. In this work, we propose MORSE, a generic implicit neural rendering framework designed at an anatomical level to assist learning in medical image segmentation. Our method is motivated by the fact that implicit neural representation has been shown to be more effective in fitting complex signals and solving computer graphics problems than discrete grid-based representation. The core of our approach is to formulate medical image segmentation as a rendering problem in an end-to-end manner. Specifically, we continuously align the coarse segmentation prediction with the ambiguous coordinate-based point representations and aggregate these features to adaptively refine the boundary region. To optimize multi-scale pixel-level features in parallel, we leverage the idea from Mixture-of-Experts (MoE) to design and train our MORSE with a stochastic gating mechanism. Our experiments demonstrate that MORSE can work well with different medical segmentation backbones, consistently achieving competitive performance improvements in both 2D and 3D supervised medical segmentation methods. We also theoretically analyze the superiority of MORSE.
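The stochastic gating borrowed from Mixture-of-Experts can be sketched, in a deliberately toy setting, as sampling one expert per input from a softmax gate instead of averaging all experts densely. The placeholder experts and uniform gate below are illustrative assumptions, not MORSE's rendering heads:

```python
import numpy as np

def stochastic_moe(x, experts, gate_logits, rng):
    """Mixture-of-Experts with stochastic gating: sample a single
    expert from the softmax gate distribution and apply it."""
    p = np.exp(gate_logits - gate_logits.max())  # stable softmax
    p /= p.sum()
    k = rng.choice(len(experts), p=p)
    return experts[k](x), k

rng = np.random.default_rng(0)
experts = [lambda x: x + 1.0,   # placeholder expert 0
           lambda x: 2.0 * x]   # placeholder expert 1
y, k = stochastic_moe(3.0, experts, np.array([0.0, 0.0]), rng)
assert y in (4.0, 6.0)  # output comes from exactly one sampled expert
```

During training the sampling acts as a regularizer; at inference one can either sample or take the gate's argmax.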

DOI: 10.1007/978-3-031-43898-1_54 · Volume 14222, pp. 561-571 · Published 2023-10-01 · Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11151725/pdf/
Citations: 0
Image2SSM: Reimagining Statistical Shape Models from Images with Radial Basis Functions.
Hong Xu, Shireen Y Elhabian

Statistical shape modeling (SSM) is an essential tool for analyzing variations in anatomical morphology. In a typical SSM pipeline, 3D anatomical images, after segmentation and rigid registration, are represented using lower-dimensional shape features, on which statistical analysis can be performed. Various methods for constructing compact shape representations have been proposed, but they involve laborious and costly steps. We propose Image2SSM, a novel deep-learning-based approach for SSM that leverages image-segmentation pairs to learn a radial-basis-function (RBF)-based representation of shapes directly from images. This RBF-based shape representation offers a rich self-supervised signal for the network to estimate a continuous, yet compact representation of the underlying surface that can adapt to complex geometries in a data-driven manner. Image2SSM can characterize populations of biological structures of interest by constructing statistical landmark-based shape models of ensembles of anatomical shapes while requiring minimal parameter tuning and no user assistance. Once trained, Image2SSM can be used to infer low-dimensional shape representations from new unsegmented images, paving the way toward scalable approaches for SSM, especially when dealing with large cohorts. Experiments on synthetic and real datasets show the efficacy of the proposed method compared to the state-of-the-art correspondence-based method for SSM.
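An RBF representation of the kind Image2SSM builds on can be sketched as kernel interpolation: solve for weights so the implicit function reproduces sampled values at control points, then evaluate it anywhere on the domain. The Gaussian kernel, bandwidth, and 1-D toy data below are assumptions for illustration, not the paper's formulation:

```python
import numpy as np

def rbf_fit(centers, values, eps=1.0):
    """Solve for weights w so that f(c_i) = values[i], using a
    Gaussian kernel phi(r) = exp(-(eps * r)^2)."""
    r = np.linalg.norm(centers[:, None] - centers[None, :], axis=-1)
    return np.linalg.solve(np.exp(-(eps * r) ** 2), values)

def rbf_eval(x, centers, w, eps=1.0):
    """Evaluate the RBF implicit function f at a query point x."""
    r = np.linalg.norm(centers - x, axis=-1)
    return np.exp(-(eps * r) ** 2) @ w

# toy signed-distance-like samples along a line; the "surface" is
# the zero level set, which sits at the middle control point
C = np.array([[0.0], [1.0], [2.0]])
v = np.array([-1.0, 0.0, 1.0])
w = rbf_fit(C, v)
assert abs(rbf_eval(np.array([1.0]), C, w)) < 1e-9  # interpolates exactly
```

The same machinery extends to 3-D surfaces: control points on and off the surface define an implicit function whose zero level set is the anatomy.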

DOI: 10.1007/978-3-031-43907-0_49 · Volume 14220, pp. 508-517 · Published 2023-10-01 · Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11555643/pdf/
Citations: 0
Gadolinium-Free Cardiac MRI Myocardial Scar Detection by 4D Convolution Factorization.
Amine Amyar, Shiro Nakamori, Manuel Morales, Siyeop Yoon, Jennifer Rodriguez, Jiwon Kim, Robert M Judd, Jonathan W Weinsaft, Reza Nezafat

Gadolinium-based contrast agents are commonly used in cardiac magnetic resonance (CMR) imaging to characterize myocardial scar tissue. Recent works using deep learning have shown the promise of contrast-free short-axis cine images to detect scars based on wall motion abnormalities (WMA) in ischemic patients. However, WMA can occur in patients without a scar. Moreover, the presence of a scar may not always be accompanied by WMA, particularly in non-ischemic heart disease, posing a significant challenge in detecting scars in such cases. To overcome this limitation, we propose a novel deep spatiotemporal residual attention network (ST-RAN) that leverages temporal and spatial information at different scales to detect scars in both ischemic and non-ischemic heart diseases. Our model comprises three primary components. First, we develop a novel factorized 4D (3D+time) convolutional layer that extracts 3D spatial features of the heart and a deep 1D kernel in the temporal direction to extract heart motion. Secondly, we enhance the power of the 4D (3D+time) layer with spatiotemporal attention to extract rich whole-heart features while tracking the long-range temporal relationship between the frames. Lastly, we introduce a residual attention block that extracts spatial and temporal features at different scales to obtain global and local motion features and to detect subtle changes in contrast related to scar. We train and validate our model on a large dataset of 3000 patients who underwent clinical CMR with various indications and different field strengths (1.5T, 3T) from multiple vendors (GE, Siemens) to demonstrate the generalizability and robustness of our model. We show that our model works on both ischemic and non-ischemic heart diseases outperforming state-of-the-art methods. Our code is available at https://github.com/HMS-CardiacMR/Myocardial_Scar_Detection.
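One motivation for factorizing a 4D (3D+time) kernel into a 3D spatial kernel followed by a deep 1D temporal kernel is the parameter saving. A back-of-the-envelope count (kernel sizes chosen arbitrarily for illustration; channel dimensions omitted):

```python
# Parameter count of a dense 4D kernel vs. the factorized (3D + 1D)
# pair, for spatial size k and temporal size kt, single channel.
k, kt = 3, 7
full_4d = k**3 * kt        # one joint space-time kernel: 189 weights
factorized = k**3 + kt     # spatial kernel + temporal kernel: 34 weights
assert factorized < full_4d
```

The saving grows with the temporal extent, which is exactly what allows a deep 1D kernel in the time direction to capture heart motion without a combinatorial blow-up in weights.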

DOI: 10.1007/978-3-031-43895-0_60 · Volume 14221, pp. 639-648 · Published 2023-10-01 · Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11741542/pdf/
Citations: 0
Can point cloud networks learn statistical shape models of anatomies?
Jadie Adams, Shireen Elhabian

Statistical Shape Modeling (SSM) is a valuable tool for investigating and quantifying anatomical variations within populations of anatomies. However, traditional correspondence-based SSM generation methods have a prohibitive inference process and require complete geometric proxies (e.g., high-resolution binary volumes or surface meshes) as input shapes to construct the SSM. Unordered 3D point cloud representations of shapes are more easily acquired from various medical imaging practices (e.g., thresholded images and surface scanning). Point cloud deep networks have recently achieved remarkable success in learning permutation-invariant features for different point cloud tasks (e.g., completion, semantic segmentation, classification). However, their application to learning SSM from point clouds is to date unexplored. In this work, we demonstrate that existing point cloud encoder-decoder-based completion networks can provide an untapped potential for SSM, capturing population-level statistical representations of shapes while reducing the inference burden and relaxing the input requirement. We discuss the limitations of these techniques for the SSM application and suggest future improvements. Our work paves the way for further exploration of point cloud deep learning for SSM, a promising avenue for advancing the shape analysis literature and broadening SSM to diverse use cases.
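For context, the traditional correspondence-based SSM that point cloud networks are being asked to replicate is typically a PCA over corresponding landmark coordinates across the population. A minimal sketch with synthetic shapes (the toy data and shapes are invented for illustration):

```python
import numpy as np

def build_ssm(shapes):
    """PCA statistical shape model from corresponded landmarks.
    shapes: (n_subjects, n_points * dim) flattened coordinates."""
    mean = shapes.mean(axis=0)
    X = shapes - mean
    # right singular vectors of the centered data = modes of variation
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    var = S**2 / (len(shapes) - 1)  # variance captured by each mode
    return mean, Vt, var

rng = np.random.default_rng(0)
# toy population: the 4 corners of a unit square, jittered per subject
base = np.array([0., 0., 1., 0., 1., 1., 0., 1.])
shapes = np.stack([base + rng.normal(0, 0.01, 8) for _ in range(20)])
mean, modes, var = build_ssm(shapes)
assert mean.shape == (8,) and var[0] >= var[-1]  # modes sorted by variance
```

New shapes are then expressed as the mean plus a low-dimensional combination of the leading modes, which is the compact representation the paper asks completion networks to learn directly from unordered points.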

doi:10.1007/978-3-031-43907-0_47. Medical Image Computing and Computer-Assisted Intervention: MICCAI 2023, vol. 14220, pp. 486-496. Published 2023-10-01. Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11534086/pdf/
Citations: 0
Fully Bayesian VIB-DeepSSM.
Jadie Adams, Shireen Y Elhabian

Statistical shape modeling (SSM) enables population-based quantitative analysis of anatomical shapes, informing clinical diagnosis. Deep learning approaches predict correspondence-based SSM directly from unsegmented 3D images but require calibrated uncertainty quantification, motivating Bayesian formulations. Variational information bottleneck DeepSSM (VIB-DeepSSM) is an effective, principled framework for predicting probabilistic shapes of anatomy from images with aleatoric uncertainty quantification. However, VIB is only half-Bayesian and lacks epistemic uncertainty inference. We derive a fully Bayesian VIB formulation and demonstrate the efficacy of two scalable implementation approaches: concrete dropout and batch ensemble. Additionally, we introduce a novel combination of the two that further enhances uncertainty calibration via multimodal marginalization. Experiments on synthetic shapes and left atrium data demonstrate that the fully Bayesian VIB network predicts SSM from images with improved uncertainty reasoning without sacrificing accuracy.
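The epistemic uncertainty the abstract contrasts with aleatoric uncertainty is commonly estimated by sampling stochastic forward passes. A minimal NumPy sketch of Monte Carlo dropout on a toy linear model (the model, dropout rate, and shapes are illustrative assumptions; the paper uses concrete dropout and batch ensembles inside a deep network):

```python
import numpy as np

def mc_dropout_predict(x, w, rng, p=0.5, n_samples=500):
    """Monte Carlo dropout: keep dropout active at test time, run many
    stochastic forward passes, and read the spread across samples as a
    proxy for epistemic uncertainty."""
    preds = []
    for _ in range(n_samples):
        mask = rng.random(w.shape) > p          # random dropout mask on weights
        preds.append(x @ (w * mask) / (1 - p))  # inverted-dropout scaling
    preds = np.array(preds)
    return preds.mean(axis=0), preds.std(axis=0)  # prediction, epistemic std

rng = np.random.default_rng(1)
w = rng.standard_normal((4, 2))
x = rng.standard_normal(4)
mean, std = mc_dropout_predict(x, w, rng)
```

A nonzero `std` across passes signals model (epistemic) uncertainty; averaging over multiple such stochastic models is the kind of multimodal marginalization the paper combines with batch ensembles to improve calibration.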

doi:10.1007/978-3-031-43898-1_34. Medical Image Computing and Computer-Assisted Intervention: MICCAI 2023, vol. 14222, pp. 346-356. Published 2023-10-01. Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11536909/pdf/
Citations: 0
Generating Realistic Brain MRIs via a Conditional Diffusion Probabilistic Model.
Wei Peng, Ehsan Adeli, Tomas Bosschieter, Sang Hyun Park, Qingyu Zhao, Kilian M Pohl

As acquiring MRIs is expensive, neuroscience studies struggle to attain a sufficient number of them for properly training deep learning models. This challenge could be reduced by MRI synthesis, for which Generative Adversarial Networks (GANs) are popular. GANs, however, are commonly unstable and struggle with creating diverse and high-quality data. A more stable alternative is Diffusion Probabilistic Models (DPMs) with a fine-grained training strategy. To overcome their need for extensive computational resources, we propose a conditional DPM (cDPM) with a memory-efficient process that generates realistic-looking brain MRIs. To this end, we train a 2D cDPM to generate an MRI subvolume conditioned on another subset of slices from the same MRI. By generating slices using arbitrary combinations between condition and target slices, the model only requires limited computational resources to learn interdependencies between slices even if they are spatially far apart. After having learned these dependencies via an attention network, a new anatomy-consistent 3D brain MRI is generated by repeatedly applying the cDPM. Our experiments demonstrate that our method can generate high-quality 3D MRIs that share a similar distribution to real MRIs while still diversifying the training set. The code is available at https://github.com/xiaoiker/mask3DMRI_diffusion and also will be released as part of MONAI, at https://github.com/Project-MONAI/GenerativeModels.
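The cDPM builds on the standard DPM forward process, which corrupts a clean sample x0 into x_t in closed form: x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps. A NumPy sketch of this generic diffusion step (the linear beta schedule and 64x64 slice size are illustrative assumptions, not the paper's configuration):

```python
import numpy as np

def forward_diffuse(x0, t, betas, rng):
    """Closed-form DDPM forward step q(x_t | x_0):
    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps,
    with eps ~ N(0, I) and alpha_bar_t the cumulative product of (1 - beta)."""
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)[t]
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 1000)    # common linear noise schedule
slice2d = rng.standard_normal((64, 64))  # stand-in for one target MRI slice
noisy = forward_diffuse(slice2d, t=999, betas=betas, rng=rng)  # near-pure noise
```

The network is trained to invert this process; the paper's contribution is to condition that denoiser on a subset of slices from the same volume so that repeated application yields an anatomy-consistent 3D MRI at modest memory cost.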

doi:10.1007/978-3-031-43993-3_2. Medical Image Computing and Computer-Assisted Intervention: MICCAI 2023, vol. 14227, pp. 14-24. Published 2023-10-01. Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10758344/pdf/
Citations: 0
Learning Expected Appearances for Intraoperative Registration during Neurosurgery.
Nazim Haouchine, Reuben Dorent, Parikshit Juvekar, Erickson Torio, William M Wells, Tina Kapur, Alexandra J Golby, Sarah Frisken

We present a novel method for intraoperative patient-to-image registration by learning Expected Appearances. Our method uses preoperative imaging to synthesize patient-specific expected views through a surgical microscope for a predicted range of transformations. It estimates the camera pose by minimizing the dissimilarity between the intraoperative 2D view through the optical microscope and the synthesized expected texture. In contrast to conventional methods, our approach transfers the processing tasks to the preoperative stage, thereby reducing the impact of the low-resolution, distorted, and noisy intraoperative images that often degrade registration accuracy. We applied our method in the context of neuronavigation during brain surgery and evaluated it on synthetic data and on retrospective data from 6 clinical cases. Our method outperformed state-of-the-art methods and achieved accuracies that met current clinical standards.
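The core idea of matching a live view against pre-synthesized expected appearances can be illustrated with a discrete nearest-appearance search (a deliberately simplified NumPy sketch: the pose grid, sum-of-squared-differences metric, and random stand-in renders are assumptions for illustration; the paper estimates pose from learned expected appearances rather than by exhaustive search):

```python
import numpy as np

def register_by_expected_appearance(live_view, expected_views, poses):
    """Pick the candidate pose whose pre-synthesized expected view is most
    similar to the live 2D view, using sum of squared differences as the
    dissimilarity measure."""
    ssd = [np.sum((live_view - v) ** 2) for v in expected_views]
    return poses[int(np.argmin(ssd))]

rng = np.random.default_rng(2)
poses = [(0.0, 0.0), (5.0, 0.0), (0.0, 5.0)]  # hypothetical (rx, ry) in degrees
expected_views = [rng.standard_normal((32, 32)) for _ in poses]  # stand-in renders
live = expected_views[1] + 0.05 * rng.standard_normal((32, 32))  # noisy view near pose 1
best_pose = register_by_expected_appearance(live, expected_views, poses)
```

Because the expected views are synthesized preoperatively, the intraoperative step reduces to a cheap comparison, which is the shift of work to the preoperative stage that the abstract describes.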

doi:10.1007/978-3-031-43996-4_22. Medical Image Computing and Computer-Assisted Intervention: MICCAI 2023, vol. 14228, pp. 227-237. Published 2023-10-01. Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10870253/pdf/
Citations: 0
Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention