
Latest articles in Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention

Cross-Slice Attention and Evidential Critical Loss for Uncertainty-Aware Prostate Cancer Detection.
Alex Ling Yu Hung, Haoxin Zheng, Kai Zhao, Kaifeng Pang, Demetri Terzopoulos, Kyunghyun Sung

Current deep learning-based models typically analyze medical images in either 2D or 3D, thereby either disregarding volumetric information or suffering sub-optimal performance due to the anisotropic resolution of MR data. Furthermore, an accurate uncertainty estimate is beneficial to clinicians, as it indicates how confident a model is in its prediction. We propose a novel 2.5D cross-slice attention model that utilizes both global and local information, along with an evidential critical loss, to perform evidential deep learning for the detection in MR images of prostate cancer, one of the most common cancers and a leading cause of cancer-related death in men. We perform extensive experiments with our model on two different datasets and achieve state-of-the-art performance in prostate cancer detection along with improved epistemic uncertainty estimation. The implementation of the model is available at https://github.com/aL3x-O-o-Hung/GLCSA_ECLoss.
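The evidential formulation can be illustrated with a short sketch: the network outputs non-negative per-class evidence, which parameterizes a Dirichlet distribution whose total strength yields an epistemic uncertainty estimate. This is a minimal NumPy illustration of the standard evidential deep learning recipe, not the authors' implementation; the function name is hypothetical.

```python
import numpy as np

def evidential_uncertainty(evidence):
    # Map per-class evidence (non-negative network outputs) to belief
    # masses and epistemic uncertainty via a Dirichlet distribution.
    evidence = np.asarray(evidence, dtype=float)
    K = evidence.shape[-1]                     # number of classes
    alpha = evidence + 1.0                     # Dirichlet parameters
    S = alpha.sum(axis=-1, keepdims=True)      # Dirichlet strength
    belief = evidence / S                      # per-class belief mass
    uncertainty = K / S.squeeze(-1)            # epistemic uncertainty in (0, 1]
    prob = alpha / S                           # expected class probabilities
    return belief, uncertainty, prob

# Strong evidence for class 0 -> low uncertainty; no evidence -> uncertainty 1.
b, u, p = evidential_uncertainty([18.0, 0.0])
```

By construction the belief masses and the uncertainty sum to one, so a sample with little evidence for any class is flagged as uncertain rather than forced into a confident prediction.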

DOI: 10.1007/978-3-031-72111-3_11 | Vol. 15008, pp. 113-123 | Published 2024-10-01 | Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11646698/pdf/
Cited by: 0
Intraoperative Registration by Cross-Modal Inverse Neural Rendering.
Maximilian Fehrentz, Mohammad Farid Azampour, Reuben Dorent, Hassan Rasheed, Colin Galvin, Alexandra Golby, William M Wells, Sarah Frisken, Nassir Navab, Nazim Haouchine

In this paper, we present a novel approach for 3D/2D intraoperative registration during neurosurgery via cross-modal inverse neural rendering. Our approach separates the implicit neural representation into two components: anatomical structure, handled preoperatively, and appearance, handled intraoperatively. This disentanglement is achieved by controlling a Neural Radiance Field's appearance with a multi-style hypernetwork. Once trained, the implicit neural representation serves as a differentiable rendering engine, which can be used to estimate the surgical camera pose by minimizing the dissimilarity between its rendered images and the target intraoperative image. We tested our method on retrospective patient data from clinical cases, showing that it outperforms the state of the art while meeting current clinical standards for registration. Code and additional resources can be found at https://maxfehrentz.github.io/style-ngp/.
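The pose estimation step described above, minimizing the dissimilarity between rendered and observed images, can be sketched with a toy 1-D "renderer" and finite-difference gradient descent. This is a stand-in for the paper's NeRF-based differentiable renderer; the rendering model, names, and hyperparameters are invented for illustration.

```python
import numpy as np

def toy_render(pose, width=64):
    # Toy stand-in for a differentiable rendering engine: a 1-D "image"
    # whose Gaussian bright spot location depends on a scalar camera pose.
    x = np.arange(width)
    return np.exp(-0.5 * ((x - pose) / 3.0) ** 2)

def estimate_pose(target, pose0, lr=1.0, steps=100, eps=1e-3):
    # Refine the pose by minimizing the L2 dissimilarity between the
    # rendered image and the target, using finite-difference gradients
    # (a real pipeline would backpropagate through the renderer).
    pose = pose0
    for _ in range(steps):
        loss = lambda p: np.sum((toy_render(p) - target) ** 2)
        grad = (loss(pose + eps) - loss(pose - eps)) / (2 * eps)
        pose -= lr * grad
    return pose

target = toy_render(40.0)                       # intraoperative "observation"
pose_hat = estimate_pose(target, pose0=35.0)    # converges near 40.0
```

The same minimize-render-dissimilarity loop generalizes to full 6-DoF poses once the renderer is differentiable with respect to the camera parameters.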

DOI: 10.1007/978-3-031-72089-5_30 | Vol. 15006, pp. 317-327 | Published 2024-10-01 | Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12714352/pdf/
Cited by: 0
Conditional Diffusion Model with Spatial Attention and Latent Embedding for Medical Image Segmentation.
Behzad Hejrati, Soumyanil Banerjee, Carri Glide-Hurst, Ming Dong

Diffusion models have been used extensively for high-quality image and video generation tasks. In this paper, we propose a novel conditional diffusion model with spatial attention and latent embedding (cDAL) for medical image segmentation. In cDAL, a convolutional neural network (CNN) based discriminator is used at every time-step of the diffusion process to distinguish between generated labels and real ones. A spatial attention map is computed from the features learned by the discriminator to help cDAL generate more accurate segmentations of discriminative regions in an input image. Additionally, we incorporate a random latent embedding into each layer of our model to significantly reduce the number of training and sampling time-steps, making it much faster than other diffusion models for image segmentation. We applied cDAL to three publicly available medical image segmentation datasets (MoNuSeg, Chest X-ray, and Hippocampus) and observed significant qualitative and quantitative improvements, with higher Dice scores and mIoU than state-of-the-art algorithms. The source code is publicly available at https://github.com/Hejrati/cDAL/.
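The discriminator-derived spatial attention can be sketched as follows: collapse the discriminator's feature maps to a single map, squash it into (0, 1), and use it to re-weight features spatially. This is a minimal NumPy sketch under assumed shapes, not the cDAL code; the function names are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def spatial_attention(disc_features, image_features):
    # disc_features: (C_d, H, W) features from the CNN discriminator.
    # image_features: (C, H, W) features of the segmentation branch.
    # Collapse discriminator channels to one (H, W) map, squash to (0, 1),
    # and broadcast it over the segmentation channels as a spatial weight.
    attn = sigmoid(disc_features.mean(axis=0))      # (H, W) attention map
    return image_features * attn[None, :, :], attn

rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 16, 16))   # segmentation features (C, H, W)
disc = rng.normal(size=(4, 16, 16))    # discriminator features (C_d, H, W)
out, attn = spatial_attention(disc, feats)
```

Spatial positions where the discriminator responds strongly are amplified, which is the intuition behind letting the discriminator guide the denoiser toward discriminative regions.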

DOI: 10.1007/978-3-031-72114-4_20 | Vol. 15009, pp. 202-212 | Published 2024-10-01 | Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11974562/pdf/
Cited by: 0
Longitudinally Consistent Individualized Prediction of Infant Cortical Morphological Development.
Xinrui Yuan, Jiale Cheng, Dan Hu, Zhengwang Wu, Li Wang, Weili Lin, Gang Li

Neurodevelopment is exceptionally dynamic and critical during infancy, as many neurodevelopmental disorders emerge from abnormal brain development during this stage. Obtaining a full trajectory of neurodevelopment from existing incomplete longitudinal data can enrich our limited understanding of normal early brain development and help identify neurodevelopmental disorders. Although many regression models and deep learning methods have been proposed for longitudinal prediction from incomplete datasets, they have two major drawbacks. First, regression models suffer from strict requirements on input and output time points, which limits their use in practical scenarios. Second, although existing deep learning methods can predict cortical development at multiple ages, they predict missing data independently from each available scan, yielding inconsistent predictions for a target time point given multiple inputs; this ignores longitudinal dependencies and introduces ambiguity in practical applications. To this end, we emphasize temporal consistency and develop a novel, flexible framework, a longitudinally consistent triplet disentanglement autoencoder, that predicts an individualized longitudinal cortical developmental trajectory from each available input by encouraging similarity among trajectories with a dynamic time-warping loss. Specifically, to achieve individualized prediction, we employ a surface-based autoencoder, which decomposes the encoded latent features into identity-related and age-related features, supervised by an age estimation task and an identity similarity loss. These identity-related features are further combined with age conditions in the latent space to generate longitudinal developmental trajectories with the decoder. Experiments on predicting longitudinal infant cortical property maps validate the superior longitudinal consistency and accuracy of our results compared to baselines.
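The dynamic time warping distance underlying the trajectory-similarity loss can be sketched in its classic hard-alignment form (a training loss would use a differentiable, soft variant; this NumPy function is illustrative only):

```python
import numpy as np

def dtw_distance(a, b):
    # Dynamic time warping between two 1-D trajectories: the minimal
    # cumulative |a_i - b_j| cost over all monotone alignments,
    # computed by the standard O(n*m) dynamic program.
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return D[n, m]

# Trajectories that differ only in pacing have zero DTW distance.
d = dtw_distance([0.0, 1.0, 2.0], [0.0, 1.0, 1.0, 2.0])   # -> 0.0
```

Because DTW tolerates temporal misalignment, it compares the shapes of two predicted developmental trajectories even when they are sampled at different ages.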

DOI: 10.1007/978-3-031-72086-4_42 | Vol. 15005, pp. 447-457 | Published 2024-10-01 | Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12974619/pdf/
Cited by: 0
Weakly Supervised Cerebellar Cortical Surface Parcellation with Self-Visual Representation Learning.
Zhengwang Wu, Jiale Cheng, Fenqiang Zhao, Ya Wang, Yue Sun, Dajiang Zhu, Tianming Liu, Valerie Jewells, Weili Lin, Li Wang, Gang Li

The cerebellum (i.e., the "little brain") plays an important role in motor and balance control, despite its much smaller size and deeper sulci compared to the cerebrum. Previous cerebellum studies mainly relied on conventional volumetric analysis, which ignores the extremely deep and highly convoluted nature of the cerebellar cortex. To better reveal localized functional and structural changes, we propose cortical surface-based analysis of the cerebellar cortex. Specifically, we first reconstruct the cerebellar cortical surfaces to represent and characterize the highly folded cerebellar cortex in a geometrically accurate and topologically correct manner. Then, we propose a novel method to automatically parcellate the cerebellar cortical surface into anatomically meaningful regions with a weakly supervised graph convolutional neural network. Instead of relying on registration or on mapping the cerebellar surface to a sphere, which are either inaccurate or introduce large geometric distortions due to the deep cerebellar sulci, our learning-based model directly operates on the original cerebellar cortical surface by decomposing this challenging task into two steps. First, we learn an effective representation of cerebellar cortical surface patches with a contrastive self-learning framework. Then, we map the learned representations to parcellation labels. We have validated our method using data from the Baby Connectome Project, and the experimental results demonstrate its superior effectiveness and accuracy compared to existing methods.
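The contrastive self-learning step can be sketched with a simplified InfoNCE objective over paired patch embeddings: each anchor must match its own positive (the diagonal of the similarity matrix) against all other positives in the batch. This is a minimal NumPy sketch; the function name and temperature are assumptions, not the paper's code.

```python
import numpy as np

def info_nce_loss(z_anchor, z_positive, temperature=0.1):
    # z_anchor, z_positive: (N, D) embeddings of N matched patch pairs
    # (e.g. two augmented views of the same surface patch).
    za = z_anchor / np.linalg.norm(z_anchor, axis=1, keepdims=True)
    zp = z_positive / np.linalg.norm(z_positive, axis=1, keepdims=True)
    logits = za @ zp.T / temperature                  # (N, N) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))                # diagonal = matching pairs
```

When every anchor is closest to its own positive the loss approaches zero; mismatched embeddings drive it toward log N, which is what pushes the encoder to produce discriminative patch representations without labels.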

DOI: 10.1007/978-3-031-43993-3_42 | Vol. 14227, pp. 429-438 | Published 2023-10-01 | Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12030008/pdf/
Cited by: 0
MRIS: A Multi-modal Retrieval Approach for Image Synthesis on Diverse Modalities.
Boqi Chen, Marc Niethammer

Multiple imaging modalities are often used for disease diagnosis, prediction, or population-based analyses. However, not all modalities may be available due to cost, different study designs, or changes in imaging technology. If the differences between the types of imaging are small, data harmonization approaches can be used; for larger changes, direct image synthesis approaches have been explored. In this paper, we develop an approach based on multi-modal metric learning to synthesize images of diverse modalities. We use metric learning via multi-modal image retrieval, resulting in embeddings that can relate images of different modalities. Given a large image database, the learned image embeddings allow us to use k-nearest neighbor (k-NN) regression for image synthesis. Our driving medical problem is knee osteoarthritis (KOA), but the developed method is general given proper image alignment. We test our approach by synthesizing cartilage thickness maps obtained from 3D magnetic resonance (MR) images using 2D radiographs. Our experiments show that the proposed method outperforms direct image synthesis and that the synthesized thickness maps retain information relevant to downstream tasks such as progression prediction and Kellgren-Lawrence grading (KLG). Our results suggest that retrieval approaches can be used to obtain high-quality and meaningful image synthesis results given large image databases.
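Once cross-modal embeddings are learned, the k-NN regression step is simple: retrieve the k database entries nearest to the query embedding and average their paired targets (e.g. cartilage thickness maps). A minimal NumPy sketch using an unweighted mean (a distance-weighted mean is a common variant); names are hypothetical.

```python
import numpy as np

def knn_synthesize(query_embedding, db_embeddings, db_targets, k=3):
    # db_embeddings: (N, D) embeddings of database images.
    # db_targets: (N, ...) paired outputs to synthesize from (here scalars,
    # but the same code works for per-pixel maps via the leading axis).
    d = np.linalg.norm(db_embeddings - query_embedding, axis=1)
    idx = np.argsort(d)[:k]                 # indices of k nearest neighbors
    return db_targets[idx].mean(axis=0)     # average their paired targets

emb = np.array([[0.0], [1.0], [10.0]])
tgt = np.array([0.0, 1.0, 10.0])
out = knn_synthesize(np.array([0.2]), emb, tgt, k=2)   # -> 0.5
```

Because synthesis is a lookup plus averaging, output quality scales with database size and embedding quality rather than with a generator's capacity.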

DOI: 10.1007/978-3-031-43999-5_26 | Vol. 14229, pp. 271-281 | Published 2023-10-01 | Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11378323/pdf/
Cited by: 0
How Does Pruning Impact Long-Tailed Multi-label Medical Image Classifiers?
Gregory Holste, Ziyu Jiang, Ajay Jaiswal, Maria Hanna, Shlomo Minkowitz, Alan C Legasto, Joanna G Escalon, Sharon Steinberger, Mark Bittman, Thomas C Shen, Ying Ding, Ronald M Summers, George Shih, Yifan Peng, Zhangyang Wang

Pruning has emerged as a powerful technique for compressing deep neural networks, reducing memory usage and inference time without significantly affecting overall performance. However, the nuanced ways in which pruning impacts model behavior are not well understood, particularly for long-tailed, multi-label datasets commonly found in clinical settings. This knowledge gap could have dangerous implications when deploying a pruned model for diagnosis, where unexpected model behavior could impact patient well-being. To fill this gap, we perform the first analysis of pruning's effect on neural networks trained to diagnose thorax diseases from chest X-rays (CXRs). On two large CXR datasets, we examine which diseases are most affected by pruning and characterize class "forgettability" based on disease frequency and co-occurrence behavior. Further, we identify individual CXRs where uncompressed and heavily pruned models disagree, known as pruning-identified exemplars (PIEs), and conduct a human reader study to evaluate their unifying qualities. We find that radiologists perceive PIEs as having more label noise, lower image quality, and higher diagnosis difficulty. This work represents a first step toward understanding the impact of pruning on model behavior in deep long-tailed, multi-label medical image classification. All code, model weights, and data access instructions can be found at https://github.com/VITA-Group/PruneCXR.
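Magnitude pruning, the common baseline behind studies like this, can be sketched in a few lines: zero out the smallest-magnitude fraction of weights. This NumPy sketch is illustrative of unstructured magnitude pruning in general and is not tied to the paper's exact pruning setup.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    # Zero out the `sparsity` fraction of weights with the smallest
    # absolute value (global unstructured magnitude pruning).
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)           # number of weights to remove
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]   # k-th smallest magnitude
    mask = np.abs(weights) > threshold
    return weights * mask

w = np.array([[0.1, -0.5], [2.0, -0.05]])
pruned = magnitude_prune(w, 0.5)   # keeps only -0.5 and 2.0
```

The paper's point is that such a mask, while nearly harmless on average metrics, can disproportionately hurt rare classes; the sketch makes it easy to see that pruning decisions depend only on weight magnitude, not on which classes a weight serves.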

DOI: 10.1007/978-3-031-43904-9_64 | Vol. 14224, pp. 663-673 | Published 2023-10-01 | Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10568970/pdf/nihms-1936096.pdf
Cited by: 0
Prediction of Infant Cognitive Development with Cortical Surface-Based Multimodal Learning.
Jiale Cheng, Xin Zhang, Fenqiang Zhao, Zhengwang Wu, Xinrui Yuan, Li Wang, Weili Lin, Gang Li

Exploring the relationship between cognitive ability and infant cortical structural and functional development is critically important for advancing our understanding of early brain development, yet it is very challenging due to the complex and dynamic brain development of early postnatal stages. Conventional approaches typically use either structural MRI or resting-state functional MRI and rely on region-level features or inter-region connectivity features after cortical parcellation to predict cognitive scores. However, these methods have two major issues: 1) spatial information loss, which discards the critical fine-grained spatial patterns containing rich information related to cognitive development; and 2) modality information loss, which ignores the complementary information in, and interaction between, the structural and functional images. To address these issues, we propose a novel framework, the cortical surface-based multimodal learning framework (CSML), to leverage fine-grained multimodal features for cognition development prediction. First, we introduce a fine-grained surface-based data representation to capture spatially detailed structural and functional information. Then, a dual-branch network is proposed to extract discriminative features for each modality and to further capture modality-shared and complementary information with a disentanglement strategy. Finally, an age-guided cognition prediction module is developed based on the prior that cognition develops with age. We validate our method on an infant multimodal MRI dataset with 318 scans. Compared to state-of-the-art methods, our method consistently achieves superior performance, and for the first time suggests crucial regions and features for cognition development hidden in the fine-grained spatial details of cortical structure and function.
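The age-guided prediction idea, using age as a prior that imaging features refine, can be sketched as an age-based baseline plus a learned residual from the features. This is a loose illustration of the concept only; the linear head, slope, and intercept are invented for the example and are not from the paper.

```python
import numpy as np

def age_guided_prediction(features, weights, age,
                          age_slope=2.0, age_intercept=80.0):
    # Age-guided cognition prediction: an age-based prior (cognition
    # tends to develop with age) plus a residual predicted from imaging
    # features. Slope/intercept values are illustrative placeholders.
    prior = age_intercept + age_slope * age       # population-level trend
    residual = float(features @ weights)          # subject-specific correction
    return prior + residual

feats = np.array([1.0, -1.0])    # toy imaging feature vector
w = np.array([0.5, 0.25])        # toy learned head weights
score = age_guided_prediction(feats, w, age=12)   # -> 104.25
```

Anchoring the prediction to an age prior means the feature branch only has to explain individual deviations from the typical developmental trend, which is an easier target than the raw score.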

{"title":"Prediction of Infant Cognitive Development with Cortical Surface-Based Multimodal Learning.","authors":"Jiale Cheng, Xin Zhang, Fenqiang Zhao, Zhengwang Wu, Xinrui Yuan, Li Wang, Weili Lin, Gang Li","doi":"10.1007/978-3-031-43895-0_58","DOIUrl":"10.1007/978-3-031-43895-0_58","url":null,"abstract":"<p><p>Exploring the relationship between the cognitive ability and infant cortical structural and functional development is critically important to advance our understanding of early brain development, which, however, is very challenging due to the complex and dynamic brain development in early postnatal stages. Conventional approaches typically use either the structural MRI or resting-state functional MRI and rely on the region-level features or inter-region connectivity features after cortical parcellation for predicting cognitive scores. However, these methods have two major issues: 1) <i>spatial information loss</i>, which discards the critical fine-grained spatial patterns containing rich information related to cognitive development; 2) <i>modality information loss</i>, which ignores the complementary information and the interaction between the structural and functional images. To address these issues, we unprecedentedly invent a novel framework, namely cortical surface-based multimodal learning framework (CSML), to leverage fine-grained multimodal features for cognition development prediction. First, we introduce the fine-grained surface-based data representation to capture spatially detailed structural and functional information. Then, a dual-branch network is proposed to extract the discriminative features for each modality respectively and further captures the modality-shared and complementary information with a disentanglement strategy. Finally, an age-guided cognition prediction module is developed based on the prior that the cognition develops along with age. We validate our method on an infant multimodal MRI dataset with 318 scans. 
Compared to state-of-the-art methods, our method consistently achieves superior performances, and for the first time suggests crucial regions and features for cognition development hidden in the fine-grained spatial details of cortical structure and function.</p>","PeriodicalId":94280,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"14221 ","pages":"618-627"},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12716870/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145807054","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Democratizing Pathological Image Segmentation with Lay Annotators via Molecular-empowered Learning. 通过分子赋能学习,利用非专业注释器实现病理图像分割的民主化。
Ruining Deng, Yanwei Li, Peize Li, Jiacheng Wang, Lucas W Remedios, Saydolimkhon Agzamkhodjaev, Zuhayr Asad, Quan Liu, Can Cui, Yaohong Wang, Yihan Wang, Yucheng Tang, Haichun Yang, Yuankai Huo

Multi-class cell segmentation in high-resolution Giga-pixel whole slide images (WSI) is critical for various clinical applications. Training such an AI model typically requires labor-intensive, pixel-wise manual annotation from experienced domain experts (e.g., pathologists). Moreover, such annotation is error-prone when differentiating fine-grained cell types (e.g., podocyte and mesangial cells) by the naked eye. In this study, we assess the feasibility of democratizing pathological AI deployment by using only lay annotators (annotators without medical domain knowledge). The contribution of this paper is threefold: (1) we propose a molecular-empowered learning scheme for multi-class cell segmentation using partial labels from lay annotators; (2) the proposed method integrates Giga-pixel-level molecular-morphology cross-modality registration, molecular-informed annotation, and a molecular-oriented segmentation model, achieving significantly superior performance with 3 lay annotators compared with 2 experienced pathologists; (3) a deep corrective learning (learning with imperfect labels) method is proposed to further improve segmentation performance using partially annotated noisy data. In our experiments, the proposed learning method achieved F1 = 0.8496 using molecular-informed annotations from lay annotators, better than conventional morphology-based annotations (F1 = 0.7015) from experienced pathologists. Our method democratizes the development of a deep pathological segmentation model to the lay-annotator level, scaling up the learning process much like a non-medical computer vision task. The official implementation and cell annotations are publicly available at https://github.com/hrlblab/MolecularEL.
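The scheme trains on partial labels from lay annotators, i.e., supervision exists only for the classes an annotator actually marked. A stdlib-only sketch of that general idea (the loss below is a generic masked negative log-likelihood, not the paper's exact corrective-learning objective, and the class names are illustrative): pixels whose ground-truth class was never labeled simply contribute no loss.

```python
import math

def masked_nll(probs, label, labeled_classes):
    """Toy partial-label loss. `probs` is the model's class distribution for
    one pixel, `label` its ground-truth class, and `labeled_classes` the set
    of classes the (lay) annotator actually marked. Supervision applies only
    when the ground truth is among the labeled classes; otherwise the pixel
    is ignored rather than punished."""
    if label not in labeled_classes:
        return 0.0  # no supervision signal for this pixel
    return -math.log(probs[label])

# Pixel predicted as [background, podocyte, mesangial]; the annotator only
# labeled classes {0, 1}, so a mesangial ground truth contributes no loss.
probs = [0.2, 0.7, 0.1]
print(masked_nll(probs, 1, {0, 1}))  # supervised: -log(0.7)
print(masked_nll(probs, 2, {0, 1}))  # unlabeled class: 0.0
```

In a full training loop this mask would be applied per pixel before averaging, so annotators who label only the cell types they can recognize still produce a usable training signal.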

{"title":"Democratizing Pathological Image Segmentation with Lay Annotators via Molecular-empowered Learning.","authors":"Ruining Deng, Yanwei Li, Peize Li, Jiacheng Wang, Lucas W Remedios, Saydolimkhon Agzamkhodjaev, Zuhayr Asad, Quan Liu, Can Cui, Yaohong Wang, Yihan Wang, Yucheng Tang, Haichun Yang, Yuankai Huo","doi":"10.1007/978-3-031-43987-2_48","DOIUrl":"10.1007/978-3-031-43987-2_48","url":null,"abstract":"<p><p>Multi-class cell segmentation in high-resolution Giga-pixel whole slide images (WSI) is critical for various clinical applications. Training such an AI model typically requires labor-intensive pixel-wise manual annotation from experienced domain experts (e.g., pathologists). Moreover, such annotation is error-prone when differentiating fine-grained cell types (e.g., podocyte and mesangial cells) via the naked human eye. In this study, we assess the feasibility of democratizing pathological AI deployment by only using lay annotators (annotators without medical domain knowledge). The contribution of this paper is threefold: (1) We proposed a molecular-empowered learning scheme for multi-class cell segmentation using partial labels from lay annotators; (2) The proposed method integrated Giga-pixel level molecular-morphology cross-modality registration, molecular-informed annotation, and molecular-oriented segmentation model, so as to achieve significantly superior performance via 3 lay annotators as compared with 2 experienced pathologists; (3) A deep corrective learning (learning with imperfect label) method is proposed to further improve the segmentation performance using partially annotated noisy data. From the experimental results, our learning method achieved F1 = 0.8496 using molecular-informed annotations from lay annotators, which is better than conventional morphology-based annotations (F1 = 0.7015) from experienced pathologists. 
Our method democratizes the development of a pathological segmentation deep model to the lay annotator level, which consequently scales up the learning process similar to a non-medical computer vision task. The official implementation and cell annotations are publicly available at https://github.com/hrlblab/MolecularEL.</p>","PeriodicalId":94280,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"14225 ","pages":"497-507"},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10961594/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140290108","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Unified Deep-Learning-Based Framework for Cochlear Implant Electrode Array Localization. 基于深度学习的人工耳蜗植入电极阵列定位统一框架。
Yubo Fan, Jianing Wang, Yiyuan Zhao, Rui Li, Han Liu, Robert F Labadie, Jack H Noble, Benoit M Dawant

Cochlear implants (CIs) are neuroprosthetics that can provide a sense of sound to people with severe-to-profound hearing loss. A CI contains an electrode array (EA) that is threaded into the cochlea during surgery. Recent studies have shown that hearing outcomes are correlated with EA placement. An image-guided cochlear implant programming technique is based on this correlation and utilizes the EA location with respect to the intracochlear anatomy to help audiologists adjust the CI settings to improve hearing. Automated methods to localize the EA in postoperative CT images are of great interest for large-scale studies and for translation into the clinical workflow. In this work, we propose a unified deep-learning-based framework for automated EA localization. It consists of a multi-task network and a series of postprocessing algorithms that localize various types of EAs. An evaluation on a dataset with 27 cadaveric samples shows that its localization error is slightly smaller than that of the state-of-the-art method. Another evaluation, on a large-scale clinical dataset containing 561 cases across two institutions, demonstrates a significant improvement in robustness over the state-of-the-art method. This suggests that the technique could be integrated into the clinical workflow and provide audiologists with information that facilitates implant programming, leading to improved patient care.
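The framework's postprocessing stage turns network outputs into electrode positions. A stdlib-only toy of one common step in such pipelines (thresholding a detection map and taking blob centroids; the paper's actual algorithms and its 3D CT inputs are not reproduced here) can be sketched as:

```python
from collections import deque

def electrode_centroids(prob_map, threshold=0.5):
    """Toy postprocessing: threshold a 2D detection map and return the
    centroid of each 4-connected blob as a candidate electrode position."""
    h, w = len(prob_map), len(prob_map[0])
    seen = [[False] * w for _ in range(h)]
    centroids = []
    for r in range(h):
        for c in range(w):
            if prob_map[r][c] >= threshold and not seen[r][c]:
                # Breadth-first search over one above-threshold blob.
                queue, blob = deque([(r, c)]), []
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    blob.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and not seen[ny][nx]
                                and prob_map[ny][nx] >= threshold):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                ys, xs = zip(*blob)
                centroids.append((sum(ys) / len(blob), sum(xs) / len(blob)))
    return centroids

grid = [
    [0.0, 0.9, 0.0, 0.0],
    [0.0, 0.8, 0.0, 0.7],
    [0.0, 0.0, 0.0, 0.9],
]
print(electrode_centroids(grid))  # two blobs -> two centroids
```

A real EA localizer would additionally order the detected contacts along the array and operate in 3D, but the threshold-then-centroid step illustrates why a robust network heatmap is the hard part and the postprocessing is comparatively simple.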

{"title":"A Unified Deep-Learning-Based Framework for Cochlear Implant Electrode Array Localization.","authors":"Yubo Fan, Jianing Wang, Yiyuan Zhao, Rui Li, Han Liu, Robert F Labadie, Jack H Noble, Benoit M Dawant","doi":"10.1007/978-3-031-43996-4_36","DOIUrl":"10.1007/978-3-031-43996-4_36","url":null,"abstract":"<p><p>Cochlear implants (CIs) are neuroprosthetics that can provide a sense of sound to people with severe-to-profound hearing loss. A CI contains an electrode array (EA) that is threaded into the cochlea during surgery. Recent studies have shown that hearing outcomes are correlated with EA placement. An image-guided cochlear implant programming technique is based on this correlation and utilizes the EA location with respect to the intracochlear anatomy to help audiologists adjust the CI settings to improve hearing. Automated methods to localize EA in postoperative CT images are of great interest for large-scale studies and for translation into the clinical workflow. In this work, we propose a unified deep-learning-based framework for automated EA localization. It consists of a multi-task network and a series of postprocessing algorithms to localize various types of EAs. The evaluation on a dataset with 27 cadaveric samples shows that its localization error is slightly smaller than the state-of-the-art method. Another evaluation on a large-scale clinical dataset containing 561 cases across two institutions demonstrates a significant improvement in robustness compared to the state-of-the-art method. This suggests that this technique could be integrated into the clinical workflow and provide audiologists with information that facilitates the programming of the implant leading to improved patient care.</p>","PeriodicalId":94280,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... 
International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"14228 ","pages":"376-385"},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10976972/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140338426","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0