
Latest publications in Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention

Hallucination Index: An Image Quality Metric for Generative Reconstruction Models.
Matthew Tivnan, Siyeop Yoon, Zhennong Chen, Xiang Li, Dufan Wu, Quanzheng Li

Generative image reconstruction algorithms such as measurement-conditioned diffusion models are increasingly popular in the field of medical imaging. These powerful models can transform low signal-to-noise ratio (SNR) inputs into outputs with the appearance of high SNR. However, the outputs can contain a new type of error called hallucinations. In medical imaging, these hallucinations may not be obvious to a radiologist but could cause diagnostic errors. Generally, hallucination refers to error in the estimation of object structure caused by a machine learning model, but there is no widely accepted method to evaluate hallucination magnitude. In this work, we propose a new image quality metric called the hallucination index. Our approach is to compute the Hellinger distance from the distribution of reconstructed images to a zero-hallucination reference distribution. To evaluate our approach, we conducted a numerical experiment with electron microscopy images, simulated noisy measurements, and applied diffusion-based reconstructions. We sampled the measurements and the generative reconstructions repeatedly to compute the sample mean and covariance. For the zero-hallucination reference, we used the forward diffusion process applied to the ground truth. Our results show that higher measurement SNR leads to a lower hallucination index for the same apparent image quality. We also evaluated the impact of early stopping in the reverse diffusion process and found that more modest denoising strengths can reduce hallucination. We believe this metric could be useful for evaluating generative image reconstructions or as a warning label to inform radiologists about the degree of hallucination in medical images.
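The hallucination index is a Hellinger distance estimated from sample means and covariances. Under a Gaussian approximation of both distributions, that distance has a closed form; a minimal sketch (the function name and the Gaussian assumption are ours, not necessarily the paper's exact estimator):

```python
import numpy as np

def hellinger_gaussian(mu1, cov1, mu2, cov2):
    """Closed-form Hellinger distance between N(mu1, cov1) and N(mu2, cov2).

    H^2 = 1 - (det(S1)^(1/4) det(S2)^(1/4) / det(Sbar)^(1/2))
              * exp(-(1/8) (mu1-mu2)^T Sbar^{-1} (mu1-mu2)),
    with Sbar = (S1 + S2) / 2. Returns H in [0, 1].
    """
    avg = 0.5 * (cov1 + cov2)
    det_term = (np.linalg.det(cov1) ** 0.25 * np.linalg.det(cov2) ** 0.25
                / np.linalg.det(avg) ** 0.5)
    diff = mu1 - mu2
    expo = -0.125 * diff @ np.linalg.solve(avg, diff)
    h2 = 1.0 - det_term * np.exp(expo)
    return np.sqrt(max(h2, 0.0))  # clamp tiny negative round-off
```

In practice the means and covariances would come from repeated sampling of the generative reconstructions and of the zero-hallucination reference, as described above.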

MICCAI proceedings, vol. 15010, pp. 449–458. Published 2024-10-01. DOI: 10.1007/978-3-031-72117-5_42. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11956116/pdf/
Citations: 0
An approach to building foundation models for brain image analysis.
Davood Karimi

Existing machine learning methods for brain image analysis are mostly based on supervised training. They require large labeled datasets, which can be costly or impossible to obtain. Moreover, the trained models are useful only for the narrow task defined by the labels. In this work, we developed a new method, based on the concept of foundation models, to overcome these limitations. Our model is an attention-based neural network that is trained using a novel self-supervised approach. Specifically, the model is trained to generate brain images in a patch-wise manner, thereby learning the brain structure. To facilitate learning of image details, we propose a new method that encodes high-frequency information using convolutional kernels with random weights. We trained our model on a pool of 10 public datasets. We then applied the model on five independent datasets to perform segmentation, lesion detection, denoising, and brain age estimation. Results showed that the foundation model achieved competitive or better results on all tasks, while significantly reducing the required amount of labeled training data. Our method enables leveraging large unlabeled neuroimaging datasets to effectively address diverse brain image analysis tasks and reduce the time and cost requirements of acquiring labels.
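The high-frequency encoding step can be sketched as fixed, untrained convolutional kernels applied to an image patch; subtracting each kernel's mean makes it zero-sum, so constant regions map to zero and only fine detail produces a response. A hypothetical illustration (kernel count, size, and function name are our assumptions):

```python
import numpy as np

def random_conv_features(patch, n_kernels=8, k=3, seed=0):
    """Encode a 2D patch with fixed random (untrained) convolutional kernels.

    Each kernel is made zero-mean, so it acts as a crude high-pass filter:
    flat regions yield zero response, edges and texture yield large responses.
    """
    rng = np.random.default_rng(seed)
    kernels = rng.standard_normal((n_kernels, k, k))
    kernels -= kernels.mean(axis=(1, 2), keepdims=True)  # zero-sum kernels
    H, W = patch.shape
    out = np.zeros((n_kernels, H - k + 1, W - k + 1))
    for i in range(n_kernels):                # valid (no-padding) convolution
        for r in range(out.shape[1]):
            for c in range(out.shape[2]):
                out[i, r, c] = np.sum(patch[r:r + k, c:c + k] * kernels[i])
    return out
```

A real model would implement this with a framework's conv op for speed; the loop form here just makes the arithmetic explicit.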

MICCAI proceedings, vol. 15012, pp. 421–431. Published 2024-10-01. DOI: 10.1007/978-3-031-72390-2_40. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12033034/pdf/
Citations: 0
Vessel-aware aneurysm detection using multi-scale deformable 3D attention.
Alberto M Ceballos-Arroyo, Hieu T Nguyen, Fangrui Zhu, Shrikanth M Yadav, Jisoo Kim, Lei Qin, Geoffrey Young, Huaizu Jiang

Manual detection of intracranial aneurysms (IAs) in computed tomography (CT) scans is a complex, time-consuming task even for expert clinicians, and automating the process is no less challenging. Critical difficulties associated with detecting aneurysms include their small (yet varied) size relative to the scans and a high potential for false positive (FP) predictions. To address these issues, we propose a 3D, multi-scale neural architecture that detects aneurysms via a deformable attention mechanism operating on vessel distance maps derived from vessel segmentations and on 3D features extracted from the layers of a convolutional network. Likewise, we reformulate aneurysm segmentation as bounding-cuboid prediction using binary cross-entropy and three localization losses (location, size, IoU). On three validation sets comprising 152/138/38 CT scans and containing 126/101/58 aneurysms, we achieved sensitivities of 91.3%/97.0%/74.1% at FP rates of 0.53/0.56/0.87, with sensitivity around 80% on small aneurysms. Manual inspection of outputs by experts showed that our model tends to miss only aneurysms located in unusual locations. Code and model weights are available online.
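The IoU localization loss above needs the volumetric overlap of two predicted bounding cuboids. A minimal sketch of 3D IoU for axis-aligned cuboids (the (x1, y1, z1, x2, y2, z2) box encoding is our assumption, not necessarily the paper's):

```python
def cuboid_iou(a, b):
    """Intersection-over-union of two axis-aligned 3D boxes.

    Boxes are (x1, y1, z1, x2, y2, z2) with min corner first.
    """
    inter = 1.0
    for i in range(3):                       # overlap along each axis
        lo = max(a[i], b[i])
        hi = min(a[i + 3], b[i + 3])
        if hi <= lo:
            return 0.0                       # disjoint along this axis
        inter *= hi - lo

    def vol(c):
        return (c[3] - c[0]) * (c[4] - c[1]) * (c[5] - c[2])

    return inter / (vol(a) + vol(b) - inter)
```

An IoU-based loss term would then be something like `1 - cuboid_iou(pred, target)`, driving predicted cuboids toward the annotated ones.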

MICCAI proceedings, vol. 15005, pp. 754–765. Published 2024-10-01. DOI: 10.1007/978-3-031-72086-4_71. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11986933/pdf/
Citations: 0
FastSAM3D: An Efficient Segment Anything Model for 3D Volumetric Medical Images.
Yiqing Shen, Jingxing Li, Xinyuan Shao, Blanca Inigo Romillo, Ankush Jindal, David Dreizin, Mathias Unberath

Segment anything models (SAMs) are gaining attention for their zero-shot generalization capability in segmenting objects of unseen classes and in unseen domains when properly prompted. Interactivity is a key strength of SAMs, allowing users to iteratively provide prompts that specify objects of interest to refine outputs. However, to realize the interactive use of SAMs for 3D medical imaging tasks, rapid inference times are necessary. High memory requirements and long processing delays remain constraints that hinder the adoption of SAMs for this purpose. Specifically, while 2D SAMs applied to 3D volumes contend with repetitive computation to process all slices independently, 3D SAMs suffer from an exponential increase in model parameters and FLOPS. To address these challenges, we present FastSAM3D, which accelerates SAM inference to 8 milliseconds per 128 × 128 × 128 3D volumetric image on an NVIDIA A100 GPU. This speedup is accomplished through 1) a novel layer-wise progressive distillation scheme that enables knowledge transfer from a complex 12-layer ViT-B to a lightweight 6-layer ViT-Tiny variant encoder without training from scratch; and 2) a novel 3D sparse flash attention mechanism that replaces vanilla attention operators, substantially reducing memory needs and improving parallelization. Experiments on three diverse datasets reveal that FastSAM3D achieves a remarkable speedup of 527.38× compared to 2D SAMs and 8.75× compared to 3D SAMs on the same volumes without significant performance decline. Thus, FastSAM3D opens the door for low-cost, truly interactive SAM-based 3D medical imaging segmentation with commonly used GPU hardware. Code is available at https://github.com/arcadelab/FastSAM3D.
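Layer-wise progressive distillation can be illustrated as matching selected teacher-layer activations to student layers, e.g. teacher layer 2i+1 supervising student layer i when halving 12 layers to 6. A hypothetical NumPy sketch; the layer mapping and the plain MSE objective are our assumptions, not the paper's exact scheme:

```python
import numpy as np

def layerwise_distill_loss(teacher_feats, student_feats, mapping):
    """Mean-squared error between matched intermediate feature maps.

    teacher_feats / student_feats: lists of same-shaped arrays per layer.
    mapping: student layer index -> teacher layer index (e.g. i -> 2*i + 1
    when distilling a 12-layer teacher into a 6-layer student).
    """
    loss = 0.0
    for s_idx, t_idx in mapping.items():
        loss += np.mean((student_feats[s_idx] - teacher_feats[t_idx]) ** 2)
    return loss / len(mapping)
```

Minimizing this alongside the task loss lets the small encoder inherit the teacher's intermediate representations instead of learning them from scratch.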

MICCAI proceedings, vol. 15012, pp. 542–552. Published 2024-10-01. DOI: 10.1007/978-3-031-72390-2_51. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12377522/pdf/
Citations: 0
Enhancing Whole Slide Image Classification with Discriminative and Contrastive Learning.
Peixian Liang, Hao Zheng, Hongming Li, Yuxin Gong, Spyridon Bakas, Yong Fan

Whole slide image (WSI) classification plays a crucial role in digital pathology data analysis. However, the immense size of WSIs and the absence of fine-grained sub-region labels pose significant challenges for accurate WSI classification. Typical classification-driven deep learning methods often struggle to generate informative image representations, which can compromise the robustness of WSI classification. In this study, we address this challenge by incorporating both discriminative and contrastive learning techniques for WSI classification. Unlike existing contrastive learning methods for WSI classification, which primarily rely on pseudo labels assigned to patches based on the WSI-level labels, our approach directly constructs positive and negative samples at the WSI level. Specifically, we select a subset of representative image patches to represent WSIs and create positive and negative samples at the WSI level, facilitating effective learning of informative image features. Experimental results on two datasets and ablation studies demonstrate that our method significantly improves WSI classification performance compared to state-of-the-art deep learning methods and enables learning of informative features that promote the robustness of WSI classification.
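WSI-level contrastive learning of this kind typically scores a WSI embedding against one positive and several negative WSI embeddings. A minimal InfoNCE-style sketch (the temperature and L2 normalization are common defaults we assume, not necessarily the paper's exact loss):

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.1):
    """InfoNCE-style contrastive loss on L2-normalised embeddings.

    The anchor should score high against its positive and low against
    every negative; the loss is the softmax cross-entropy of that ranking.
    """
    def unit(v):
        return v / np.linalg.norm(v)

    a = unit(anchor)
    sims = np.array([a @ unit(positive)] + [a @ unit(n) for n in negatives])
    logits = sims / tau
    logits -= logits.max()                       # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])                     # index 0 is the positive
```

Here each embedding would be a pooled representation of a WSI's selected patches; pulling positives together and pushing negatives apart is what yields the informative features described above.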

MICCAI proceedings, vol. 15004, pp. 102–112. Published 2024-10-01. DOI: 10.1007/978-3-031-72083-3_10. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11877581/pdf/
Citations: 0
HAMIL-QA: Hierarchical Approach to Multiple Instance Learning for Atrial LGE MRI Quality Assessment.
K M Arefeen Sultan, Md Hasibul Husain Hisham, Benjamin Orkild, Alan Morris, Eugene Kholmovski, Erik Bieging, Eugene Kwan, Ravi Ranjan, Ed DiBella, Shireen Elhabian

The accurate evaluation of left atrial fibrosis via high-quality 3D Late Gadolinium Enhancement (LGE) MRI is crucial for atrial fibrillation management but is hindered by factors like patient movement and imaging variability. The pursuit of automated LGE MRI quality assessment is critical for enhancing diagnostic accuracy, standardizing evaluations, and improving patient outcomes. The deep learning models aimed at automating this process face significant challenges due to the scarcity of expert annotations, high computational costs, and the need to capture subtle diagnostic details in highly variable images. This study introduces HAMIL-QA, a multiple instance learning (MIL) framework, designed to overcome these obstacles. HAMIL-QA employs a hierarchical bag and sub-bag structure that allows for targeted analysis within sub-bags and aggregates insights at the volume level. This hierarchical MIL approach reduces reliance on extensive annotations, lessens computational load, and ensures clinically relevant quality predictions by focusing on diagnostically critical image features. Our experiments show that HAMIL-QA surpasses existing MIL methods and traditional supervised approaches in accuracy, AUROC, and F1-Score on an LGE MRI scan dataset, demonstrating its potential as a scalable solution for LGE MRI quality assessment automation. The code is available at: https://github.com/arf111/HAMIL-QA.
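The bag/sub-bag structure can be sketched as two stacked attention-pooling steps: instances are pooled into sub-bag embeddings, which are then pooled into a volume-level embedding. A toy sketch with linear attention scoring (the scoring form is our simplification of a learned attention module, not HAMIL-QA's exact architecture):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def hierarchical_mil(sub_bags, w_inst, w_sub):
    """Two-level attention pooling for hierarchical MIL.

    sub_bags: list of (n_i, d) instance-embedding arrays, one per sub-bag.
    w_inst, w_sub: (d,) linear scoring vectors (stand-ins for learned
    attention networks). Returns a single (d,) volume-level embedding.
    """
    sub_embs = []
    for inst in sub_bags:
        att = softmax(inst @ w_inst)   # attention over instances
        sub_embs.append(att @ inst)    # weighted sum -> sub-bag embedding
    sub_embs = np.stack(sub_embs)
    att = softmax(sub_embs @ w_sub)    # attention over sub-bags
    return att @ sub_embs              # volume-level embedding
```

A classifier head on the returned embedding would then produce the volume-level quality prediction, so only volume-level labels are needed for training.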

MICCAI proceedings, vol. 15001, pp. 275–284. Published 2024-10-01. DOI: 10.1007/978-3-031-72378-0_26. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12745976/pdf/
Citations: 0
Cross-Slice Attention and Evidential Critical Loss for Uncertainty-Aware Prostate Cancer Detection.
Alex Ling Yu Hung, Haoxin Zheng, Kai Zhao, Kaifeng Pang, Demetri Terzopoulos, Kyunghyun Sung

Current deep learning-based models typically analyze medical images either in 2D, disregarding volumetric information, or in 3D, suffering sub-optimal performance due to the anisotropic resolution of MR data. Furthermore, providing an accurate uncertainty estimate is beneficial to clinicians, as it indicates how confident a model is about its prediction. We propose a novel 2.5D cross-slice attention model that utilizes both global and local information, along with an evidential critical loss, to perform evidential deep learning for the detection in MR images of prostate cancer, one of the most common cancers and a leading cause of cancer-related death in men. We perform extensive experiments with our model on two different datasets and achieve state-of-the-art performance in prostate cancer detection along with improved epistemic uncertainty estimation. The implementation of the model is available at https://github.com/aL3x-O-o-Hung/GLCSA_ECLoss.
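In evidential deep learning, the network outputs non-negative per-class evidence that parameterizes a Dirichlet distribution, and epistemic uncertainty falls as total evidence grows. A minimal sketch of the standard subjective-logic readout (not the paper's full evidential critical loss):

```python
import numpy as np

def evidential_prediction(evidence):
    """Subjective-logic readout of per-class evidence.

    Dirichlet parameters alpha = evidence + 1; expected class probabilities
    are alpha / S with S = sum(alpha); epistemic uncertainty is u = K / S,
    which is 1 with no evidence and shrinks as evidence accumulates.
    """
    alpha = np.asarray(evidence, dtype=float) + 1.0
    S = alpha.sum()
    probs = alpha / S
    uncertainty = len(alpha) / S
    return probs, uncertainty
```

With zero evidence the model reports a uniform prediction at maximum uncertainty; strong one-sided evidence gives a confident prediction with low uncertainty, which is the per-voxel signal a clinician could inspect.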

MICCAI proceedings, vol. 15008, pp. 113–123. Published 2024-10-01. DOI: 10.1007/978-3-031-72111-3_11. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11646698/pdf/
Citations: 0
Conditional Diffusion Model with Spatial Attention and Latent Embedding for Medical Image Segmentation.
Behzad Hejrati, Soumyanil Banerjee, Carri Glide-Hurst, Ming Dong

Diffusion models have been used extensively for high-quality image and video generation tasks. In this paper, we propose a novel conditional diffusion model with spatial attention and latent embedding (cDAL) for medical image segmentation. In cDAL, a convolutional neural network (CNN) based discriminator is used at every time-step of the diffusion process to distinguish between the generated labels and the real ones. A spatial attention map is computed from the features learned by the discriminator to help cDAL generate more accurate segmentations of discriminative regions in an input image. Additionally, we incorporated a random latent embedding into each layer of our model to significantly reduce the number of training and sampling time-steps, thereby making it much faster than other diffusion models for image segmentation. We applied cDAL to 3 publicly available medical image segmentation datasets (MoNuSeg, Chest X-ray, and Hippocampus) and observed significant qualitative and quantitative improvements over state-of-the-art algorithms, with higher Dice scores and mIoU. The source code is publicly available at https://github.com/Hejrati/cDAL/.
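The discriminator-derived spatial attention can be sketched as collapsing a feature map's channels to a per-pixel score, squashing it to (0, 1), and modulating the generator features with the result. A hypothetical sketch; the channel-mean scoring is our simplification of whatever learned scoring the model uses:

```python
import numpy as np

def spatial_attention(feat):
    """Per-pixel attention map from a (C, H, W) discriminator feature map.

    Channels are averaged into one (H, W) score map, then a sigmoid maps
    scores into (0, 1) so they can act as soft spatial weights.
    """
    score = feat.mean(axis=0)
    return 1.0 / (1.0 + np.exp(-score))

def apply_attention(x, att):
    """Broadcast the (H, W) attention map over the channels of x (C, H, W)."""
    return x * att[None, :, :]
```

Regions the discriminator finds discriminative get weights near 1 and pass through almost unchanged, while uninformative regions are damped, steering the denoiser toward the label boundaries.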

Behzad Hejrati, Soumyanil Banerjee, Carri Glide-Hurst, Ming Dong. "Conditional Diffusion Model with Spatial Attention and Latent Embedding for Medical Image Segmentation." Medical Image Computing and Computer-Assisted Intervention (MICCAI), vol. 15009, pp. 202-212, 2024-10-01. DOI: 10.1007/978-3-031-72114-4_20. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11974562/pdf/
Citations: 0
Intraoperative Registration by Cross-Modal Inverse Neural Rendering.
Maximilian Fehrentz, Mohammad Farid Azampour, Reuben Dorent, Hassan Rasheed, Colin Galvin, Alexandra Golby, William M Wells, Sarah Frisken, Nassir Navab, Nazim Haouchine

In this paper, we present a novel approach for 3D/2D intraoperative registration during neurosurgery via cross-modal inverse neural rendering. Our approach separates the implicit neural representation into two components, handling anatomical structure preoperatively and appearance intraoperatively. This disentanglement is achieved by controlling a Neural Radiance Field's appearance with a multi-style hypernetwork. Once trained, the implicit neural representation serves as a differentiable rendering engine, which can be used to estimate the surgical camera pose by minimizing the dissimilarity between its rendered images and the target intraoperative image. We tested our method on retrospective patient data from clinical cases, showing that it outperforms the state of the art while meeting current clinical standards for registration. Code and additional resources can be found at https://maxfehrentz.github.io/style-ngp/.
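The pose-estimation step above is an optimization loop: render at the current pose, compare to the intraoperative image, and descend the dissimilarity. The toy sketch below replaces the trained NeRF with a hypothetical 2-D Gaussian-blob renderer and autodiff with finite differences, purely to illustrate the loop structure:

```python
import numpy as np

def render(pose, size=32):
    """Hypothetical stand-in for the differentiable rendering engine:
    a 2-D Gaussian blob whose center plays the role of the camera pose."""
    ys, xs = np.mgrid[0:size, 0:size]
    return np.exp(-((xs - pose[0]) ** 2 + (ys - pose[1]) ** 2) / 20.0)

def estimate_pose(target, init_pose, step=0.5, iters=200, eps=1e-3):
    """Minimize the rendered-vs-target MSE over the pose, using
    central finite differences in place of autodiff gradients."""
    pose = np.asarray(init_pose, dtype=float)
    loss = lambda p: np.mean((render(p) - target) ** 2)
    for k in range(iters):
        grad = np.array([
            (loss(pose + eps * np.eye(2)[i]) - loss(pose - eps * np.eye(2)[i]))
            / (2.0 * eps)
            for i in range(2)
        ])
        norm = np.linalg.norm(grad)
        if norm < 1e-12:  # flat landscape: stop
            break
        # normalized-gradient step with slow decay to damp oscillation
        pose = pose - (step / (1.0 + 0.02 * k)) * grad / norm
    return pose
```

With a real NeRF-style renderer, the finite-difference loop would be replaced by backpropagation through the rendering engine, and the pose would be a full 6-DoF camera transform rather than a 2-D translation.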

Maximilian Fehrentz, Mohammad Farid Azampour, Reuben Dorent, Hassan Rasheed, Colin Galvin, Alexandra Golby, William M Wells, Sarah Frisken, Nassir Navab, Nazim Haouchine. "Intraoperative Registration by Cross-Modal Inverse Neural Rendering." Medical Image Computing and Computer-Assisted Intervention (MICCAI), vol. 15006, pp. 317-327, 2024-10-01. DOI: 10.1007/978-3-031-72089-5_30. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12714352/pdf/
Citations: 0
Weakly Supervised Cerebellar Cortical Surface Parcellation with Self-Visual Representation Learning.
Zhengwang Wu, Jiale Cheng, Fenqiang Zhao, Ya Wang, Yue Sun, Dajiang Zhu, Tianming Liu, Valerie Jewells, Weili Lin, Li Wang, Gang Li

The cerebellum (i.e., the "little brain") plays an important role in motor control and balance, despite its much smaller size and deeper sulci compared to the cerebrum. Previous cerebellum studies mainly relied on conventional volumetric analysis, which ignores the extremely deep and highly convoluted nature of the cerebellar cortex. To better reveal localized functional and structural changes, we propose cortical surface-based analysis of the cerebellar cortex. Specifically, we first reconstruct the cerebellar cortical surfaces to represent and characterize the highly folded cerebellar cortex in a geometrically accurate and topologically correct manner. Then, we propose a novel method that automatically parcellates the cerebellar cortical surface into anatomically meaningful regions with a weakly supervised graph convolutional neural network. Instead of relying on registration or on mapping the cerebellar surface to a sphere, approaches that are either inaccurate or introduce large geometric distortions due to the deep cerebellar sulci, our learning-based model operates directly on the original cerebellar cortical surface by decomposing this challenging task into two steps. First, we learn effective representations of cerebellar cortical surface patches with a contrastive self-learning framework. Then, we map the learned representations to parcellation labels. We validated our method on data from the Baby Connectome Project, and the experimental results demonstrate its superior effectiveness and accuracy compared to existing methods.
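A standard instantiation of the contrastive self-learning step above is the SimCLR-style NT-Xent objective over two augmented views of each surface patch; the paper's exact contrastive framework may differ, so treat this as a generic sketch:

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """SimCLR-style NT-Xent loss: z1[i] and z2[i] are embeddings of two
    views of the same patch (each (N, D)); all other rows are negatives."""
    n = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)
    z = z / (np.linalg.norm(z, axis=1, keepdims=True) + 1e-8)
    sim = (z @ z.T) / temperature          # cosine similarities, scaled
    np.fill_diagonal(sim, -np.inf)         # exclude self-similarity
    # the positive for row i is row i+n, and vice versa
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_denom = np.log(np.exp(sim).sum(axis=1))
    return float(np.mean(log_denom - sim[np.arange(2 * n), pos]))
```

Minimizing this loss pulls the two views of each patch together and pushes different patches apart, after which a lightweight classifier can map the frozen representations to parcellation labels.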

Zhengwang Wu, Jiale Cheng, Fenqiang Zhao, Ya Wang, Yue Sun, Dajiang Zhu, Tianming Liu, Valerie Jewells, Weili Lin, Li Wang, Gang Li. "Weakly Supervised Cerebellar Cortical Surface Parcellation with Self-Visual Representation Learning." Medical Image Computing and Computer-Assisted Intervention (MICCAI), vol. 14227, pp. 429-438, 2023-10-01. DOI: 10.1007/978-3-031-43993-3_42. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12030008/pdf/
Citations: 0