
Latest articles in Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention

Hard Negative Sample Mining for Whole Slide Image Classification.
Wentao Huang, Xiaoling Hu, Shahira Abousamra, Prateek Prasanna, Chao Chen

Weakly supervised whole slide image (WSI) classification is challenging due to the lack of patch-level labels and high computational costs. State-of-the-art methods use self-supervised patch-wise feature representations for multiple instance learning (MIL). Recently, methods have been proposed to fine-tune the feature representation on the downstream task using pseudo labeling, but mostly focusing on selecting high-quality positive patches. In this paper, we propose to mine hard negative samples during fine-tuning. This allows us to obtain better feature representations and reduce the training cost. Furthermore, we propose a novel patch-wise ranking loss in MIL to better exploit these hard negative samples. Experiments on two public datasets demonstrate the efficacy of these proposed ideas. Our code is available at https://github.com/winston52/HNM-WSI.
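The core recipe — score negative patches, keep the highest-scoring (hardest) ones, and require positives to outrank them by a margin — can be sketched in a few lines of numpy. This is an illustrative hinge-style ranking loss under generic assumptions, not the authors' exact formulation; the names `mine_hard_negatives` and `ranking_loss` are hypothetical.

```python
import numpy as np

def mine_hard_negatives(scores, labels, k):
    """Pick the k highest-scoring patches among the true negatives.

    A high score on a negative patch means the model is most confused
    there, which is what makes it a 'hard' negative."""
    neg_idx = np.where(labels == 0)[0]
    order = np.argsort(scores[neg_idx])[::-1]  # most confident first
    return neg_idx[order[:k]]

def ranking_loss(pos_scores, hard_neg_scores, margin=0.5):
    """Hinge-style patch ranking loss: every positive patch should
    outscore every hard negative by at least `margin`."""
    diff = margin - (pos_scores[:, None] - hard_neg_scores[None, :])
    return np.maximum(diff, 0.0).mean()
```

With patch scores `[0.9, 0.8, 0.1, 0.7]` and labels `[1, 0, 0, 0]`, the two hardest negatives are the patches at indices 1 and 3; the loss is zero once every positive clears every hard negative by the margin.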

DOI: 10.1007/978-3-031-72083-3_14 | Volume 15004, pp. 144-154 | Published: 2024-10-01 | Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12185924/pdf/
Citations: 0
Tagged-to-Cine MRI Sequence Synthesis via Light Spatial-Temporal Transformer.
Xiaofeng Liu, Fangxu Xing, Zhangxing Bian, Tomas Arias-Vergara, Paula Andrea Pérez-Toro, Andreas Maier, Maureen Stone, Jiachen Zhuo, Jerry L Prince, Jonghye Woo

Tagged magnetic resonance imaging (MRI) has been successfully used to track the motion of internal tissue points within moving organs. Typically, to analyze motion using tagged MRI, cine MRI data in the same coordinate system are acquired, incurring additional time and costs. Consequently, tagged-to-cine MR synthesis holds the potential to reduce the extra acquisition time and costs associated with cine MRI, without disrupting downstream motion analysis tasks. Previous approaches have processed each frame independently, thereby overlooking the fact that complementary information from occluded regions of the tag patterns could be present in neighboring frames exhibiting motion. Furthermore, inconsistent visual appearance across frames, e.g., tag fading, can reduce synthesis performance. To address this, we propose an efficient framework for tagged-to-cine MR sequence synthesis, leveraging both spatial and temporal information with relatively limited data. Specifically, we follow a split-and-integral protocol to balance spatial-temporal modeling efficiency and consistency. The light spatial-temporal transformer (LiST2) is designed to exploit local and global attention in motion sequences with relatively lightweight training parameters. The directional product relative position-time bias is adapted to make the model aware of the spatial-temporal correlation, while the shifted window is used for motion alignment. Then, a recurrent sliding fine-tuning (ReST) scheme is applied to further enhance the temporal consistency. Our framework is evaluated on paired tagged and cine MRI sequences, demonstrating superior performance over comparison methods.
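The split idea — attending over space within each frame, then over time at each spatial location — can be sketched with plain dot-product attention in numpy. This is a toy factorization for intuition only; the paper's LiST2 additionally uses shifted windows and a directional product relative position-time bias, which are omitted here, and all names below are illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    """Scaled dot-product attention over the second-to-last axis."""
    w = softmax(q @ np.swapaxes(k, -1, -2) / np.sqrt(q.shape[-1]))
    return w @ v

def split_attention(x):
    """Factorized spatial-temporal attention for a token sequence
    x of shape (T frames, N spatial tokens, D channels)."""
    x = x + attention(x, x, x)        # spatial attention, batched over T
    xt = np.swapaxes(x, 0, 1)         # (N, T, D)
    xt = xt + attention(xt, xt, xt)   # temporal attention, batched over N
    return np.swapaxes(xt, 0, 1)
```

Two cheap (T, N, N) and (N, T, T) attention maps replace one expensive (T·N, T·N) map, which is the efficiency argument behind factorized designs.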

DOI: 10.1007/978-3-031-72104-5_67 | Volume 15007, pp. 701-711 | Published: 2024-10-01 | Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11517403/pdf/
Citations: 0
CLEFT: Language-Image Contrastive Learning with Efficient Large Language Model and Prompt Fine-Tuning.
Yuexi Du, Brian Chang, Nicha C Dvornek

Recent advancements in Contrastive Language-Image Pre-training (CLIP) [21] have demonstrated notable success in self-supervised representation learning across various tasks. However, existing CLIP-like approaches often demand extensive GPU resources and prolonged training times due to the considerable size of the model and dataset, making them poorly suited for medical applications, where large datasets are not always available. Meanwhile, the language model prompts are mainly manually derived from labels tied to images, potentially overlooking the richness of information within training samples. We introduce a novel language-image Contrastive Learning method with an Efficient large language model and prompt Fine-Tuning (CLEFT) that harnesses the strengths of extensive pre-trained language and visual models. Furthermore, we present an efficient strategy for learning context-based prompts that mitigates the gap between informative clinical diagnostic data and simple class labels. Our method demonstrates state-of-the-art performance on multiple chest X-ray and mammography datasets compared with various baselines. Compared with the current BERT encoder, the proposed parameter-efficient framework reduces the total trainable model size by 39% and shrinks the trainable language model to only 4%.
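CLEFT builds on CLIP-style contrastive pre-training; the underlying symmetric InfoNCE objective, which pulls matched image/text pairs together and pushes mismatched pairs apart, can be sketched as follows. This is the standard CLIP loss in numpy, not CLEFT's parameter-efficient variant; the LLM encoder and prompt fine-tuning are omitted.

```python
import numpy as np

def clip_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE over a batch of paired embeddings: the
    matched pairs sit on the diagonal of the similarity matrix."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature          # (B, B) cosine similarities
    labels = np.arange(len(img))

    def ce(l):  # cross-entropy with the diagonal as the target class
        log_p = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_p[labels, labels].mean()

    return (ce(logits) + ce(logits.T)) / 2      # image->text and text->image
```

When image and text embeddings are already perfectly aligned and mutually orthogonal, the loss is close to zero, which is the regime contrastive training drives the encoders toward.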

DOI: 10.1007/978-3-031-72390-2_44 | Volume 15012, pp. 465-475 | Published: 2024-10-01 | Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11709740/pdf/
Citations: 0
TLRN: Temporal Latent Residual Networks For Large Deformation Image Registration.
Nian Wu, Jiarui Xing, Miaomiao Zhang

This paper presents a novel approach, termed Temporal Latent Residual Network (TLRN), to predict a sequence of deformation fields in time-series image registration. The challenge of registering time-series images often lies in the occurrence of large motions, especially when images differ significantly from a reference (e.g., the start of a cardiac cycle compared to the peak stretching phase). To achieve accurate and robust registration results, we leverage the nature of motion continuity and exploit the temporal smoothness in consecutive image frames. Our proposed TLRN highlights a temporal residual network with residual blocks carefully designed in latent deformation spaces, which are parameterized by time-sequential initial velocity fields. We treat a sequence of residual blocks over time as a dynamic training system, where each block is designed to learn the residual function between desired deformation features and the current input accumulated from previous time frames. We validate the effectiveness of TLRN on both synthetic data and real-world cine cardiac magnetic resonance (CMR) image videos. Our experimental results show that TLRN achieves substantially improved registration accuracy compared to the state-of-the-art. Our code is publicly available at https://github.com/nellie689/TLRN.
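The idea of residual blocks operating in a latent (velocity) space — each time step adding a learned correction on top of the state accumulated from previous frames — can be caricatured with a toy linear residual map. Purely illustrative under invented names (`residual_block`, `rollout`, weight `W`); TLRN's actual blocks, parameterization, and registration losses are far richer.

```python
import numpy as np

def residual_block(v, feat, W):
    """One temporal residual step in latent velocity space:
    v_next = v + f([v, feat]); f is a toy tanh-linear map here."""
    return v + np.tanh(np.concatenate([v, feat]) @ W)

def rollout(v0, feats, W):
    """Accumulate residual corrections across the time sequence,
    returning the latent state after each frame."""
    states, v = [], v0
    for f in feats:
        v = residual_block(v, f, W)
        states.append(v)
    return np.stack(states)
```

Because each block only has to model the residual relative to the accumulated state, large end-to-end deformations decompose into a chain of small per-frame corrections.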

DOI: 10.1007/978-3-031-72069-7_68 | Volume 15002, pp. 728-738 | Published: 2024-10-01 | Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11929566/pdf/
Citations: 0
HATs: Hierarchical Adaptive Taxonomy Segmentation for Panoramic Pathology Image Analysis.
Ruining Deng, Quan Liu, Can Cui, Tianyuan Yao, Juming Xiong, Shunxing Bao, Hao Li, Mengmeng Yin, Yu Wang, Shilin Zhao, Yucheng Tang, Haichun Yang, Yuankai Huo

Panoramic image segmentation in computational pathology presents a remarkable challenge due to the morphologically complex and variably scaled anatomy. For instance, the intricate organization in kidney pathology spans multiple layers, from regions like the cortex and medulla to functional units such as glomeruli, tubules, and vessels, down to various cell types. In this paper, we propose a novel Hierarchical Adaptive Taxonomy Segmentation (HATs) method, which is designed to thoroughly segment panoramic views of kidney structures by leveraging detailed anatomical insights. Our approach entails (1) the innovative HATs technique, which translates spatial relationships among 15 distinct object classes into a versatile "plug-and-play" loss function that spans regions, functional units, and cells, (2) the incorporation of anatomical hierarchies and scale considerations into a unified simple matrix representation for all panoramic entities, and (3) the adoption of the latest AI foundation model (EfficientSAM) as a feature extraction tool to boost the model's adaptability, while eliminating the need for the manual prompt generation required by the conventional segment anything model (SAM). Experimental findings demonstrate that the HATs method offers an efficient and effective strategy for integrating clinical insights and imaging precedents into a unified segmentation model across more than 15 categories. The official implementation is publicly available at https://github.com/hrlblab/HATs.
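A "plug-and-play" hierarchy-aware loss can be built from a class-to-class taxonomy distance matrix, so that confusing two sibling functional units costs less than confusing a cell type with a region. The sketch below derives tree distances from a parent table and scores a prediction by its expected distance to the target; it is a generic construction for intuition, not the paper's exact loss, and all names are hypothetical.

```python
import numpy as np

def taxonomy_distance(parents):
    """Pairwise tree distance between classes, given parents[i] =
    index of class i's parent (None for the root)."""
    n = len(parents)

    def ancestors(i):
        path = [i]
        while parents[i] is not None:
            i = parents[i]
            path.append(i)
        return path

    D = np.zeros((n, n))
    for a in range(n):
        for b in range(n):
            pa, pb = ancestors(a), ancestors(b)
            shared = len(set(pa) & set(pb))
            D[a, b] = (len(pa) - shared) + (len(pb) - shared)
    return D

def hierarchical_loss(probs, target, D):
    """Expected taxonomy distance between the predicted class
    distribution and the target class."""
    return float(probs @ D[:, target])
```

With the tiny taxonomy `parents = [None, 0, 0]` (one region with two child units), a prediction concentrated on the correct child scores 0, while putting all mass on its sibling scores the tree distance 2.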

DOI: 10.1007/978-3-031-72083-3_15 | Volume 15004, pp. 155-166 | Published: 2024-10-01 | Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11927787/pdf/
Citations: 0
SelfReg-UNet: Self-Regularized UNet for Medical Image Segmentation.
Wenhui Zhu, Xiwen Chen, Peijie Qiu, Mohammad Farazi, Aristeidis Sotiras, Abolfazl Razi, Yalin Wang

Since its introduction, UNet has led a variety of medical image segmentation tasks. Although numerous follow-up studies have been dedicated to improving the performance of the standard UNet, few have conducted in-depth analyses of the patterns UNet learns in medical image segmentation. In this paper, we explore the patterns learned in a UNet and observe two important factors that potentially affect its performance: (i) irrelevant features learned due to asymmetric supervision; (ii) feature redundancy in the feature map. To this end, we propose to balance the supervision between encoder and decoder and to reduce the redundant information in the UNet. Specifically, we use the feature map that contains the most semantic information (i.e., the last layer of the decoder) to provide additional supervision to the other blocks and to reduce feature redundancy by leveraging feature distillation. The proposed method can be easily integrated into existing UNet architectures in a plug-and-play fashion with negligible computational cost. The experimental results suggest that the proposed method consistently improves the performance of standard UNets on four medical image segmentation datasets. The code is available at https://github.com/ChongQingNoSubway/SelfReg-UNet.
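The distillation step — letting the most semantic map (last decoder layer) supervise earlier blocks — can be sketched as an MSE between channel-pooled, resized feature maps. A minimal dependency-free stand-in under generic assumptions; the paper's actual distillation and redundancy-reduction details differ, and the helper names are invented.

```python
import numpy as np

def channel_pool(feat):
    """Average over channels so maps with different widths become
    comparable (C, H, W) -> (H, W)."""
    return feat.mean(axis=0)

def resize_nn(m, shape):
    """Nearest-neighbour resize of a 2-D map to a target shape."""
    ys = np.arange(shape[0]) * m.shape[0] // shape[0]
    xs = np.arange(shape[1]) * m.shape[1] // shape[1]
    return m[np.ix_(ys, xs)]

def distill_loss(block_feats, last_feat):
    """MSE between each intermediate block's pooled map and the
    last-decoder map, after resizing to the target grid."""
    tgt = channel_pool(last_feat)
    losses = [np.mean((resize_nn(channel_pool(f), tgt.shape) - tgt) ** 2)
              for f in block_feats]
    return float(np.mean(losses))
```

The loss vanishes when an intermediate block already encodes the same spatial pattern as the last decoder layer, which is the "balanced supervision" target.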

DOI: 10.1007/978-3-031-72111-3_56 | Volume 15008, pp. 601-611 | Published: 2024-10-01 | Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12408486/pdf/
Citations: 0
Attention-Enhanced Fusion of Structural and Functional MRI for Analyzing HIV-Associated Asymptomatic Neurocognitive Impairment.
Yuqi Fang, Wei Wang, Qianqian Wang, Hong-Jun Li, Mingxia Liu

Asymptomatic neurocognitive impairment (ANI) is a predominant form of cognitive impairment among individuals infected with human immunodeficiency virus (HIV). The current diagnostic criteria for ANI primarily rely on subjective clinical assessments, possibly leading to different interpretations among clinicians. Some recent studies leverage structural or functional MRI containing objective biomarkers for ANI analysis, offering clinicians companion diagnostic tools. However, they mainly utilize a single imaging modality, neglecting complementary information provided by structural and functional MRI. To this end, we propose an attention-enhanced structural and functional MRI fusion (ASFF) framework for HIV-associated ANI analysis. Specifically, the ASFF first extracts data-driven and human-engineered features from structural MRI, and also captures functional MRI features via a graph isomorphism network and Transformer. A mutual cross-attention fusion module is then designed to model the underlying relationship between structural and functional MRI. Additionally, a semantic inter-modality constraint is introduced to encourage consistency of multimodal features, facilitating effective feature fusion. Experimental results on 137 subjects from an HIV-associated ANI dataset with T1-weighted MRI and resting-state functional MRI show the effectiveness of our ASFF in ANI identification. Furthermore, our method can identify both modality-shared and modality-specific brain regions, which may advance our understanding of the structural and functional pathology underlying ANI.
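The mutual cross-attention fusion — each modality querying the other before the attended features are merged — can be sketched as follows. Single-head attention with no learned projections, so this only illustrates the data flow of the module, not the ASFF architecture itself; all names are illustrative.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_attention(q_tokens, kv_tokens):
    """q_tokens attend over kv_tokens; both are (tokens, D)."""
    w = softmax(q_tokens @ kv_tokens.T / np.sqrt(q_tokens.shape[-1]))
    return w @ kv_tokens

def mutual_fusion(struct_feat, func_feat):
    """Structural tokens query functional ones and vice versa;
    the residually-updated streams are stacked into one fused set."""
    s2f = cross_attention(struct_feat, func_feat)
    f2s = cross_attention(func_feat, struct_feat)
    return np.concatenate([struct_feat + s2f, func_feat + f2s], axis=0)
```

The two modalities may contribute different numbers of tokens; only the feature dimension must agree for the cross-attention products to be defined.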

DOI: 10.1007/978-3-031-72120-5_11 | Volume 15011, pp. 113-123 | Published: 2024-10-01 | Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11512738/pdf/
Citations: 0
Patient-Specific Real-Time Segmentation in Trackerless Brain Ultrasound.
Reuben Dorent, Erickson Torio, Nazim Haouchine, Colin Galvin, Sarah Frisken, Alexandra Golby, Tina Kapur, William Wells

Intraoperative ultrasound (iUS) imaging has the potential to improve surgical outcomes in brain surgery. However, its interpretation is challenging, even for expert neurosurgeons. In this work, we designed the first patient-specific framework that performs brain tumor segmentation in trackerless iUS. To disambiguate ultrasound imaging and adapt to the neurosurgeon's surgical objective, a patient-specific real-time network is trained using synthetic ultrasound data generated by simulating virtual iUS sweep acquisitions in pre-operative MR data. Extensive experiments performed on real ultrasound data demonstrate the effectiveness of the proposed approach, which adapts to the surgeon's definition of surgical targets and outperforms non-patient-specific models, neurosurgeon experts, and high-end tracking systems. Our code is available at: https://github.com/ReubenDo/MHVAE-Seg.
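Training on synthetic (image, label) pairs generated from one patient's anatomy can be caricatured as follows: assign each anatomical label a random echogenicity and add speckle-like noise, yielding unlimited labeled data for that patient. This is a toy stand-in; the paper simulates virtual iUS sweep acquisitions from pre-operative MR, which is far more involved, and all names here are invented.

```python
import numpy as np

def synth_ultrasound(label_map, rng):
    """Toy synthetic-image generator: each label gets a random mean
    intensity, modulated by Rayleigh-distributed speckle-like noise."""
    means = rng.uniform(0.2, 0.9, size=label_map.max() + 1)
    img = means[label_map]
    img *= rng.rayleigh(scale=1.0, size=img.shape) * 0.5 + 0.5
    return np.clip(img, 0.0, 1.0)

def make_training_set(label_map, n, seed=0):
    """Unlimited (image, label) pairs from one patient-specific map."""
    rng = np.random.default_rng(seed)
    return [(synth_ultrasound(label_map, rng), label_map) for _ in range(n)]
```

Because every sample reuses the same patient-specific label map with fresh appearance randomization, a small network can specialize to that patient while staying robust to ultrasound appearance variation.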

DOI: 10.1007/978-3-031-72089-5_45. MICCAI 2024, vol. 15006, pp. 477-487. Published 2024-10-01. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12714359/pdf/
Citations: 0
Consecutive-Contrastive Spherical U-Net: Enhancing Reliability of Individualized Functional Brain Parcellation for Short-Duration fMRI Scans.
Dan Hu, Kangfu Han, Jiale Cheng, Gang Li

Individualized brain parcellations derived from functional MRI (fMRI) are essential for discerning the unique functional patterns of individuals, facilitating personalized diagnoses and treatments. Unfortunately, because fMRI signals are inherently noisy, establishing reliable individualized parcellations typically necessitates a long-duration fMRI scan (> 25 min), posing a major challenge and excluding numerous short-duration fMRI scans from individualized studies. To address this issue, we develop a novel Consecutive-Contrastive Spherical U-net (CC-SUnet) that predicts reliable individualized brain parcellations from short-duration fMRI data, greatly expanding their practical applicability. Specifically, 1) the widely used functional diffusion map (DM), obtained from functional connectivity, is carefully selected as the predictive feature for its advantage in tracing the transitions between regions while reducing noise. To ensure a robust depiction of the brain network, we propose a dual-task model that predicts the DM and the cortical parcellation simultaneously, fully utilizing their reciprocal relationship. 2) By constructing a stepwise dataset that captures the gradual changes of the DM over increasing scan durations, a consecutive prediction framework is designed to extend reliable prediction gradually from short to long durations. 3) A stepwise-denoising-prediction module is further proposed: the noise representations are separated and replaced by the latent representations of a group-level diffusion map, providing informative guidance and denoising concurrently. 4) Additionally, an N-pair contrastive loss is introduced to strengthen the discriminability of the individualized parcellations. Extensive experimental results demonstrate the superiority of the proposed CC-SUnet in enhancing the reliability of individualized parcellation from short-duration fMRI data, thereby significantly boosting its utility in individualized studies.
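The N-pair contrastive loss mentioned in point 4) has a standard multi-class form; the sketch below is a minimal numpy version of that general loss, not the paper's exact training objective (the batch layout and embeddings are illustrative):

```python
import numpy as np

def n_pair_loss(anchors, positives):
    """Multi-class N-pair loss (Sohn, 2016): each anchor must score its own
    positive above the positives of every other pair in the batch.
    anchors, positives: (N, d) embeddings; row i of each is a matched pair."""
    logits = anchors @ positives.T                        # (N, N) similarities
    logits = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    log_softmax = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.diag(log_softmax).mean()                   # diagonal = matched pairs

# 4 mutually orthogonal anchors, each serving as its own positive:
loss = n_pair_loss(np.eye(4), np.eye(4))
print(round(loss, 4))  # → 0.7437  (= ln(e + 3) - 1)
```

With perfectly matched orthogonal pairs the loss reduces to ln(e + N - 1) - 1, which shrinks as the matched similarity grows relative to the mismatched ones — the discriminability the abstract refers to.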

DOI: 10.1007/978-3-031-72069-7_9. MICCAI 2024, vol. 15002, pp. 88-98. Published 2024-10-01. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12716869/pdf/
Citations: 0
CryoSAM: Training-free CryoET Tomogram Segmentation with Foundation Models.
Yizhou Zhao, Hengwei Bian, Michael Mu, Mostofa R Uddin, Zhenyang Li, Xiang Li, Tianyang Wang, Min Xu

Cryogenic Electron Tomography (CryoET) is a useful imaging technology in structural biology whose adoption is hindered by its need for manual annotations, especially in particle picking. Recent works have endeavored to remedy this issue with few-shot or contrastive learning techniques, but supervised training remains unavoidable for them. We instead leverage the power of existing 2D foundation models and present a novel, training-free framework, CryoSAM. In addition to prompt-based single-particle instance segmentation, our approach can automatically search for similar features, enabling full tomogram semantic segmentation with only one prompt. CryoSAM is composed of two major parts: 1) a prompt-based 3D segmentation system that uses prompts to complete single-particle instance segmentation recursively with Cross-Plane Self-Prompting, and 2) a Hierarchical Feature Matching mechanism that efficiently matches relevant features with extracted tomogram features. Together they enable the segmentation of all particles of one category with just one particle-specific prompt. Our experiments show that CryoSAM outperforms existing works by a significant margin and requires even fewer annotations in particle picking. Further visualizations demonstrate its ability to handle full tomogram segmentation for various subcellular structures. Our code is available at: https://github.com/xulabs/aitom.
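As a rough intuition for the Hierarchical Feature Matching step, the sketch below performs flat (single-level) cosine-similarity matching against one prompted descriptor; the feature vectors, shapes, and threshold are all illustrative assumptions, and the actual method matches across multiple feature levels:

```python
import numpy as np

def match_similar(features, prompt_idx, threshold=0.9):
    """Flat stand-in for hierarchical feature matching: return indices of
    feature vectors whose cosine similarity to the prompted one reaches
    `threshold`.  features: (N, d) per-location descriptors."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    return np.flatnonzero(f @ f[prompt_idx] >= threshold)

rng = np.random.default_rng(1)
base = rng.normal(size=16)
# three noisy copies of one "particle" descriptor plus five distractors
feats = np.stack([base + 0.05 * rng.normal(size=16) for _ in range(3)]
                 + [rng.normal(size=16) for _ in range(5)])
hits = match_similar(feats, prompt_idx=0)
print(hits)
```

One prompt thus propagates to every location with a sufficiently similar descriptor, which is how a single particle-specific prompt can yield a full-category segmentation.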

DOI: 10.1007/978-3-031-72111-3_12. MICCAI 2024, vol. 15008, pp. 124-134. Published 2024-10-01. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12923679/pdf/
Citations: 0
Journal
Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention